ACA-BIGDATA1 Exam Questions To Pass ACA Big Data Certification Exam

Do you want to pass the ACA Big Data Certification Exam? PassQuestion provides comprehensive ACA-BIGDATA1 exam questions to help candidates become familiar with the kinds of questions they will need to answer during the real exam. It is usually apprehension about unknown questions that makes candidates nervous before an exam. With our ACA-BIGDATA1 exam questions, you can tackle the real exam with confidence.

Practice Online ACA Big Data Certification Exam ACA-BIGDATA1 Free Questions

1. A business flow in DataWorks integrates different node task types by business type; such a structure makes business code development easier.

Which of the following descriptions about the node types is INCORRECT?

2. DataV is a powerful yet accessible data visualization tool that features geographic information systems, allowing for rapid interpretation of data to understand relationships, patterns, and trends. When a DataV screen is ready, the work can be embedded into the enterprise's existing portal through ______.

3. DataWorks can be used to develop and configure data sync tasks.

Which of the following statements are correct? (Number of correct answers: 3)

4. You are working on a project where you need to chain together MapReduce and Hive jobs. You also need the ability to use forks, decision points, and path joins.

Which ecosystem project should you use to perform these actions?

5. MaxCompute supports two kinds of charging methods: Pay-As-You-Go and Subscription (CU cost). Pay-As-You-Go means that each task is billed according to the size of the input data it processes. Under this charging method, the billing items do not include charges due to ______.

6. In MaxCompute, if an error occurs during Tunnel transmission due to a network or Tunnel service problem, the user can resume the last upload operation with the command tunnel resume;

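For reference, here is a minimal sketch of the behaviour described above in the MaxCompute console (odpscmd); the project, table, and partition names are placeholders, not taken from the exam.

    -- Start a Tunnel upload of a local file into a table partition (names are illustrative).
    tunnel upload log.txt my_project.t_log/dt="20180101";
    -- If the transfer is interrupted by a network or Tunnel service error,
    -- the last session can be resumed from its breakpoint:
    tunnel resume;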

7. You are working on a project where you need to chain together MapReduce and Hive jobs. You also need the ability to use forks, decision points, and path joins.

Which ecosystem project should you use to perform these actions?

8. Where is the metadata (e.g., table schemas) stored in Hive?

9. Scenario: Jack is the administrator of project prj1. The project involves a large volume of sensitive data, such as bank accounts and medical records, and Jack wants to protect the data properly.

Which of the following statements is necessary?

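As background for this scenario (not the keyed answer, since the answer choices are not reproduced on this page), MaxCompute exposes project-level security switches that are commonly discussed in data-protection questions. A minimal sketch, assuming the commands are run in odpscmd by the project administrator:

    use prj1;
    -- Data protection: prevents data in prj1 from flowing out to other projects.
    set ProjectProtection=true;
    -- Column-level mandatory access control for sensitive fields.
    set LabelSecurity=true;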

10. Resources are a particular concept of MaxCompute. If you want to use a user-defined function (UDF) or MapReduce, resources are needed. For example, after you have prepared a UDF, you must upload the compiled JAR package to MaxCompute as a resource.

Which of the following objects are MaxCompute resources? (Number of correct answers: 4)

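For context on how resources are used, here is a minimal odpscmd sketch of registering a UDF; the file, class, and alias names are invented for illustration.

    -- Upload a compiled JAR package as a MaxCompute resource.
    add jar my_udf.jar;
    -- Create a function that references the JAR resource.
    create function my_lower as 'com.example.udf.MyLower' using 'my_udf.jar';
    -- Other resource types, such as files and tables, are added in a similar way.
    add file config.txt;
    add table my_dim_table as my_dim_res;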

11. Which of the following is not a proper way to grant permission on an L4 MaxCompute table to a user? (L4 is a level in MaxCompute label-based security (LabelSecurity), a mandatory access control (MAC) policy at the project space level that allows project administrators to control user access to column-level sensitive data with improved flexibility.)

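For reference, label-based authorization in MaxCompute is expressed with statements such as the following sketch; the table, column, and account names are placeholders.

    -- Enable column-level mandatory access control in the project.
    set LabelSecurity=true;
    -- Mark a sensitive column as level 4 data.
    set label 4 to table user_info(bank_account);
    -- Raise a user's access level to L4, or grant label-level access on the table
    -- explicitly with an expiration in days.
    set label 4 to user aliyun$alice@example.com;
    grant label 4 on table user_info to user aliyun$alice@example.com with exp 30;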

12. Data sync task development in DataWorks provides both a wizard mode and a script mode.

13. Alibaba Cloud Quick BI reporting tools support a variety of data sources, making it easy for users to analyze and present data from different sources. ______ is not supported as a data source yet.

14. In order to improve processing efficiency when using MaxCompute, you can specify partitions when creating a table; that is, several fields in the table are designated as partition columns.

Which of the following descriptions about MaxCompute partitioned tables are correct? (Number of correct answers: 4)

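A short MaxCompute SQL sketch of the partitioning concept described above; the table and column names are illustrative.

    -- Create a table partitioned by sale date.
    create table if not exists sale_detail (
        shop_name   string,
        total_price double
    ) partitioned by (sale_date string);
    -- Add a partition, then query it; filtering on the partition column
    -- limits the scan to matching partitions only.
    alter table sale_detail add if not exists partition (sale_date='20180101');
    select shop_name, total_price from sale_detail where sale_date='20180101';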

15. MaxCompute takes the project as its charged unit. The bill is calculated according to three aspects: storage usage, computing resources, and data downloads, respectively. You pay for compute and storage resources by the day, with no long-term commitments.

16. The Machine Learning Platform for Artificial Intelligence (PAI) node is one of the node types in a DataWorks business flow. It is used to call tasks created on PAI and schedule production activities based on the node configuration. PAI nodes can be added to DataWorks only _________.

17. DataService Studio in DataWorks aims to build a data service bus to help enterprises centrally manage private and public APIs. DataService Studio allows you to quickly create APIs based on data tables and register existing APIs with the DataService Studio platform for centralized management and release.

Which of the following descriptions about DataService Studio in DataWorks is INCORRECT?

18. Your company stores user profile records in an OLTP database. You want to join these records with web server logs you have already ingested into the Hadoop file system.

What is the best way to obtain and ingest these user records?

19. A log table named log in MaxCompute is a partitioned table, and the partition key is dt. A new partition is created daily to store that day's new data. Now we have one month's data, from dt='20180101' to dt='20180131', and we may use ________ to delete the data of 20180101.

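As general background on partition maintenance (shown purely as a syntax sketch, not as the keyed answer choice), MaxCompute SQL removes a partition and its data like this:

    -- Drop the partition that stores the data of 20180101.
    alter table log drop if exists partition (dt='20180101');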

20. There are multiple clients that can connect to MaxCompute. Which of the following is the easiest way to configure workflows and scheduling for MaxCompute tasks?

21. There are three types of node instances in an E-MapReduce cluster: master, core, and _____.

22. DataWorks can be used to create all types of tasks and configure scheduling cycles as needed. The supported granularity levels of scheduling cycles include days, weeks, months, hours, minutes and seconds.

23. We use the MaxCompute Tunnel command to upload the log.txt file to the t_log table, where t_log is a partitioned table whose partition columns are (p1 string, p2 string).

Which of the following commands is correct?

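For orientation only (the original answer choices are not reproduced here), the documented shape of a Tunnel upload into a partitioned table looks like the following sketch; the partition values are placeholders.

    -- The target partition is specified after the table name and must exist
    -- (or be auto-created with the -acp option).
    alter table t_log add if not exists partition (p1='b1', p2='b2');
    tunnel upload log.txt t_log/p1="b1",p2="b2";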

24. In MaxCompute, you can use the Tunnel command-line tool for data upload and download.

Which of the following descriptions of the Tunnel command is NOT correct?

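For orientation, the Tunnel command family includes the subcommands sketched below; the file, table, and partition names are placeholders.

    tunnel upload   log.txt t_log/dt="20180101";      -- upload local data into a table partition
    tunnel download t_log/dt="20180101" log_out.txt;  -- download a partition to a local file
    tunnel resume;                                    -- resume the last failed upload session
    tunnel show history;                              -- list recent Tunnel sessions
    tunnel purge;                                     -- clear the local session directory
    tunnel help upload;                               -- show help for a subcommand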

25. If a DataWorks task node is deleted from the recycle bin, it can still be restored.
