
Common Technical Interview Questions and Answers (updated May 29, 2021)

Question 41: Predictive models often produce conflicting results, Rathburn said. What did he recommend doing when that happens?

A. Scrapping the models and starting over by developing new ones
B. Implementing a process to identify and reduce false positives in the models
C. Basing business decisions on one set of findings to test a model’s effectiveness

Correct Answer:
B. Implementing a process to identify and reduce false positives in the models

Question 42: Which of the following answers best reflects Rathburn’s view of the potential impact of adding more data for analytical uses — for example, in “big data” analytics environments?

A. Better data ‘resolution’ due to the use of more data fields and dimensions
B. Increased analytical precision from having more records to base findings on
C. Reduced analytical accuracy because of greater data quality problems
D. All of the above

Correct Answer:
D. All of the above

Question 43: How many data records does Rathburn recommend be used in developing predictive models?

A. 2,500 to 5,000
B. 5,000 to 10,000
C. 10,000 to 20,000
D. As many as possible

Correct Answer:
A. 2,500 to 5,000

Question 44: Which of the following did Rathburn NOT list as an essential element of social media analytics?

A. The volume of social media discussions about your company
B. How the online conversation compares to what’s being said about your rivals
C. The number of ‘like’ ratings for posts on your corporate Facebook page
D. The level of engagement by customers in social media conversations

Correct Answer:
C. The number of ‘like’ ratings for posts on your corporate Facebook page

Question 45: Censys was created at the University of Michigan by the team of researchers who also developed what wide-scale internet-scanning tool?

A. Nmap
B. Zmap
C. Nikto
D. Dirbuster

Correct Answer:
B. Zmap
Explanation:
The developers of Censys are also responsible for Zmap, a wide-scale internet port scanner. Nmap was originally written by Gordon Lyon and is now maintained in its GitHub repository, where public users can submit code and contribute to its further development. Nikto was developed by Chris Sullo and David Lodge; more information is available on the developers’ website, and the tool itself is hosted in its GitHub repository. Dirbuster was originally developed as part of the OWASP Dirbuster project, which is now inactive; its functionality has since been absorbed by the OWASP ZAP (Zed Attack Proxy) team, which forked Dirbuster into an extension for the ZAP project. Because these tools were all developed by teams other than the one responsible for Censys, these answers are incorrect.

Question 46: Domain registration information returned on a Whois search does not include which of the following?

A. Domain administrator e-mail
B. Domain administrator fax
C. Domain administrator organization
D. Domain administrator GPS coordinates

Correct Answer:
D. Domain administrator GPS coordinates
Explanation:
Although Whois domain registration information can be quite detailed, the most one can expect to find concerning geographic location is a physical address. GPS coordinates are not returned by a Whois query, making this the correct answer. Note also that this information may be protected by a Whois privacy (guard) service; for numerous reasons, site administrators may not want their names, e-mail addresses, and home addresses broadcast across the internet. To accommodate this, domain registrars often substitute their own contact details in a domain’s Whois record, along with a simple e-mail address to contact in the case of abuse or misuse of a domain registered on behalf of a client. This still allows action to be taken if a site with privatized Whois data is serving malware, engaging in copyright infringement, or otherwise in a situation where there is a legal or ethical duty to shut it down or require its alteration.
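
To make this concrete, the sketch below performs a raw Whois lookup over TCP port 43 (the Whois protocol defined in RFC 3912). It assumes whois.verisign-grs.com as the registry server for .com domains; the exact fields returned (registrant organization, e-mail, fax, postal address, but never GPS coordinates) vary by registrar and by any privacy service in place.

```python
import socket

def whois_query(domain: str, server: str = "whois.verisign-grs.com") -> str:
    """Send a raw Whois query (RFC 3912) and return the text response."""
    with socket.create_connection((server, 43), timeout=10) as sock:
        # The protocol is simply the query string followed by CRLF.
        sock.sendall((domain + "\r\n").encode("ascii"))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

if __name__ == "__main__":
    # Example lookup; the record contains postal addresses at most,
    # never GPS coordinates, and may be masked by a privacy service.
    print(whois_query("example.com"))
```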

Question 47: Open-source intelligence (OSINT) collection frameworks are used to effectively manage sources of collected information. Which of the following best describes open-source intelligence?

A. Company documentation labeled “Confidential” on an internal company storage share requiring authentication
B. Press release drafts found on an undocumented web page inside a company’s intranet
C. Any information or data obtained via publicly available sources that is used to aid or drive decision-making processes
D. Information gained by source code analysis of free and open-source software (FOSS)

Correct Answer:
C. Any information or data obtained via publicly available sources that is used to aid or drive decision-making processes
Explanation:
Open-source intelligence is any information or data obtained via publicly available sources that is used to aid or drive decision-making processes.

The first two options are incorrect because documentation labeled “Confidential” on a network share requiring authentication and pages locked behind a company intranet are clearly intended only for individuals within the organization. As such, they are examples of information that would not be discoverable via open-source collection methods. The last option is incorrect because the term “open source” here is a red herring: it refers to software licensing rather than information gathering. Be wary of such misleading answers during the exam.

Question 48: Which method of collecting open-source intelligence consists of the collection of published documents, such as Microsoft Office or PDF files, and parsing the information hidden within to reveal usernames, e-mail addresses, or other sensitive data?

A. Metadata analysis
B. File scraping
C. File mining
D. File excavation

Correct Answer:
A. Metadata analysis
Explanation:
Metadata analysis is the term for collecting open-source intelligence by parsing published documents for information hidden within to reveal usernames, e-mail addresses, or other sensitive data.

File scraping, file mining, and file excavation are all meaningless phrases meant to sound like information security terminology, without having a specific meaning within that context. Be wary of answers in this vein during the exam.
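
As an illustration, the sketch below pulls the author and last-modified-by fields out of a .docx file using only the Python standard library; a .docx is a ZIP archive whose docProps/core.xml part carries this metadata. The file name example.docx is a hypothetical placeholder, and tools such as FOCA or ExifTool automate this kind of extraction across many document types.

```python
import zipfile
import xml.etree.ElementTree as ET

# Namespaces used in docProps/core.xml of an Office Open XML (.docx) file.
NS = {
    "cp": "http://schemas.openxmlformats.org/package/2006/metadata/core-properties",
    "dc": "http://purl.org/dc/elements/1.1/",
}

def docx_metadata(path: str) -> dict:
    """Return creator and last-modified-by metadata from a .docx file."""
    with zipfile.ZipFile(path) as archive:
        core = ET.fromstring(archive.read("docProps/core.xml"))
    creator = core.findtext("dc:creator", default="", namespaces=NS)
    modified_by = core.findtext("cp:lastModifiedBy", default="", namespaces=NS)
    return {"creator": creator, "last_modified_by": modified_by}

if __name__ == "__main__":
    # Hypothetical document collected from a public website.
    print(docx_metadata("example.docx"))
```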

Question 49: Which of the following search engines is not used by FOCA when searching for documents?

A. Bing
B. Google
C. Yahoo
D. DuckDuckGo

Correct Answer:
C. Yahoo
Explanation:
Yahoo is not used by FOCA when it searches for documents, making this the correct answer.

Bing, Google, and DuckDuckGo are all used by FOCA when it searches for documents.

Question 50: What is the process by which large data sets are analyzed to reveal patterns or hidden anomalies?

A. Passive information gathering
B. Footprinting
C. Active information gathering
D. Data mining

Correct Answer:
D. Data mining
Explanation:
Data mining is the process by which large data sets are analyzed to reveal patterns or hidden anomalies.

Passive and active information gathering are incorrect because they are methods of intelligence collection, not analysis. Footprinting is incorrect because it is the process of conducting reconnaissance against computers and information systems during a penetration test, with the aim of finding the most efficient methods of attack that will meet the goals of the assessment.
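
As a minimal illustration of the idea, the sketch below flags hidden anomalies in a data set by computing a z-score for each value with the Python standard library; the 2.5 cutoff is an arbitrary assumption chosen for this example, not a standard threshold.

```python
import statistics

def find_anomalies(values: list[float], threshold: float = 2.5) -> list[float]:
    """Return values whose z-score exceeds the threshold (assumed cutoff)."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    if stdev == 0:
        return []
    return [v for v in values if abs((v - mean) / stdev) > threshold]

if __name__ == "__main__":
    # Mostly routine measurements with one hidden anomaly.
    readings = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 42.0, 10.1, 9.7, 10.0]
    print(find_anomalies(readings))  # -> [42.0]
```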
