IT Managed Services Provider Resource Recommendation Update on May 29, 2021

Knowledge of terms to know

What is White Hat Hacker?

A white hat hacker is a computer security specialist who breaks into protected systems and networks to test and assess their security. White hat hackers use their skills to improve security by exposing vulnerabilities before malicious hackers (known as black hat hackers) can detect and exploit them. Although the methods used are similar, if not identical, to those employed by malicious hackers, white hat hackers have permission to employ them against the organization that has hired them.

White hat hackers are usually seen as hackers who use their skills to benefit society. They may be reformed black hat hackers or they may simply be well-versed in the methods and techniques used by hackers. An organization can hire these consultants to do tests and implement best practices that make them less vulnerable to malicious hacking attempts in the future.

For the most part, the term is synonymous with “ethical hacker.” The term comes from old Western movies where the cliché was for the “good guy” to wear a white cowboy hat. Of course, the “bad guys” always seemed to wear a black hat.

What is Data Retention Policy?

A data retention policy, or records retention policy, is an organization’s established protocol for retaining information for operational or regulatory compliance needs.

When writing a data retention policy, you must determine how to: organize information so it can be searched and accessed later, and dispose of information that’s no longer needed. Some organizations find it helpful to use a data retention policy template that provides a framework to follow when crafting the policy.

A comprehensive data retention policy outlines the business reasons for retaining specific data and what to do with it when targeted for disposal.

A data retention policy is part of an organization’s overall data management strategy. A policy is important because data can pile up dramatically, so it’s crucial to define how long an organization must hold on to specific data. An organization should only retain data for as long as it’s needed, whether that’s six months or six years. Retaining data longer than necessary takes up storage space and drives up costs unnecessarily.

“Creating a data retention policy is rarely a simple process and some organizations might find it better to outsource the policy creation and implementation process rather than doing it internally.” – Brien Posey

Related Terms: regulatory compliance, data management, object storage, records retention schedule, data archiving

What is Hybrid Workforce?

A hybrid workforce is a type of blended workforce comprising employees who work remotely and those who work from an office or central location. Employees can work from the places they prefer, whether that is a central location, such as a warehouse, factory or retail location, or a remote location, such as their home.

However, a hybrid workforce isn’t just about working from home or working from the office; rather, it’s about helping employees achieve a flexible work-life balance.

Hybrid workforces enable employees to work in a setting that’s most comfortable for them. If workers feel they are more productive in one location versus another, they can choose to work in that environment — or work in a combination of the two.

The hybrid workplace model has also put the health, safety and psychological needs of workers first by allowing for social distancing during the COVID-19 pandemic. A survey from Enterprise Technology Research (ETR) expected the number of employees working from home to double in 2021 and to remain elevated in the eventual post-pandemic landscape. Additionally, at its Directions 2021 virtual conference, IDC predicted that employees will have a choice about working in a hybrid workplace model, with about 33% of employees still having to work on site full time.

“Hybrid workplace models blend remote work with in-office work. Instead of structuring work around desks in a physical office space, hybrid work generally enables employees to structure work around their lives.” – Linda Rosencrance

Related Terms: human capital management (HCM), diversity, equity and inclusion (DEI), team collaboration tools, contingent workforce, workforce management

What can virtual machine use cases tell companies about systems?

Companies can use virtual machine use cases to learn how virtualization components work in a virtual architecture. Use cases can identify the role the virtual machine plays, as well as reveal details about resource allocation, system requirements and much more.

Experts define a use case as a description of how a component works in a system. Use cases written for others often detail the necessary steps and requirements for performing a particular task with a system component. For virtual machines, a use case could be written for specific tasks such as migration, backup activities or specific kinds of workload handling.

The use case will reveal the steps that need to be taken in order to have the virtual machine do a certain task effectively. Some use cases will be related to the idea of high availability – for instance, where a given virtual machine (or set of virtual machines) moves from one hosted location to another in order to deploy for high availability when a system is under pressure.

Some virtual machine use cases are written for fault tolerance, or for migration and other changes to the system. Virtual machine use cases may be written relative to changing applications in the system, revealing how the virtual machine or set of virtual machines supports that application’s performance. As a modern example, the use cases around virtual machines are informing professionals about the logistics of using cloud systems for disaster recovery, with advances in DRaaS (Disaster Recovery as a Service) and other related ideas.

Virtual machine use cases also provide a road map for teams who are looking to implement and deploy virtual machines in specific ways. Looking at virtual machines in the context of VM vs. container setups, the use of open source environments like Kubernetes, etc. is part of assessing VM use cases for insights.

Another category of research for virtual machine use cases has to do with cybersecurity. Assessing modern VM use cases, security professionals are figuring out how to ward off specific kinds of new hacking and malware threats. For example, professionals look at built-in utilities that could be compromised, and examine various types of stateless attacks, in order to use new virtual machine setups to tighten up virtualization security. Whether these are related to extensions, executables, permissions or the creation of protections for stateless protocols, the laboratory style of these design processes shows insiders how to batten down the hatches against evolving cybersecurity threats.

What is Whaling?

Whaling is a specific kind of malicious hacking within the more general category of phishing, which involves hunting for data that can be used by the hacker. In general, phishing efforts are focused on collecting personal data about users. In whaling, the targets are high-ranking bankers, executives or others in powerful positions or job titles.

Hackers who engage in whaling often describe these efforts as “reeling in a big fish,” applying a familiar metaphor to the process of scouring technologies for loopholes and opportunities for data theft. Those who are engaged in whaling may, for example, hack into specific networks where these powerful individuals work or store sensitive data. They may also set up keylogging or other malware on a work station associated with one of these executives. There are many ways that hackers can pursue whaling, leading C-level or top-level executives in business and government to stay vigilant about the possibility of cyber threats.

What is Hacktivism?

Hacktivism is the act of misusing a computer system or network for a socially or politically motivated reason. Individuals who perform hacktivism are known as hacktivists.

Hacktivism is meant to call the public’s attention to something the hacktivist believes is an important issue or cause, such as freedom of information, human rights or a religious point of view. Hacktivists express their support of a social cause or opposition to an organization by displaying messages or images on the website of the organization they believe is doing something wrong or whose message or activities they oppose.

Hacktivists are typically individuals, but there are hacktivist groups as well that operate in coordinated efforts. Anonymous and Lulz Security, also known as LulzSec, are examples. Most hacktivists work anonymously.

Hacktivists usually have altruistic or ideological motives, such as social justice or free speech. Their goal is to disrupt services and bring attention to a political or social cause. For example, hacktivists might leave a visible message on the homepage of a website that gets a lot of traffic or embodies a point of view that the individual or group opposes. Hacktivists often use denial-of-service or distributed DoS (DDoS) attacks where they overwhelm a website and disrupt traffic.

Hacktivists want others to notice their work to inspire action or change. They often focus on social change but also target government, business and other groups that they don’t agree with for their attacks. Sending a message and eliciting change trump profit motives for hacktivists.

“Like most weapons, hacking can be used for good or bad, to defend freedom or attack it.” – Dai Davis

Related Terms: distributed denial of service attack, Anonymous, whistleblower, doxing, WikiLeaks

What is Phishing?

Phishing is the fraudulent act of acquiring private and sensitive information, such as credit card numbers, personal identification and account usernames and passwords. Using a complex set of social engineering techniques and computer programming expertise, phishing websites lure email recipients and Web users into believing that a spoofed website is legitimate and genuine. In actuality, phishing victims later discover that their personal identity and other vital information have been stolen and exposed.

Similar to fishing in a lake or river, phishing is computer lingo for fishing over the Internet for personal information. The term was first used in 1996, when the first phishing act was recorded.

Phishing uses link manipulation, image filter evasion and website forgery to fool Web users into thinking that a spoofed website is genuine and legitimate. Once users enter vital information, they immediately become phishing victims.

Fortunately, phishing victimization is preventable. The following security precautions are recommended:

  • Use updated computer security tools, such as antivirus software, anti-spyware tools and a firewall.
  • Never open unknown or suspicious email attachments.
  • Never divulge personal information requested by email, such as your name or credit card number.
  • Double check the website URL for legitimacy by typing the actual address in your Web browser.
  • Verify the website’s phone number before placing any calls to the phone number provided via email.
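
The URL double-check recommended above can be automated. The following is a minimal sketch; the helper name and the claimed-domain comparison are illustrative assumptions, not part of any standard tool:

```python
from urllib.parse import urlparse

# Hypothetical helper (name and logic are illustrative): flag a link whose
# hostname is neither the claimed domain nor one of its subdomains.
def looks_spoofed(link, claimed_domain):
    host = urlparse(link).hostname or ""
    # Lookalikes such as "paypal.com.evil.net" end with "evil.net", not
    # ".paypal.com", so they are flagged as suspicious.
    return not (host == claimed_domain or host.endswith("." + claimed_domain))
```

For example, `looks_spoofed("https://paypal.com.evil.net/login", "paypal.com")` is flagged, while a genuine subdomain of the claimed site is not.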

What is Pen Testing (Penetration Testing)?

A penetration test, also called a pen test or ethical hacking, is a cybersecurity technique organizations use to identify, test and highlight vulnerabilities in their security posture. These penetration tests are often carried out by ethical hackers. These in-house employees or third parties mimic the strategies and actions of an attacker in order to evaluate the hackability of an organization’s computer systems, network or web applications. Organizations can also use pen testing to test their adherence to compliance regulations.

Ethical hackers are information technology (IT) experts who use hacking methods to help companies identify possible entry points into their infrastructure. By using different methodologies, tools and approaches, companies can perform simulated cyber attacks to test the strengths and weaknesses of their existing security systems. Penetration, in this case, refers to the degree to which a hypothetical threat actor, or hacker, can penetrate an organization’s cybersecurity measures and protocols.

There are three main pen testing strategies, each offering pen testers a certain level of information they need to carry out their attack. For example, white box testing provides the tester all of the details about an organization’s system or target network; black box testing provides the tester no knowledge of the system; and gray box penetration testing provides the tester partial knowledge of the system.

Pen testing is considered a proactive cybersecurity measure because it involves consistent, self-initiated improvements based on the reports generated by the test. This differs from nonproactive approaches, which lack the foresight to improve upon weaknesses as they arise. A nonproactive approach to cybersecurity, for example, would involve a company updating its firewall after a data breach occurs. The goal of proactive measures, like pen testing, is to minimize the number of retroactive upgrades and maximize an organization’s security.

“Pen testing is a necessary part of any competent network and cybersecurity strategy.” – John Cavanaugh

Related Terms: security posture, ethical hacker, data breach, vulnerability assessment, phishing

What is Polkadot?

Polkadot is a sharded blockchain protocol that lets multiple blockchains communicate and work together efficiently, allowing them to split heavy workloads and prevent bottlenecks. At its core, the Polkadot protocol is a translation architecture that allows users to combine, decentralize and scale blockchains as needed. Since networks composed of a single blockchain can process only a limited number of transactions in a set period of time, they were difficult to implement in real-world applications before protocols such as Polkadot.

Polkadot works by connecting multiple blockchains into a single network called a Relay Chain, forming a web of single-unit chains that can be used to run specific tasks without running out of computational power. Once connected, the blockchains work not separately but as a larger, more complex unit. This allows a single Polkadot relay chain to process multiple transactions simultaneously. And instead of relying on off-network communication between individual blockchains, the Polkadot relay chain is closed and secure, isolating the data exchanged between individual chains from unauthorized access and manipulation.

Blockchains in a single relay chain, regardless of their type or configuration, are able to communicate and exchange any type of data with one another: anything from preexisting files, such as cryptocurrency records and data sets for analysis, to data extracted from real-time events such as stock market values.

Additionally, Polkadot is decentralized and entirely transparent when it comes to management. All individuals participating in a Polkadot relay chain have a say in how the network is managed and their decisions are made public to all other members. That makes Polkadot particularly useful when it comes to managing blockchains that carry sensitive data or play a significant role in high-stake decision-making.

The Polkadot protocol is what allowed blockchain technology to move from small-scale projects to larger, real-world applications. Previously, each blockchain worked independently. If demand for one increased to the point where a single chain could no longer handle the influx of transaction requests, wait times and transaction fees rose correspondingly.

But with Polkadot’s decentralized model, blockchains can be utilized by various people to suit fluctuating needs, from software developers to established enterprises. Additionally, Polkadot is highly flexible and scalable as it supports forkless, on-chain upgrades, allowing teams and individuals to quickly and efficiently scale their blockchain and add new features.

It is important to note that Polkadot is not independent like individual blockchains. Instead, it is simply the network protocol that heterogeneously connects them. It cannot exist on its own. To function properly, Polkadot protocol relies on three elements: connectors, consensus roles, and governance roles.

  • Connectors: The connectors are the components that form a Polkadot network and are what holds and transfers data through the network. They are the Relay Chain, Parachains, Parathreads, and Bridges.
  • Consensus Roles: Consensus roles are the foundation protocol of Polkadot that enables it to connect multiple blockchain networks and cryptocurrencies together. Consensus roles consist of Nominators, Validators, Collators, and Fishermen.
  • Governance Roles: As the name suggests, this is how the stakeholders, or owners, of one or more Polkadot relay chains, govern and control it. Since Polkadot is decentralized and everyone has a voice, the governance roles consist of elected Council Members and a Technical Committee.

Perhaps Polkadot’s biggest contribution to blockchain technology is ensuring the future of Decentralized Finance (DeFi). Polkadot is the first of its kind to offer a completely decentralized network while allowing for cross-chain communication and strong data encryption, all of which are essential when it comes to finances in general and DeFi goals in particular.

In fact, multiple DeFi projects have used the Polkadot protocol as their foundation, such as Acala, Moonbeam, and Centrifuge. While its early foothold is in the world of online finance, Polkadot is expected to be a catalyst for an entirely decentralized internet and applications as a whole, where resources are controlled by users instead of a select few.

What is Cryptomining?

Cryptomining is the process of validating cryptocurrency transactions. The foundation of cryptocurrencies is distributed public ledgers that record all financial transactions. The records are saved in the form of blockchains. Each transaction is linked to the subsequent transaction creating a chain of records. The records are linked using cryptographic hashes.

Because the ledger is public, a record needs to be validated before being added to the ledger. Otherwise, it would be too easy to forge fraudulent payments. Cryptocurrencies use Proof-of-Work (PoW) as a security measure.

In order to post a transaction to the ledger, a computer must solve a problem that is difficult to compute but easy to verify. The problems are computationally complex and require brute force to solve. A network of computers competes to solve the problem first. This process is called cryptomining.

The computer that solves the problem first earns the right to post the transaction to the ledger. The goal is to make the cost of solving the complex problem higher than the gain of posting a fraudulent transaction. The benefit to the cryptominer is that for every transaction posted, the winner receives a small reward. The reward is often a combination of a fee associated with the transaction and newly created cryptocurrency.
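
The compute-hard, verify-easy asymmetry described above can be illustrated with a toy proof-of-work sketch. The SHA-256 hash, leading-zeros difficulty target and record format here are illustrative assumptions; real cryptocurrencies use far harder targets and different encodings:

```python
import hashlib

# Toy proof-of-work: find a nonce so the SHA-256 hash of record + nonce
# starts with `difficulty` zero hex digits.
def mine(record, difficulty=4):
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{record}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce  # costly to find by brute force...
        nonce += 1

def verify(record, nonce, difficulty=4):
    # ...but trivial for any other node to check.
    digest = hashlib.sha256(f"{record}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)
```

Each extra zero digit multiplies the expected brute-force work by 16, while verification stays a single hash, which is what makes forging a posted transaction more expensive than any gain from it.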

What is Dogecoin?

Dogecoin is a dog-themed cryptocurrency pioneered in 2013, an alternative to more famous choices like bitcoin. Although the value of an individual Dogecoin is very small (often a portion of a cent) the massive number of Dogecoins in circulation correlates to a market capitalization of over $1 billion.

The Dogecoin currency is based on an internet meme featuring a picture of a Shiba Inu, a popular Japanese dog breed. The currency’s faceplate features the Shiba Inu’s head with the letter “D” superimposed.

Like some other cryptocurrencies, Dogecoin has seen large changes in value and prolific mining. Unlike some of the other cryptocurrencies that get more attention in national media, Dogecoin was created as a “fun” and less controversial type of digital money. Part of its popularity is based on its innocuous origins, since users and miners do not have to deal with the continuous forking and community controversy that has been associated with bitcoin over the years.

What is Advanced Analytics?

Advanced analytics is a data analysis methodology that uses predictive modeling, machine learning algorithms, deep learning, business process automation and other statistical methods to analyze business information from a variety of data sources.

Advanced analytics uses data science beyond traditional business intelligence (BI) methods to predict patterns and estimate the likelihood of future events. This in turn can help an organization be more responsive and significantly increase its accuracy in decision-making.

Often used by data scientists, advanced analytics tools both combine and extend prescriptive analytics and predictive analytics while adding various options for enhanced visualization and predictive models.

Advanced analytics is a valuable resource to enterprises because it enables an organization to get greater functionality from its data assets, regardless of where the data is stored or what format it’s in. Advanced analytics also can help address some of the more complex business problems that traditional BI reporting cannot.

For example, to create a contextual marketing engine, a consumer packaged goods manufacturer might need to ask the following questions:

  • When is a customer likely to exhaust their supply of an item?
  • What time of the day or week are they most receptive to marketing advertisements?
  • What level of profitability is achievable when marketing at that time?
  • What price point are they most likely to purchase at?

By combining consumption models with historical data and artificial intelligence (AI), advanced analytics can help an organization determine precise answers to those questions.
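
Even a crude consumption model hints at how the first question might be answered. The following is a minimal sketch under an assumed constant consumption rate estimated from purchase history; real advanced analytics would fit this with machine learning on many more signals:

```python
# Illustrative consumption model: estimate days until a customer exhausts
# an item, given the days on which they purchased it and the units bought.
def days_until_exhausted(purchase_days, units_per_purchase, today):
    # Average daily consumption: total units bought over the span observed.
    daily_rate = sum(units_per_purchase) / (today - purchase_days[0])
    # Units left from the most recent purchase, net of consumption since.
    remaining = units_per_purchase[-1] - (today - purchase_days[-1]) * daily_rate
    return max(remaining / daily_rate, 0.0)
```

For a customer who bought 30 units on days 0, 30 and 60, the sketch estimates exhaustion about 10 days after day 75, which is when a marketing message would be most timely.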

“Companies are using advanced analytics to optimize everything from supply chains to drug research to data center operations.” – Maria Korolov

Related Terms: predictive modeling, machine learning, deep learning, business process automation, data science

What is Password Salting?

Password salting is a form of password hashing that involves combining a string of characters, known as a salt, with a password before hashing the resulting string. Historically this was often done with an MD5-based hashing algorithm. Password salting has long been standard in Linux operating systems, and it is generally considered more secure than the unsalted password schemes used in older Microsoft systems.

When a user creates a password, the salt-enabled system generates a salt (some older systems derived it from the username; modern systems generate it randomly), combines it with the password and hashes the result. This is a very effective way of protecting passwords because even if two different users coincidentally select the same password, their salts will almost certainly differ, thereby resulting in different hash values.
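
A minimal sketch of salted hashing, assuming a random per-user salt and SHA-256 (illustrative only; production systems favor slow hash functions such as bcrypt, scrypt or Argon2):

```python
import hashlib
import os

# Illustrative salted-hash sketch: each user gets a random 16-byte salt,
# which is stored alongside the digest for later verification.
def hash_password(password, salt=None):
    salt = os.urandom(16) if salt is None else salt
    digest = hashlib.sha256(salt + password.encode()).hexdigest()
    return salt, digest

def check_password(password, salt, digest):
    # Re-hash the attempt with the stored salt and compare.
    return hash_password(password, salt)[1] == digest
```

Hashing the same password twice yields two different digests because the salts differ, which is exactly the property described above.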

What is Password Sniffer?

A password sniffer is a software application that scans and records passwords that are used or broadcast on a computer or network interface. It listens to all incoming and outgoing network traffic and records any instance of a data packet that contains a password.

A password sniffer installs on a host machine and scans all incoming and outgoing network traffic. A password sniffer may be applied to most network protocols, including HTTP, Internet Message Access Protocol (IMAP), File Transfer Protocol (FTP), POP3, Telnet and related protocols that carry passwords in some format. In addition, a password sniffer that is installed on a gateway or proxy server can listen to and retrieve all passwords that flow within a network.

A password sniffer is primarily used as a network security tool for auditing traffic and recovering lost passwords. However, hackers and crackers use such utilities to sniff out passwords for illegal and malicious purposes.
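
To illustrate what a sniffer looks for, here is a hedged sketch that scans captured payload text for plaintext credential fields. The two patterns (FTP-style `PASS <word>` and form-style `password=<value>`) are assumptions; real tools parse each protocol individually:

```python
import re

# Illustrative credential patterns found in plaintext protocols. Traffic
# sent over encrypted channels (SFTP, HTTPS) would not match, which is
# why plaintext protocols are the main risk.
CRED_PATTERN = re.compile(r"PASS\s+(\S+)|password=([^&\s]+)", re.IGNORECASE)

def find_passwords(payload):
    # findall returns one tuple per match; keep whichever group matched.
    return [ftp or form for ftp, form in CRED_PATTERN.findall(payload)]
```

Running this over a captured FTP login or an unencrypted form submission surfaces the credential immediately, which is why the plaintext protocols listed above are the sniffer's usual targets.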

What is CISO (chief information security officer)?

The CISO (chief information security officer) is a senior-level executive responsible for developing and implementing an information security program, which includes procedures and policies designed to protect enterprise communications, systems and assets from both internal and external threats. The CISO may also work alongside the chief information officer to procure cybersecurity products and services and to manage disaster recovery and business continuity plans.

The chief information security officer may also be referred to as the chief security architect, the security manager, the corporate security officer or the information security manager, depending on the company’s structure and existing titles. While the CISO is also responsible for the overall corporate security of the company, which includes its employees and facilities, he or she may simply be called the chief security officer (CSO).

Instead of waiting for a data breach or security incident, the CISO is tasked with anticipating new threats and actively working to prevent them from occurring. The CISO must work with other executives across different departments to ensure that security systems are working smoothly to reduce the organization’s operational risks in the face of a security attack.

The chief information security officer’s duties may include conducting employee security awareness training, developing secure business and communication practices, identifying security objectives and metrics, choosing and purchasing security products from vendors, ensuring that the company is in regulatory compliance with the rules for relevant bodies, and enforcing adherence to security practices.

Other duties and responsibilities CISOs perform include ensuring the company’s data privacy is secure, managing the Computer Security Incident Response Team and conducting electronic discovery and digital forensic investigations.

“CISOs are not the same from company to company and industry to industry. We’re still in the infancy of what this role really is and how it fits into the strategic focus of a business.” – Steve Tcherchian

Related Terms: business continuity action plan, data breach, security awareness training, risk management, security audit

What is Multi-Factor Authentication (MFA)?

Multi-factor authentication (MFA) is a security mechanism in which individuals are authenticated through more than one required security and validation procedure. MFA is built from a combination of physical, logical and biometric validation techniques used to secure a facility, product or service.

MFA is implemented in an environment where an individual’s authentication and validation is the highest priority. Examples include a nuclear power plant or a bank’s data warehouse.

To gain access to a secured location or system, MFA typically requires a combination of the following security mechanism layers and formats:

  • Physical security: Validates and authenticates a user based on an employee card or other type of physical token
  • Logical/knowledge base security: Validates and authenticates a user based on a required password or personal identification number (PIN), which is memorized by the user
  • Biometric security: Validates and authenticates based on a user’s fingerprints, retinal scan and/or voice

What is Computer Forensics (Cyber Forensics)?

Computer forensics is the application of investigation and analysis techniques to gather and preserve evidence from a particular computing device in a way that is suitable for presentation in a court of law. The goal of computer forensics is to perform a structured investigation and maintain a documented chain of evidence to find out exactly what happened on a computing device and who was responsible for it.

Computer forensics — which is sometimes referred to as computer forensic science — essentially is data recovery with legal compliance guidelines to make the information admissible in legal proceedings. The terms digital forensics and cyber forensics are often used as synonyms for computer forensics.

Digital forensics starts with the collection of information in a way that maintains its integrity. Investigators then analyze the data or system to determine if it was changed, how it was changed and who made the changes. The use of computer forensics isn’t always tied to a crime. The forensic process is also used as part of data recovery processes to gather data from a crashed server, failed drive, reformatted operating system (OS) or other situation where a system has unexpectedly stopped working.
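
The integrity step can be sketched as follows, assuming a SHA-256 digest recorded at collection time and re-computed later to show the evidence is unchanged:

```python
import hashlib

# Hash evidence at collection time; a later re-hash that matches proves
# the data was not altered. Real workflows hash entire disk images and
# record the value in the chain-of-custody log.
def evidence_digest(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so arbitrarily large images fit in memory.
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()
```

Any single changed byte produces a completely different digest, which is what lets investigators demonstrate in court that the evidence presented matches what was collected.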

“While the work of all information security professionals is important, those working in the field of cybersecurity forensics play an especially pivotal role in the attribution of cyberattacks and the apprehension of perpetrators.” – Ed Tittel

Related Terms: Trojan horse, intrusion detection system, steganography, forensic image, cybercrime

What is Multifactor Authentication (MFA)?

Multifactor authentication (MFA) is a security technology that requires multiple methods of authentication from independent categories of credentials to verify a user’s identity for a login or other transaction. Multifactor authentication combines two or more independent credentials: what the user knows, such as a password; what the user has, such as a security token; and what the user is, by using biometric verification methods.

The goal of MFA is to create a layered defense that makes it more difficult for an unauthorized person to access a target, such as a physical location, computing device, network or database. If one factor is compromised or broken, the attacker still has at least one or more barriers to breach before successfully breaking into the target.

In the past, MFA systems typically relied on two-factor authentication (2FA). Increasingly, vendors are using the label multifactor to describe any authentication scheme that requires two or more identity credentials to decrease the possibility of a cyber attack. Multifactor authentication is a core component of an identity and access management framework.

“In a world where credential harvesting attacks are on the rise, better authentication has moved from a nice-to-have to an absolutely essential technology.” – David Strom

Related Terms: two-factor authentication, identity access management, authentication factor, knowledge-based authentication, biometric verification

What is Secure File Transfer Protocol (SFTP)?

Secure File Transfer Protocol (SFTP) is a file protocol for transferring large files over the web. It builds on the File Transfer Protocol (FTP) and includes Secure Shell (SSH) security components.

Secure Shell is a cryptographic component of internet security. SSH and SFTP were designed by the Internet Engineering Task Force (IETF) for greater network security. SFTP transfers files securely over SSH, encrypting both commands and data to avoid password sniffing and the exposure of sensitive information in plain text. Because the server must also prove its identity to the client, SFTP protects against man-in-the-middle attacks.

SFTP can be handy in all situations where sensitive data needs to be protected. For example, trade secrets may not be covered by any particular data privacy rule, but it can be devastating for them to fall into the wrong hands. So a business user might use SFTP to transmit files containing trade secrets or other similar information. A private user may want to encrypt his or her communications as well.

This term is also known as Secure Shell (SSH) File Transfer Protocol.

What is Password Protection?

Password protection is a security process that restricts access to information stored on computer systems. Only users who can supply an authorized password are allowed to reach the protected information.

Passwords are commonly used to gain entry to networks and to various Internet accounts in order to authenticate the user accessing the site or service.

Password protection policies should be in place at organizations so that personnel know how to create a password, how to store their password and how often to change it.
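One practical consequence of password protection is that systems should never store the password itself. A minimal sketch, with illustrative function names, of storing and checking a salted password hash instead:

```python
import hashlib
import hmac
import os

def store_password(password):
    """Persist only a salt and a salted hash, never the plain text."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return {"salt": salt, "digest": digest}

def check_password(password, record):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(),
                                    record["salt"], 200_000)
    # Constant-time comparison avoids leaking how many bytes matched.
    return hmac.compare_digest(candidate, record["digest"])
```

Even if the stored record leaks, an attacker must still brute-force the slow PBKDF2 hash rather than reading passwords directly.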

What is One-Time Password (OTP)?

A one-time password (OTP) is a type of password that is valid for only one use.

It is a secure way to provide access to an application or perform a transaction only one time. The password becomes invalid after it has been used and cannot be used again.

An OTP is a security technique that provides protection against various password-based attacks, specifically password sniffing and replay attacks.

It provides stronger protection than static passwords, which remain the same across multiple login sessions. OTP systems rely on algorithms that generate a new, random password each time one is required.

Because each password is generated unpredictably, an attacker cannot derive the next code from those already used. An OTP system can use several techniques to create a password, including:

  • Time-Synchronization: The password is valid for only a short period of time.
  • Mathematical Algorithm: The password is generated using random numbers processed within an algorithm.
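The mathematical-algorithm technique above can be sketched with the standard HOTP construction (RFC 4226), where an HMAC over an incrementing counter yields a fresh code each time:

```python
import hashlib
import hmac
import struct

def hotp(secret, counter, digits=6):
    """RFC 4226 HMAC-based one-time password. Each new counter value
    yields a fresh, unpredictable code, so no code is valid twice."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F          # dynamic truncation (RFC 4226 §5.3)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Successive counters give unrelated codes:
print(hotp(b"12345678901234567890", 0))  # 755224 (RFC 4226 test vector)
print(hotp(b"12345678901234567890", 1))  # 287082
```

The time-synchronization technique is the same construction with the counter derived from the current time, which is how authenticator-app codes expire after a short interval.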

What is Hashed Table?

A hashed table or hash table is a special type of internal table used in ABAP programs, whereby the hash functionality is used to obtain the required table record. Like other types of internal tables, hashed tables are also used to extract data from standard SAP database tables by means of ABAP programs or ABAP objects. However, unlike other types of internal tables such as standard or sorted tables, hashed tables cannot be accessed using an index. As with database tables, hashed tables also require a unique key.

The features of a hashed internal table include:

  • To declare an internal table as hashed, the declaration must contain the keywords ‘TYPE HASHED TABLE’, which makes the table accessible to the internal hash algorithm.
  • A unique key, defined with the keywords ‘UNIQUE KEY’, is mandatory, because the hash algorithm requires it.
  • Table reads have a cost independent of table size: regardless of the number of entries present, the response time for key access remains constant.
  • Hashed tables are preferred over other internal table types for large data sets with many reads and a negligible number of writes, making them ideal for processing large amounts of data.
  • Hash access works only with the full table key; it cannot be used for ranges.
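ABAP hashed tables have no direct analogue outside SAP, but the behavior is easy to illustrate with Python's hash-based dict, used here as a stand-in (the field names are invented for the example):

```python
# A Python dict is hash-based: key lookup cost does not grow with table
# size, much like key access in an ABAP hashed table. The dict key plays
# the role of the mandatory UNIQUE KEY.
flights = {}
for carrier_id in ("AA", "LH", "SQ", "UA"):
    flights[carrier_id] = {"carrier": carrier_id, "seats": 300}

# Key access: a single hash computation, no index or range scan.
assert flights["LH"]["seats"] == 300

# Like a UNIQUE KEY, each key maps to exactly one row; re-inserting the
# same key replaces the row rather than duplicating it.
flights["LH"] = {"carrier": "LH", "seats": 350}
assert len(flights) == 4
```

The trade-off is the same as in ABAP: fast full-key reads, but no ordered or range access.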

What is Rainbow Table Attack?

A rainbow table attack is a type of password cracking in which the perpetrator uses a rainbow table, a precomputed lookup structure that maps hash values back to plain-text passwords, to crack the password hashes stored in a database system. To defend against rainbow table attacks, sensitive data such as passwords are hashed together with a random salt (or hashed multiple times with different keys), so that precomputed tables no longer match the stored values.

A password database usually stores a hash of each password rather than the password itself. When a user enters a password, it is hashed again with the same function (and salt) and the result is matched against the stored value. A rainbow table lets an attacker reverse this process, recovering a password from its hash value, unless salting renders the precomputed table useless.
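A toy sketch of the attack and the salting defense (a real rainbow table compresses millions of entries into hash chains; a plain lookup dict stands in for it here):

```python
import hashlib
import os

# Toy "rainbow table": a precomputed hash -> plain-text lookup for a few
# common passwords.
common_passwords = ["password", "123456", "letmein", "qwerty"]
table = {hashlib.sha256(p.encode()).hexdigest(): p for p in common_passwords}

def crack(stored_hash):
    return table.get(stored_hash)  # None if the hash is not in the table

# Unsalted storage: the precomputed table recovers the password instantly.
leaked = hashlib.sha256(b"letmein").hexdigest()
print(crack(leaked))  # letmein

# Salted storage: a random salt makes each stored hash unique, so no
# table computed ahead of time can contain it.
salt = os.urandom(16)
salted = hashlib.sha256(salt + b"letmein").hexdigest()
print(crack(salted))  # None
```

The attacker's precomputation only pays off when every user's hash is computed the same way; a per-user salt breaks that assumption.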

What is DevOps as a Service?

DevOps as a Service is a delivery model for a set of tools that facilitates collaboration between an organization’s software development team and the operations team. In this delivery model, the DevOps as a Service provider provides the disparate tools that cover various aspects of the overall process and connects these tools to work together as one unit. DevOps as a Service is the opposite of an in-house best-of-breed toolchain approach, in which the DevOps team uses a disconnected collection of discrete tools.

The aim of DevOps as a Service is to ensure that every action carried out in the software delivery process can be tracked. The DevOps as a Service system helps to ensure that the organization achieves desired outcomes and successfully follows strategies such as continuous delivery (CD) and continuous integration (CI) to deliver business value. DevOps as a Service also provides feedback to the developer group when a problem is identified in the production environment.

“By integrating chosen elements of DevOps tooling into a single overarching system, DevOps as a Service aims to improve collaboration, monitoring, management and reporting.” – Clive Longbottom

Related Terms: continuous delivery, continuous integration, open API, DevOps certification, configuration management

What is Secure Hash Algorithm (SHA)?

A secure hash algorithm is actually a set of algorithms developed by the National Institute of Standards and Technology (NIST) and other government and private parties. These secure hashing or “file check” functions have arisen to meet some of the top cybersecurity challenges of the 21st century, as a number of public service groups work with federal government agencies to provide better online security standards for organizations and the public.

Within the family of secure hash algorithms, there are several instances of these tools that were set up to facilitate better digital security. The first one, SHA-0, was developed in 1993. Like its successor, SHA-1, SHA-0 produces a 160-bit hash.

The next secure hash algorithm, SHA-2, is a family of functions whose best-known members, SHA-256 and SHA-512, produce 256-bit and 512-bit hashes, respectively. There is also a top-level secure hash algorithm known as SHA-3, or “Keccak,” which emerged from a public competition to design a new algorithm for cybersecurity.

All of these secure hash algorithms are part of new standards to keep sensitive data safe and prevent different types of attacks. Although some of these were developed by agencies like the National Security Agency, and some by independent developers, all of them are related to the general functions of cryptographic hashing that shield data in certain database and network scenarios, helping to evolve cybersecurity in the digital age.
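The family members described above are all available in Python's standard hashlib module, which makes their digest sizes easy to compare (the input message is arbitrary):

```python
import hashlib

msg = b"IT managed services"

# Digest sizes across the SHA family (hex chars x 4 = bits):
print(hashlib.sha1(msg).hexdigest())      # 40 hex chars  = 160 bits
print(hashlib.sha256(msg).hexdigest())    # 64 hex chars  = 256 bits
print(hashlib.sha512(msg).hexdigest())    # 128 hex chars = 512 bits
print(hashlib.sha3_256(msg).hexdigest())  # SHA-3 (Keccak), 256 bits

# A well-known SHA-256 test value:
assert hashlib.sha256(b"abc").hexdigest() == (
    "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad")
```

Note that SHA-1 (like SHA-0 before it) is now considered broken for security purposes and should not be used for new systems; SHA-2 and SHA-3 remain the recommended choices.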

What is Hashing?

Hashing is the process of translating a given key into a code. A hash function is used to substitute the information with a newly generated hash code. More specifically, hashing is the practice of taking a string or input key and representing it with a hash value, which is typically computed by an algorithm and constitutes a much shorter string than the original.

A hash table stores the resulting key-value pairs so that each can be accessed directly through its computed index. The result is a technique for accessing key values in a database table very efficiently, as well as a method for improving the security of stored data, such as password hashes.

Hashing makes use of algorithms that transform blocks of data from a file into a much shorter, fixed-length value or key that represents the original strings. The resulting hash value is a sort of concentrated summary of the file's contents, and it changes drastically when even a single byte of data in that file changes (the avalanche effect). While hashing is not compression, it can operate much like file compression in that it reduces a larger data set to a more manageable fixed-size form.

Suppose you had “John’s wallet ID” written 4000 times throughout a database. By taking all of those repetitive strings and hashing them into a shorter string, you’re saving tons of memory space.
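The avalanche effect mentioned above is easy to demonstrate: changing a single byte of input flips roughly half the output bits. A small sketch using SHA-256:

```python
import hashlib

def bit_difference(a, b):
    """Count differing bits between two equal-length digests."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

d1 = hashlib.sha256(b"John's wallet ID").digest()
d2 = hashlib.sha256(b"John's wallet IE").digest()  # one byte changed

# Avalanche effect: a one-byte input change flips roughly half of the
# 256 output bits, making the two digests look completely unrelated.
print(bit_difference(d1, d2))
```

This is why a hash works as a "concentrated summary": any tampering with the file, however small, produces an obviously different value.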

What is Machine Learning Engineer (ML engineer)?

A machine learning engineer (ML engineer) is a person in IT who focuses on researching, building and designing self-running artificial intelligence (AI) systems to automate predictive models. Machine learning engineers design and create the AI algorithms capable of learning and making predictions that define machine learning (ML).

An ML engineer typically works as part of a larger data science team and will communicate with data scientists, administrators, data analysts, data engineers and data architects. They may also communicate with people outside of their teams, such as with IT, software development, and sales or web development teams, depending on the organization’s size.

ML engineers act as a bridge between data scientists who focus on statistical and model-building work and the construction of machine learning and AI systems.

The machine learning engineer role needs to assess, analyze and organize large amounts of data, while also executing tests and optimizing machine learning models and algorithms.

“Machine learning engineers need to have a firm grasp on the different tools available for the machine learning model lifecycle and keep up to date with the changes in the AI vendor landscape.” – Kathleen Walch

Related Terms: data engineer, predictive modeling, machine learning algorithm, data scientist, unsupervised learning

What is Greenwashing?

Greenwashing refers to a marketing makeover in which a product is presented as more environmentally friendly when no substantial effort has been taken to make it so. In a more extreme sense, greenwashing may refer to an attempt to make an environmentally damaging product appear environmentally friendly. Greenwashing plays upon a renewed consumer interest in protecting the environment.

There are two degrees of greenwashing. In the weak form, it merely involves a company claiming credit for existing production methods as if they were influenced by an eco-friendly mandate. For example, a software company may eliminate shrink wrap on packaging to save costs and then spin the move as a green initiative. In the more extreme form, a company will outright lie about the eco-friendliness of a product by using vague phrasing (“best in class ecology”), suggesting packaging (green fields, flowers, etc.), questionable endorsements (“green certified by ecomaniacs”) and so on.

Free Tool

Everything Toolbar is the easy-access interface you’ve been craving for the Everything search tool, enabling you to quickly search for files, folders and more right from the Windows taskbar.

NK2Edit is a simple tool that allows you to selectively edit nk2 files to either delete or modify the email addresses and contact details that are automatically saved by MS Outlook when you compose a message.

Fast Software Audit offers you a quick, easy way to gather details on the installed software and Windows product keys/IDs from remote computers. Enter the computer name you want to scan, or specify multiple computers by importing a list of names from a CSV file. Results can be viewed on screen or exported to CSV for use elsewhere.

Samplicator is a simple tool for receiving UDP datagrams on a given port and resending them to a specified set of receivers for occasions when you need to export NetFlow traffic to more than one NetFlow collector. Can also be configured to individually specify a sampling divisor N for each receiver that will only receive one in N of the received packets.

RackTables helps document hardware assets, network addresses, space in racks, network configuration and more for datacenter and server room asset management. Allows you to compile a list of all devices, racks and enclosures; mount the devices into the racks; maintain physical ports of devices and links between them; manage IP addresses, assign them to devices and group into networks; document NAT rules; describe loadbalancing policy and store configuration; attach files to various objects in the system; create users, assign permissions and allow/deny their actions; and label everything and everyone with a flexible tagging system.

Logstash is a server-side data processing pipeline that dynamically ingests data from logs, metrics, web applications, data stores and assorted AWS services, and then transforms and ships it to your favorite “stash” in a continuous, streaming fashion. Regardless of format or complexity—Logstash filters parse each event as data travels from source to store, identify named fields to build a structure, and transform them into a common format to better facilitate analysis.

Homebrew is known as “The Missing Package Manager for macOS (or Linux).” It’s designed to easily install all the useful items your original OS installer didn’t bother to include.

Micro is a highly customizable, intuitive terminal-based text editor that’s easy to install. Supports over 75 languages; 16, 256 and truecolor themes; and Sublime-style multiple cursors. “It is very similar to Nano. It is a single-file, stand-alone executable that has mouse support, macro record/playback and syntax highlighting. It also has a Windows binary available for download (as well as Linux and MacOS).”

Alacritty is a modern terminal emulator with both a nice set of defaults and the option for extensive configuration. It integrates with other applications to offer a flexible set of features with high performance. Supports BSD, Linux, macOS and Windows. While currently in beta—i.e., there are still a few missing features and bugs to be fixed—it is appreciated by many for daily use.

pmacct includes network monitoring tools that account, classify, aggregate, replicate and export IPv4 and IPv6 traffic; collect and correlate control-plane data via BGP and BMP; collect and correlate RPKI data; and collect infrastructure data via streaming telemetry. Each component works both as a standalone daemon and as a thread of execution for correlation purposes (to enhance NetFlow with BGP data).


20 CIS Controls & Resources offers detailed explanations of key controls you’ll want to address in your security planning.

Red Team Blues: A 10 step security program for Windows Active Directory environments provides a nice set of steps you can take to make it dramatically more difficult for attackers to create an opening that allows them to move inside your Active Directory environment.

Free eBook

Office 365/Microsoft 365 – The Essential Companion Guide covers everything from basic descriptions to installation, migration, use-cases and best practices for all features within the Office/Microsoft 365 suite. This 100+ page second-edition eBook, written for Altaro by Microsoft Certified Trainer Paul Schnackenburg, is the perfect desktop reference guide for current and aspiring Office/Microsoft 365 admins.


MITRE ATT&CK Navigator is a simple, open-source web app that provides basic navigation and annotation of the ATT&CK for Enterprise, ATT&CK for Mobile and PRE-ATT&CK matrices. It allows you to manipulate the cells in the matrix by color coding, adding a comment, assigning a numerical value and more.

Training Resource

Vscode Vim Academy is a game to help you learn and practice vim and vscode keys in an enjoyable way. Covers 2-5 vim keys per level, with level text and keys randomly generated per level. You race to complete 10 sets of tasks with as few keystrokes as possible.

A Practical Guide to (Correctly) Troubleshooting with Traceroute is a rather lengthy slide deck from Richard Steenbergen’s presentation on how to make the best use of the traceroute tool in troubleshooting network connections. Walks you through the hows, whys and how tos of this highly useful tool.

Documentation Resource

A Proper Server Naming Scheme is a terrific blog post that explains a well-thought-out approach to hardware naming for small- to medium-sized businesses. These best practices are designed to help you avoid common problems as the list of devices grows and changes over time.


We all hate accidentally sending unfinished emails, especially on sensitive topics, but it happens nonetheless. To eradicate the risk from your life, hasthisusernamegone suggests, “[D]on’t compose it in your email client at all. All my ‘this is official, don’t get this wrong’ emails are composed in a basic text editor (often Notepad), then copied and pasted over to Outlook when I’m happy with them. Then it gets another proof-read and a chance for the spell-check to do its thing, and only then does it get sent. That way I can’t accidentally send a half-finished email to the board or whoever.”

To find out the minute anyone starts impersonating your organization on the web, you can “create a canary token and hide it on your web page so you get a notification any time someone clones your site.” This early warning enables you to file a complaint with the registrar and get the takedown process started as soon as the site goes live.

To make it easier to clean up your AD account list, we suggest, “for users which are contractors or test accounts, I assign an expiration date. You can’t do this (yet) with AAD; but with AD, it is useful. When it comes time to check contractors, I update their expiration dates, usually once a quarter. This gives a definite backstop to catch those accounts which normally would fall between the cracks.”


ptrap is a script that can help in situations when you need to look at packets your network sends out too quickly to catch as an open session. Enables you to see which process on your system is sending packets to a single <ip>:<port>. Supports TCP and UDP packet monitoring and the execution of a custom program in response.