IT Managed Services Provider Resource Recommendation Update on December 19, 2020

Knowledge of terms to know

What is Social Recruiting (Social Media Recruitment)?

Social media recruiting is the enterprise use of social media platforms to identify, engage and vet individuals who the organization may want to hire.

This HR practice, which is sometimes called social hiring or simply social recruiting, uses social media sites and other internet-based options, such as blogs, to reach potential job candidates.

The growing ubiquity of social media is prompting HR departments to develop fully formed social media recruiting strategies and include them as a formal part of their organizations’ overall human capital management strategies.

Employers can use social media recruiting to reach a wider pool of candidates than is possible with traditional recruiting efforts and at the same time target individuals more effectively. An employer’s talent acquisition strategy and the number and types of platforms it uses in its recruitment efforts will affect how broad or targeted the mix of potential candidates is. Many social media sites enable HR professionals to easily tailor messages to each candidate group.

To pursue social media recruiting, organizations create a presence on social media sites as a way to bolster their brand, showcase their corporate culture and, ultimately, interest people in applying for jobs.

HR leaders see such tactics as a way to more effectively and efficiently attract both active and passive job candidates. Active candidates are looking for work; passive candidates aren’t trying to find new jobs but are receptive to considering opportunities.

The emergence of social media recruiting in the first two decades of the 21st century tracks with the rise of social media itself. Employers quickly recognized that vast numbers of working-age individuals — particularly those in the Millennial generation and now Gen Z as that cohort enters the workforce — spend significant time on social media sites. According to the marketing and consumer data company Statista, the average daily social media use of internet users worldwide was 144 minutes per day in 2019, up from 142 minutes the previous year.

The growth of both social media and social media recruiting has similarly given rise to HR professionals who specialize in this area. The symbiotic relationship between social media and enterprise recruiting efforts also means that social media platforms now make recruitment features and tools available to organizations. Additionally, HR professionals can use third-party software to manage, enhance and support their social media recruitment activities.

What is Processor?

A processor is an integrated electronic circuit that performs the calculations that run a computer. A processor performs arithmetical, logical, input/output (I/O) and other basic instructions that are passed from an operating system (OS). Most other processes are dependent on the operations of a processor.

The terms processor, central processing unit (CPU) and microprocessor are commonly used as synonyms. Although most people use the word “processor” interchangeably with “CPU” nowadays, this is technically incorrect, since the CPU is just one of the processors inside a personal computer (PC).

The Graphics Processing Unit (GPU) is another processor, and even some hard drives are technically capable of performing some processing.

Processors are found in many modern electronic devices, including PCs, smartphones, tablets, and other handheld devices. Their purpose is to receive input in the form of program instructions and execute trillions of calculations to provide the output that the user will interface with.

A processor includes an arithmetic logic unit (ALU) and a control unit (CU). Its capability is measured in terms of the following:

  • The number of instructions it can process at a given time.
  • The maximum number of bits or instructions it can handle.
  • Its relative clock speed.

Masscan is a blazingly fast, portable port scanner that can be useful for pen testing and finding the attack vectors lurking inside your network. This tool is capable of transmitting 10 million packets per second from a single machine, and it features an easily understood interface. It uses asynchronous transmissions and allows arbitrary port and address ranges.

Computerphile YouTube Channel offers a mixed bag of informative videos on all things computer-related.

BatchPatch enables you to load a list of computers and then start the Windows update install/reboot process on all of them at once. Gives you efficient, centralized control of your patching process to save you time.

Command prompt to open the Reliability Monitor: perfmon /rel

CloneApp is a simple backup tool for migrating your software setup when you need to reinstall Windows. Saves you the time you’d otherwise spend putting things back the way you like them. Offers the option for a full backup/restore or backing up settings for the most-popular Windows programs.

Knowledge of terms to know

What is Maturity Grid (Maturity Model)?

A maturity grid, also called a maturity model, is an assessment tool for evaluating an organization’s level of progress towards a goal.

The grid, which is a matrix laid out in rows and columns, typically lists the criteria that will be evaluated in the left-hand column. Each criterion’s row has cells that describe, in a few words, the typical behavior exhibited by an organization at each level of development. Typically a maturity model has ten levels or fewer, with the first level defining entry-level practice and the last defining fully developed best practice.

Maturity grids can be used to provide an organization with an initial benchmark for how close to ‘fully developed’ it is with regard to the criteria being assessed. They are also useful tools for leading discussions and for providing management with a roadmap for next steps.
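A maturity grid like the one described above can be sketched as a simple data structure. In this minimal sketch, the criteria, level names, and cell descriptions are all hypothetical examples, not taken from any standard model:

```python
# Maturity levels, lowest to highest (hypothetical labels).
LEVELS = ["Initial", "Repeatable", "Defined", "Managed", "Optimizing"]

# One entry per criterion; each cell briefly describes the typical
# behavior exhibited by an organization at that level of development.
GRID = {
    "Incident response": [
        "Ad hoc firefighting",
        "Documented runbooks",
        "Defined on-call rotation",
        "Measured response times",
        "Continuous improvement loop",
    ],
    "Patch management": [
        "Manual, irregular patching",
        "Scheduled monthly patching",
        "Centralized patch tooling",
        "Compliance reporting",
        "Automated risk-based patching",
    ],
}

def benchmark(scores: dict[str, int]) -> dict[str, str]:
    """Map each criterion's assessed level (0-based index) to its label."""
    return {criterion: LEVELS[level] for criterion, level in scores.items()}

# Example assessment: level 1 for incident response, level 3 for patching.
result = benchmark({"Incident response": 1, "Patch management": 3})
```

Here each criterion occupies a row and each maturity level a column, mirroring the matrix layout described above.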

What is Database Security?

Database security refers to the collective measures used to protect and secure a database or database management software from illegitimate use and malicious cyber threats and attacks.

Database security procedures are aimed at protecting not just the data inside the database, but the database management system and all the applications that access it from intrusion, misuse of data, and damage.

It is a broad term that includes a multitude of processes, tools and methodologies that ensure security within a database environment.

Database security covers and enforces security on all aspects and components of databases. This includes:

  • Data stored in database.
  • Database server.
  • Database management system (DBMS).
  • Other database workflow applications.

Database security is generally planned, implemented and maintained by a database administrator and/or other information security professionals.

What is API gateway?

An API gateway is programming that sits in front of an application programming interface (API) and acts as a single point of entry for a defined group of microservices. Because a gateway handles protocol translations, this type of front-end programming is especially useful when clients built with microservices make use of multiple, disparate APIs.

A major benefit of using API gateways is that they allow developers to encapsulate the internal structure of an application in multiple ways, depending upon use case. This is because, in addition to accommodating direct requests, gateways can be used to invoke multiple back-end services and aggregate the results.

Because developers must update the API gateway each time a new microservice is added or removed, it is important that the process for updating the gateway be as lightweight as possible. This is why when evaluating API gateways, it’s important for developers to look at features the vendor has added to differentiate its product from the competition.

In addition to exposing microservices, popular API gateway features include functions such as:

  • authentication
  • security policy enforcement
  • load balancing
  • cache management
  • dependency resolution
  • contract and service level agreement (SLA) management
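The routing and aggregation behavior described above can be sketched in a few lines. This is a minimal illustration, not a production gateway; the route table, service names, and stubbed back ends are all hypothetical:

```python
# Single point of entry: map request-path prefixes to back-end services.
ROUTES = {
    "/users": "user-service",
    "/orders": "order-service",
}

def route(path: str) -> str:
    """Pick the back-end microservice that should handle a request path."""
    for prefix, service in ROUTES.items():
        if path.startswith(prefix):
            return service
    raise LookupError(f"no backend for {path}")

# A gateway can also fan one client request out to several back ends
# and aggregate the results (e.g. a composite "profile page" call).
def get_profile(user_id: int) -> dict:
    backends = {
        "user": lambda: {"id": user_id, "name": "Alice"},   # user-service stub
        "orders": lambda: [{"order_id": 7}],                # order-service stub
    }
    return {key: call() for key, call in backends.items()}
```

The aggregation step is what lets the gateway hide the internal microservice structure from clients: the client sees one response, however many back ends were invoked.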

What is Parser?

A parser is a compiler or interpreter component that breaks data into smaller elements for easy translation into another language. A parser takes input in the form of a sequence of tokens, interactive commands, or program instructions and breaks them up into parts that can be used by other components in programming.

A parser usually checks all data provided to ensure it is sufficient to build a data structure in the form of a parse tree or an abstract syntax tree.

In order for the code written in human-readable form to be understood by a machine, it must be converted into machine language. This task is usually performed by a translator (interpreter or compiler). The parser is commonly used as a component of the translator that organizes linear text in a structure that can be easily manipulated (parse tree). To do so, it follows a set of defined rules called “grammar”.
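The tokenizer-plus-parser pipeline described above can be sketched with a toy grammar for arithmetic expressions. This is a minimal illustration; the grammar and the tuple-based tree shape are hypothetical choices:

```python
import re

# Toy grammar ("set of defined rules"):
#   expr   := term (('+'|'-') term)*
#   term   := factor (('*'|'/') factor)*
#   factor := NUMBER

def tokenize(text: str) -> list[str]:
    """Break the linear input text into a sequence of tokens."""
    return re.findall(r"\d+|[+\-*/]", text)

def parse(tokens: list[str]):
    """Build a parse tree (nested tuples) from the token sequence."""
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def factor():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        return int(tok)

    def term():
        nonlocal pos
        node = factor()
        while peek() in ("*", "/"):
            op = tokens[pos]; pos += 1
            node = (op, node, factor())
        return node

    def expr():
        nonlocal pos
        node = term()
        while peek() in ("+", "-"):
            op = tokens[pos]; pos += 1
            node = (op, node, term())
        return node

    return expr()

tree = parse(tokenize("1+2*3"))  # ('+', 1, ('*', 2, 3))
```

Note how the grammar encodes operator precedence: multiplication binds tighter simply because `term` is parsed below `expr`, so the resulting tree groups `2*3` before the addition.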

What is Data Definition Language (DDL)?

A data definition language (DDL) is a computer language used to create and modify the structure of database objects in a database. These database objects include views, schemas, tables, indexes, etc.

This term is also known as data description language in some contexts, as it describes the fields and records in a database table.

The database industry now applies the term DDL to any formal language for describing data. In practice, however, DDL is considered a subset of SQL (Structured Query Language). SQL often uses imperative verbs in normal English-like sentences to implement database modifications. Hence, DDL does not show up as a separate language in an SQL database, but rather defines the changes to the database schema.

It is used to establish and modify the structure of objects in a database by dealing with descriptions of the database schema. Unlike data manipulation language (DML) commands that are used for data modification purposes, DDL commands are used for altering the database structure such as creating new tables or objects along with all their attributes (data type, table name, etc.).

Commonly used DDL statements in SQL are CREATE, ALTER, DROP, and TRUNCATE.
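A minimal sketch of these DDL statements, run here against an in-memory SQLite database (the table and column names are examples):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# CREATE defines a new table and its attributes (column names, data types).
cur.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT)")

# ALTER changes the existing structure, here adding a column.
cur.execute("ALTER TABLE employees ADD COLUMN hired_on TEXT")

# Inspect the schema the DDL produced.
columns = [row[1] for row in cur.execute("PRAGMA table_info(employees)")]
# columns == ['id', 'name', 'hired_on']

# DROP removes the object (and any data in it) entirely.
cur.execute("DROP TABLE employees")
conn.close()
```

Note that SQLite has no TRUNCATE statement; in that dialect, an unqualified DELETE FROM plays the same role.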

What is File Management System?

A file management system is a type of software that manages data files in a computer system and is used for file maintenance (or management) operations.

A file management system has limited capabilities and is designed to manage individual or group files, such as special office documents and records. It may display report details, like owner, creation date, state of completion and similar features useful in an office environment.

A file management system is also known as a file manager.

Data on every computer is stored in a complex hierarchical file system made up of directories and the subdirectories beneath them. Files are stored inside these directories, usually following predetermined hierarchical structures determined by a program’s instructions.

However, many other files, such as pictures, videos and documents, are arranged by the user as they see fit. A file management system is ultimately the software used to arrange these files, move them, and work with them. In fact, file management systems take care of how files are organized, not just how they are stored.

A file management system’s tracking component is key to the creation and management of this system, where documents containing various stages of processing are shared and interchanged on an ongoing basis. It consists of a straightforward interface where stored files are displayed. It allows the user to browse, move, and sort them according to different criteria such as date of last modification, date of creation, file type/format, size, etc.
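The browse-and-sort behavior described above can be sketched in a few lines; the sort criteria mirror those listed, and the sample file names are hypothetical:

```python
import tempfile
from pathlib import Path

def list_files(directory: str, sort_by: str = "name"):
    """List the files in a directory, sorted by a chosen criterion."""
    files = [p for p in Path(directory).iterdir() if p.is_file()]
    keys = {
        "name": lambda p: p.name,
        "size": lambda p: p.stat().st_size,
        "modified": lambda p: p.stat().st_mtime,  # date of last modification
        "type": lambda p: p.suffix,               # file type/format
    }
    return sorted(files, key=keys[sort_by])

# Usage: create two sample files and sort them by size.
with tempfile.TemporaryDirectory() as d:
    Path(d, "notes.txt").write_text("short")
    Path(d, "report.doc").write_text("a much longer document body")
    by_size = [p.name for p in list_files(d, sort_by="size")]
    # by_size == ['notes.txt', 'report.doc']
```

A real file manager wraps this kind of logic in an interface for browsing and moving files, but the sorting criteria are the same.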

What is Should-cost Analysis (Should-cost Review)?

Should costing is an analysis, conducted by a customer, of the supplier’s expenses involved in delivering a product or service or fulfilling a contract. The purpose of should-cost analysis is to establish an appropriate figure to guide negotiations or to compare with a figure provided by a supplier. Should costing is often used in procurement.

The processes in should-cost analysis can be labor-intensive. In a large organization, the engineering department typically arrives at a “should” cost for a product. Often this estimate is achieved by reverse engineering to determine the cost of parts. Then the additional costs of labor, materials, overhead and profit margin are added to that estimate.
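The roll-up described above, reverse-engineered part costs plus labor, marked up by overhead and profit margin, can be sketched as simple arithmetic. All figures here are hypothetical:

```python
def should_cost(parts: dict[str, float], labor: float,
                overhead_rate: float, margin_rate: float) -> float:
    """Parts + labor, marked up by overhead, then by supplier margin."""
    direct = sum(parts.values()) + labor
    with_overhead = direct * (1 + overhead_rate)
    return round(with_overhead * (1 + margin_rate), 2)

estimate = should_cost(
    parts={"housing": 4.50, "pcb": 12.00, "connectors": 1.50},
    labor=6.00,
    overhead_rate=0.20,   # 20% overhead on direct costs
    margin_rate=0.10,     # 10% supplier profit margin
)
# estimate == 31.68  (18.00 parts + 6.00 labor = 24.00; x1.2 = 28.80; x1.1 = 31.68)
```

A quote far above a figure like this signals room to negotiate; one far below it may signal an unsustainable bid.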

Should costing can complement strategic sourcing or be used as an alternative method. In strategic sourcing, the traditional means of pricing, a company compares the quotes of at least three competitors for a product.

Promoted by the United States Department of Defense (DoD), should costing has become an integral part of the government procurement process in the United States with its inclusion in the Federal Acquisition Regulations (FAR).

What is CPQ software (configure price quote software)?

CPQ (configure, price, quote) is programming that helps sales representatives and self-service customers quickly generate accurate quotes for configurable products and services.

CPQ software is typically used to generate quotes for products and services that have a lot of feature options. The software is rules-based and can be customized to address multiple variables that will affect profit margins.

Key features of CPQ software include:

  • The ability to create a bill of materials or service level agreement (SLA) based on a particular customer’s configuration choices.
  • The ability to quickly adjust a quote in response to a change request.
  • The inclusion of templates that can be used to digitally generate proposal documents and contracts.
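The rules-based pricing described above can be sketched as follows; the product, option prices, and discount tiers are all hypothetical:

```python
# Hypothetical catalog: base prices plus per-option add-ons.
CATALOG = {
    "backup-service": {
        "base": 100.0,
        "options": {"encryption": 25.0, "offsite-copy": 40.0},
    },
}

def quote(product: str, options: list[str], seats: int) -> float:
    """Configure, price, quote: unit price from options, then volume rules."""
    item = CATALOG[product]
    unit = item["base"] + sum(item["options"][o] for o in options)
    # Volume-discount rule: 10% off at 50+ seats, 5% off at 20+.
    discount = 0.10 if seats >= 50 else 0.05 if seats >= 20 else 0.0
    return round(unit * seats * (1 - discount), 2)

price = quote("backup-service", ["encryption"], seats=20)
# price == 2375.0  (125.0 per seat x 20 seats, less 5%)
```

Real CPQ systems layer many more rules (margin floors, regional pricing, approvals) on top of this same configure-then-price pattern.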

As companies grow, it can become increasingly difficult to manage product pricing. Configuring a quote can be a time-consuming task, especially when there are a lot of variables for a product or service. CPQ software is often an add-on to CRM platforms and operates alongside traditional CRM tabs such as sales forecasts and inventory.

What is Wireless Local Area Network (WLAN)?

A wireless local area network (WLAN) is a wireless distribution method for two or more devices. WLANs use high-frequency radio waves and often include an access point to the Internet. A WLAN allows users to move around the coverage area, often a home or small office, while maintaining a network connection.

A WLAN is sometimes called a local area wireless network (LAWN).

WLAN should not be confused with the Wi-Fi Alliance’s Wi-Fi trademark. Although some use the terms “Wi-Fi” and “WLAN” interchangeably, there are semantic differences in play: a “Wi-Fi connection” refers to the wireless connection a given device uses, while the WLAN is the network itself.

Also, “Wi-Fi” is not a technical term, but is described as a superset of the IEEE 802.11 standard and is sometimes used interchangeably with that standard. However, not every Wi-Fi device actually receives Wi-Fi Alliance certification, although Wi-Fi is used by more than 700 million people through about 750,000 Internet connection hot spots. The hot spots themselves also constitute WLANs, of a particular kind.

What is Red team-blue team?

Red team-blue team is a simulation and training exercise where members of an organization are divided into teams to compete in combative exercises. In information security (infosec), the exercise is designed to identify vulnerabilities and find security holes in a company’s infrastructure. The war games are also used to test and train security staff.

Generally, members of the security team split into two groups: a red team and a blue team. The red team plays the role of a hostile force and the blue team plays defense as the organization. The red team’s goal is to find and exploit weaknesses in the organization’s security as the blue team works to defend the organization by finding and patching vulnerabilities and responding to successful breaches.

The terms red team and blue team are often used to refer to cyberwarfare in contrast to conventional warfare. War games function as a means of testing for the worst case scenarios of coordinated, focused attacks by skilled attackers. While testing infrastructure and personnel are common in branches of the military, they are increasingly popular in enterprise, government, finance, critical infrastructure and key resources (CIKR) and many other security and IT-focused institutions.

What is Cyclic Redundancy Check (CRC)?

The cyclic redundancy check (CRC) is a technique used to detect errors in digital data. As a type of checksum, the CRC produces a fixed-length value based on the contents of a file or larger data set. In practical terms, CRC is a hash function that detects accidental changes to raw computer data; it is commonly used in digital telecommunications networks and in storage devices such as hard disk drives.

This technique was invented by W. Wesley Peterson in 1961 and further developed by the CCITT (Comité Consultatif International Télégraphique et Téléphonique). Cyclic redundancy checks are quite simple to implement in hardware and can be easily analyzed mathematically. CRC is one of the most commonly used techniques for detecting transmission errors.

CRC is based on binary division and is also called “polynomial code checksum.”

In the cyclic redundancy check, a fixed number of check bits, often called a checksum, are appended to the message that needs to be transmitted. The data receivers receive the data, and inspect the check bits for any errors.

Mathematically, data receivers evaluate the check value that is attached by finding the remainder of the polynomial division of the contents transmitted. If it seems that an error has occurred, a negative acknowledgement is transmitted asking for data retransmission.
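The append-and-verify scheme described above can be sketched with a small 8-bit CRC (here the common polynomial 0x07, i.e. x^8 + x^2 + x + 1), computed bit by bit as polynomial division:

```python
def crc8(data: bytes, poly: int = 0x07) -> int:
    """CRC-8 via bitwise polynomial division (no init or final XOR)."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # Shift; if the top bit falls off, subtract (XOR) the polynomial.
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

message = b"hello"
check = crc8(message)              # sender computes and appends the check bits
frame = message + bytes([check])   # transmitted frame: message + checksum

# Receiver: the CRC of message + checksum is 0 when no error occurred.
ok = crc8(frame) == 0

# Any single flipped bit leaves a nonzero remainder, so the error is
# detected and a retransmission can be requested.
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]
detected = crc8(corrupted) != 0
```

Running the division over the received frame and checking for a zero remainder is exactly the "inspect the check bits" step described above.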

A cyclic redundancy check is also applied to storage devices like hard disks. In this case, check bits are allocated to each block on the disk. When the computer reads a corrupt or incomplete file, a cyclic redundancy error is triggered. CRC errors can also arise on other storage devices or on CDs/DVDs. Common causes include system crashes and incomplete, corrupt, or otherwise damaged files.

CRC polynomial designs depend on the length of the block that is supposed to be protected. Error protection features can also determine the CRC design. Resources available for CRC implementation can have an impact on performance.

Another way to understand CRC is to look at the specific words in its name. Experts point out that a CRC is called “redundant” because it adds to the size of the data set without adding new information, and “cyclic” because it is based on cyclic codes.

It’s also helpful to point out that CRC is a specific type of checksum, as mentioned, in which arbitrarily sized data sets are mapped to a fixed-size string, which an engineer may call a hash function. Some technology builders do report using CRC as a hash function in security contexts, although others consider it insufficient and suggest a standard such as SHA-256.

By contrast, checksums themselves can be very simple: a primitive checksum may just be the sum of the byte values in question. The CRC, with its cyclic design, is generally recognized as a good strategy for checking against errors and verifying data integrity. It’s part of an evolved toolkit in checksum use and hashing, and in file checking in general.

Another skill set prized in the tech world is the ability to fix or resolve CRC errors because these errors can inhibit access to data. When a CRC error occurs, for whatever reason, fixing it will be part of the IT service provider’s mandate.

Published by Lisa Turnbull

Lisa has been a Windows lover since her childhood days and has always been enthusiastic about emerging technologies, especially Artificial Intelligence (AI), Data Science and Machine Learning. She works as a freelancer on numerous technical projects.