Knowledge of terms to know
What is Intelligent Edge?
Intelligent edge is a term describing the analysis and aggregation of data at or near the point in a network where it is captured. The intelligent edge, also described as “intelligence at the edge,” has important ramifications for distributed networks, including the internet of things (IoT).
With an intelligent edge, remote or decentralized nodes of a system are empowered to do kinds of data handling that have traditionally been done at a central point. With IoT specifically, the classical model of routing the many streams of data from IoT-connected devices into a central data warehouse or repository has distinct disadvantages: it can be inefficient, and, if the data is not encrypted, it leaves the system inherently more vulnerable.
In an intelligent edge setup, the edge network components or nodes can process the data intelligently, possibly bundling, refining or encrypting it for transit into the data warehouse. This can improve both the agility and the security of data-handling systems. Many cloud providers and other companies familiar with the structure and nature of IoT recommend an intelligent edge for these reasons.
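As a concrete illustration, here is a minimal Python sketch of the bundling-and-refining step described above. The device name, payload fields and `bundle_readings` helper are hypothetical, and a real deployment would add actual encryption (via a dedicated crypto library) before transit:

```python
import json
import statistics

def bundle_readings(readings, device_id):
    """Refine raw sensor readings at the edge: drop missing values,
    then summarize into one compact payload for transit upstream."""
    valid = [r for r in readings if r is not None]
    payload = {
        "device": device_id,
        "count": len(valid),
        "mean": statistics.mean(valid),
        "min": min(valid),
        "max": max(valid),
    }
    # A real edge node would encrypt this payload here before
    # sending it on toward the central data warehouse.
    return json.dumps(payload)

print(bundle_readings([21.0, 21.4, None, 20.9], "sensor-7"))
```

Instead of shipping four raw samples upstream, the node emits one small, already-cleaned summary, which is the efficiency gain the intelligent edge is after.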
What is Maturity Model (Maturity Grid)?
A maturity model, also called a maturity matrix or maturity grid, is an assessment tool for evaluating an organization’s level of progress towards a goal.
Maturity models provide a benchmark for how close to ‘fully developed’ the organization is in regards to the criteria being assessed. The ultimate goal of conducting a maturity model assessment is to create a roadmap for success.
The matrix is laid out in rows and columns and typically lists maturity levels in the left-hand column. Each row describes, in a very few words, the typical behavior exhibited at a particular level of development.
Typically a maturity model has ten rows or fewer. The first row defines what constitutes entry level, and the last row describes what fully developed, mature best practices should look like.
Here is an example of a maturity grid for disaster recovery.
| Level | Name | Description |
|-------|------|-------------|
| Level 0 | Naught | No disaster recovery strategy exists. Technology may or may not be in place. |
| Level 1 | Initial | A disaster recovery strategy exists and technology is in place. |
| Level 2 | Repeatable | The technology supporting DR has been successfully tested numerous times. |
| Level 3 | Defined | The DR plan is documented in detail. |
| Level 4 | Managed | Disaster recovery requirements are understood and met. |
| Level 5 | Optimized | DR plans are closely aligned with business goals. Plans can be adapted to meet requirements for growth and change. |
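A grid like this is easy to encode programmatically. The following Python sketch is illustrative only (the `DR_MATURITY` table and `current_level` helper are not a standard API); it returns the highest contiguous level whose criteria have been satisfied, reflecting the idea that each level builds on the ones below it:

```python
# Hypothetical encoding of the disaster-recovery maturity grid above.
DR_MATURITY = [
    (0, "Naught", "No disaster recovery strategy exists."),
    (1, "Initial", "A DR strategy exists and technology is in place."),
    (2, "Repeatable", "The DR technology has been tested numerous times."),
    (3, "Defined", "The DR plan is documented in detail."),
    (4, "Managed", "DR requirements are understood and met."),
    (5, "Optimized", "DR plans are closely aligned with business goals."),
]

def current_level(satisfied):
    """Walk the grid from the bottom; maturity is the highest level
    reached without skipping any intermediate level."""
    level = 0
    for num, name, _desc in DR_MATURITY[1:]:
        if name in satisfied:
            level = num
        else:
            break
    return level

print(current_level({"Initial", "Repeatable"}))  # 2
```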
What is Analytics of Things?
Analytics of Things is the term used to describe the analysis of the data generated by Internet of Things devices; in other words, it is analytics applied to the Internet of Things. Analytics of Things is what makes connected devices smart and gives them the ability to make intelligent decisions.
Analytics of Things is still evolving and needs significant time and effort for achieving real business value. Like all other analytics, Analytics of Things comprises data collection and analytics. There are different categories of Analytics of Things, such as understanding patterns and the analysis for variation, detection of anomalies, predictive asset maintenance, optimization by analysis of a procedure or process, prescription and situational awareness.
There are many challenges for Analytics of Things. Of the large volume of data generated by the Internet of Things, only a limited portion is meaningful, so proper strategies are needed to achieve clean analytics without having to process junk data. Because devices are used to capture the data, security and privacy need to be assured in order to protect the integrity of the system. Another challenge for Analytics of Things is standardization: a common set of communication protocols needs to exist for the devices associated with the Internet of Things.
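A minimal sketch of the "clean analytics" idea, assuming a hypothetical temperature sensor whose plausible range is known: readings that are missing or physically implausible are treated as junk and dropped before any analysis runs.

```python
def clean_readings(raw, low, high):
    """Keep only readings that are present and physically plausible,
    so downstream analytics never processes junk data."""
    return [r for r in raw if r is not None and low <= r <= high]

# -999.0 and 180.5 are typical sensor-glitch values for a thermometer.
raw = [22.1, None, -999.0, 23.4, 180.5, 22.8]
print(clean_readings(raw, low=-40.0, high=60.0))  # [22.1, 23.4, 22.8]
```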
Analytics of Things can help enterprises in ensuring the devices connected to internet work more efficiently and are smarter. Analytics of Things can be a huge asset in predictive analytics, especially in many industrial sectors such as traffic, medical and manufacturing.
What is Key Performance Indicator (KPI)?
A key performance indicator (KPI) is a business metric used by corporate executives and other managers to track and analyze factors deemed crucial to the success of an organization.
Key performance indicators shine a light on how well a business is doing. Keeping employees focused on business initiatives and tasks that are central to organizational success could also be challenging without designated KPIs to reinforce the importance and value of those activities.
KPIs that measure the results of business activities, such as quarterly profit and revenue growth, are referred to as lagging indicators because they track things that have already occurred. In contrast, KPIs for upcoming business developments are known as leading indicators. There’s also a difference between quantitative indicators that have a numerical basis and qualitative indicators that are more abstract and open to interpretation.
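For example, quarter-over-quarter revenue growth is a quantitative lagging KPI; here is a minimal sketch, with invented figures:

```python
def revenue_growth(prev_quarter, this_quarter):
    """Quarter-over-quarter revenue growth, a classic lagging KPI:
    it measures a result that has already occurred."""
    return (this_quarter - prev_quarter) / prev_quarter

print(f"{revenue_growth(1_200_000, 1_320_000):.1%}")  # 10.0%
```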
KPIs differ from organization to organization based on business priorities and even the KPIs followed most closely by different people in the same organization can vary depending on their role in the organization. For example, a CEO might consider profitability to be the most important performance measurement for a company, while the vice president of sales could view the ratio of sales wins vs. losses as the highest priority KPI.
What is H-1B?
H-1B is a United States visa classification that permits employers to hire highly skilled foreign workers for roles requiring the theoretical and practical application of a body of specialized knowledge.
To be eligible for an H-1B visa, an alien must have an employer sponsor and hold a bachelor’s degree or the equivalent in the specific specialty. The employer is required to state or demonstrate that a U.S. worker will not be displaced by the H-1B applicant, and to file a petition with the United States Citizenship and Immigration Services (USCIS) on behalf of the alien. In 2015, two-thirds of the petitions granted were for employees in computer-related occupations.
In June 2020, President Trump issued an executive order barring the issuance of any new H-1B work visas. In October 2020, his administration issued new H-1B rules that include a significant increase in the wages required for high-skilled visa holders.
What is Edge Analytics?
Edge analytics refers to the analysis of data at some non-central point in a system, such as a network switch, peripheral node, or connected device or sensor. The emerging term describes the practice of collecting and analyzing data in decentralized environments.
One way to understand edge analytics is as an alternative to traditional big data analytics, which is performed in centralized ways, through Hadoop clusters or other means, often from a big data warehouse or other central repository. This has been a popular way to drive analytics, but now, data scientists are exploring how edge analytics can work as an effective alternative option.
In some ways, edge analytics goes along with the internet of things (IoT). Experts often describe IoT data as inherently messy or chaotic. There is a need to find the best ways to collect data from distributed systems. Because there is so much work involved in sourcing device data into a central data warehouse, edge analytics has emerged as a time-saving and resource-saving option. Some describe edge analytics as “harnessing” the power of the connected IoT device: the idea is that analysts get the data right from the active device, and not later after it has been filtered into the warehouse. There is also the ability to filter data for long-term storage.
One prominent example of edge analytics is in digitally connected traffic systems. A party such as a law enforcement department might want data like camera images or sensor speed readings in real time, before the data has trickled into a data warehouse. CCTV units and other endpoint devices can deliver such timely data through edge analytics.
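A minimal sketch of this kind of at-the-edge evaluation, assuming hypothetical speed-camera sensors: only the readings that exceed the limit are emitted as alerts, rather than shipping every raw sample to a central warehouse first.

```python
def speeding_alerts(samples, limit):
    """Evaluate sensor samples at the edge and emit only the alerts,
    instead of forwarding every raw sample for central analysis."""
    return [(sensor, speed) for sensor, speed in samples if speed > limit]

samples = [("cam-01", 48), ("cam-02", 71), ("cam-01", 52), ("cam-03", 95)]
print(speeding_alerts(samples, limit=60))  # [('cam-02', 71), ('cam-03', 95)]
```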
What is Integrated Analytics Platform?
An integrated analytics platform is a solution that brings together performance management, analytics and business intelligence tools in a single package. It provides an end-to-end solution for delivering business intelligence from multiple fronts and gives the user a clear visual representation of data, along with services such as revenue calculation, forecasting and the development of marketing strategy models and algorithms, all on the same system, allowing for interoperability.
An integrated analytics platform gives sales and marketing organizations a competitive edge through analytical insights and information collaboration. The core of the integrated analytics platform is its huge data repository from which all tools and services can access and build on. The way the data warehouse is set up can differ among platform vendors, for example Intel uses a data lake scheme for its data repository, while other vendors use traditional relational data warehouses.
The integrated analytics platform has capabilities for managing the volume, velocity and variety of marketing and sales data, which means it can ingest data of different types and over different protocols. This provides flexibility, as all aspects of the platform can access the centralized repository for distributed processing, allowing for constant evolution of data models and their resulting insights. It also allows for cross-silo collaboration, since any data model or algorithm created by one entity can be used and built upon by others, ensuring that the entire organization is informed of any and all intelligence that can be arrived at using the platform.
What is Should-cost Analysis (Should-cost Review)?
A should-cost analysis, also called a should-cost review, is a procurement strategy that requires a B2B customer to proactively research how much it will cost a potential supplier to provide a service or deliver a finished product.
Should-cost reviews are conducted prior to meeting with potential vendors. The goal of this type of review is to gather real-time data that can be used during contract negotiations to identify cost savings opportunities.
Estimates in a should-cost analysis are arrived at by reverse engineering known and estimated price points. First, the cost of parts, labor and overhead are gathered to create a baseline estimate — and then a reasonable profit margin is added to the baseline to arrive at an estimate.
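The arithmetic above can be sketched in a few lines; the cost figures and the 10% margin here are invented purely for illustration:

```python
def should_cost(parts, labor, overhead, margin=0.10):
    """Reverse-engineer a fair price: sum parts, labor and overhead
    into a baseline, then add a reasonable supplier profit margin
    (assumed to be 10% in this sketch)."""
    baseline = parts + labor + overhead
    return round(baseline * (1 + margin), 2)

print(should_cost(parts=42.50, labor=18.00, overhead=9.50))  # 77.0
```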
In the United States, should-cost reviews are a standard practice for procuring information technology products and services. Government agencies and potential vendors can find detailed instructions for how to conduct this type of analysis on the Federal Acquisition Regulation (FAR) website.
Depending upon the competitive landscape, should-cost analysis can either replace or complement strategic sourcing initiatives, which require price quotes from at least three competitors.
PyCharm brings all the Python tools together for faster, easier coding. Features a keyboard-centric approach, with intelligent code completion, on-the-fly error checking and easy project navigation. Offers PEP8 checks, testing assistance, smart refactorings and a host of inspections that help you write high-quality, maintainable code.
PacketBomb is dedicated to teaching IT professionals how to make the most of their packet analysis tools. Since packet analysis can sort out most network problems pretty quickly, fully developing this skill is extremely valuable for anyone who’s expected to keep networks running properly.
fzf-project is a plugin that makes it easy to switch between project directories indexed from one or more specified folders. Kindly shared by the author, who explains: “You specify a number of “workspace” folders in a vimrc variable. The plugin provides a command to switch between any of the contained folders using fzf… once you’ve made that selection, it provides automatic rooting of any edited files within those folders, and a fzf file finder.”
VirtualBox is an open-source virtualization app that enables you to run Windows or Linux as guest systems on a Mac. Offers great speed and flexibility, plus a large library of prebuilt third-party virtual machine images.
Nmap Cheatsheet is a comprehensive overview of Nmap and Nessus. Covers usage options for Nmap, scanning command syntax, port specification options, host discovery, scanning types and options, version detection, firewall evasion, output format and timing options, Nmap scripts (NSE), target specification and commands.
What is Advanced Persistent Threat (APT)?
An advanced persistent threat (APT) is a prolonged and targeted cyberattack in which an intruder gains access to a network and remains undetected for an extended period of time.
To maintain access to the targeted network without being discovered, threat actors use sophisticated evasion techniques such as continuously rewriting malicious code. Some APTs are so complex that they require full-time administrators to maintain the compromised systems and software in the targeted networks.
Because a great deal of effort and resources usually go into carrying out APT attacks, intruders typically seek high-value targets, such as nation-states and large corporations, with the ultimate goal of stealing information over a long period of time.
APT attacks can be difficult to identify. To determine whether a network has been the target of an APT attack, cybersecurity professionals often focus on detecting anomalies in outbound data.
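A common first pass is a simple statistical outlier check on outbound traffic volume. This sketch (figures invented; real detection is far more sophisticated) flags days whose outbound volume sits well above the historical mean, which can hint at data exfiltration:

```python
from statistics import mean, stdev

def outbound_anomalies(daily_gb, threshold=2.0):
    """Return the indices of days whose outbound volume is more than
    `threshold` standard deviations above the historical mean."""
    mu, sigma = mean(daily_gb), stdev(daily_gb)
    return [i for i, gb in enumerate(daily_gb)
            if sigma and (gb - mu) / sigma > threshold]

# Nine unremarkable days, then a suspicious spike in outbound data.
daily_gb = [1.1, 0.9, 1.2, 1.0, 1.1, 0.8, 1.0, 1.2, 0.9, 9.5]
print(outbound_anomalies(daily_gb))  # [9]
```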
What is Ubiquitous Computing?
Ubiquitous computing is a paradigm in which the processing of information is linked with each activity or object as encountered. It involves connecting electronic devices, including embedding microprocessors to communicate information. Devices that use ubiquitous computing have constant availability and are completely connected.
Ubiquitous computing aims to hide the complexity of computing from the user, increasing efficiency as computing supports different daily activities.
Ubiquitous computing is also known as pervasive computing, everyware or ambient intelligence.
The main focus of ubiquitous computing is the creation of smart products that are connected, making communication and the exchange of data easier and less obtrusive.
What is Smart Television (Smart TV)?
Smart television (smart TV) is TV that provides interactive features similar to those involved in Internet or Web services. This includes the ability to search for video or interact with the television in other ways. This can be done through a set-top box or through internal technology in the television, such as an operating system that commands and controls these interactive features.
Smart TV is also called connected TV or hybrid TV.
One common example of smart TV technology is the streaming of video from sources like Netflix or Hulu. Again, television sets can be shipped from the factory with this interactive technology, or they can be augmented with a cable set-top box or a gaming console that supports these activities. Either way, smart TV operation typically involves internal or external hardware tools that can help users scroll or navigate through a screen in order to view movies, change settings or otherwise control the experience.
In many ways, smart TV is not a specific set of hardware pieces, but a process toward a more interactive design philosophy. It’s easy to see how this smart TV approach has changed what used to be a passive broadcast into a high-design interactive interface. This is taking place within the greater context of new exploration into consumer interfaces, for example, where the tablet touch-screen has become the norm and wearable devices like Google Glass are being seriously considered.
What is FCoE (Fibre Channel over Ethernet)?
FCoE (Fibre Channel over Ethernet) is a storage protocol that enables Fibre Channel (FC) communications to run directly over Ethernet. FCoE makes it possible to move Fibre Channel traffic across existing high-speed Ethernet infrastructure and converges storage and IP protocols onto a single cable transport and interface.
The goal of FCoE is to consolidate I/O (input/output) and reduce switch complexity, as well as to cut back on cable and interface card counts. Adoption of FCoE has been slow, however, due to a scarcity of end-to-end FCoE devices and a reluctance on the part of many organizations to change the way they implement and manage their networks.
What is Fibre Channel switch (FC switch)?
A Fibre Channel switch is a networking device that is compatible with the FC protocol and designed for use in a dedicated storage area network (SAN).
An FC switch inspects a data packet header, determines the computing devices of origin and destination, and forwards the packet to the intended system. FC switches come in different types, including modular director switches (also known as backbone switches), which have a high port count, and fixed-port or semi-modular switches (also known as edge switches). FC switches can be combined to create large SAN fabrics that interconnect thousands of servers and storage ports.