Terms to Know
What is a Content Management System (CMS)?
A content management system (CMS) is a software application that enables users to create, edit, collaborate on, publish and store digital content. CMSes are typically used for enterprise content management (ECM) and web content management (WCM).
A CMS provides a graphical user interface with tools to create, edit and publish web content without the need to write code from scratch.
A CMS has two components: a content management application (CMA) and a content delivery application (CDA).
- The CMA is a graphical user interface that enables users to design, create, modify and remove content from a website without HTML knowledge.
- The CDA component provides the back-end services that support management and delivery of the content once a user creates it in the CMA.
“Organizations use a content management system to create and manage targeted content across multiple channels — a need that is steadily rising in many organizations that reach out to their customers through the internet.” – Scott Robinson
Related Terms: headless CMS, content management application, web content management, decoupled CMS, cloud content management
What is Electronics Disposal Efficiency (EDE)?
Electronics disposal efficiency (EDE) is a performance metric used to evaluate the percentage of electronic disposals that have been completed in an environmentally responsible way. It is used to measure how well organizations dispose of their electronic equipment and waste once it is no longer in use or is depleted.
EDE was created by The Green Grid to help organizations assess their disposal processes for electronic equipment. To calculate EDE, the total weight of the equipment that’s being disposed of in a responsible way (such as through reuse or recycling) is divided by the total weight of all discarded equipment.
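As a hedged illustration (the weights below are invented, not Green Grid figures), the calculation is a simple ratio, sketched here in Python:

```python
def electronics_disposal_efficiency(responsible_kg: float, total_kg: float) -> float:
    """EDE = weight disposed of responsibly / total weight of all discarded equipment."""
    if total_kg <= 0:
        raise ValueError("total disposed weight must be positive")
    return responsible_kg / total_kg

# Hypothetical example: 4,200 kg reused or recycled out of 5,000 kg discarded.
print(f"EDE: {electronics_disposal_efficiency(4200, 5000):.0%}")  # EDE: 84%
```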
There are a range of options that Green Grid considers responsible, but most involve sending the electronic waste to an organization/entity that is certified and authorized to recycle or dispose of it properly.
What is Waste Electrical and Electronic Equipment (WEEE)?
Waste Electrical and Electronic Equipment (WEEE) is a designation for certain kinds of hardware and other electrical appliances covered by a European Community law called the Waste Electrical and Electronic Equipment Directive. This legislation helps to maintain better control systems for the disposal and reuse of electrical/electronic appliances, parts or systems, which can have a drastic effect on the environment if they are disposed of improperly.
An important feature of the WEEE Directive is the idea of “producer compliance” or the responsibility of hardware makers to prepare for the eventuality of disposal, recycling and reuse. Part of the setup of the WEEE Directive involves distinguishing between electronic goods that were sold prior to 2005, when the law took effect, and those sold after 2005. The WEEE Directive has helped lower the amount of hazardous waste disposed into the general environment in the UK and in European Union member countries.
The WEEE Directive does not apply in the United States. However, American counterparts commonly use the term “electronic waste” or “e-waste” when talking about regulations for the disposal, reuse or recycling of electrical/electronic products or parts that may contain heavy metals like lead, cadmium, beryllium, etc., that can be dangerous to the environment.
What is Real-time Analytics?
Real-time analytics is the use of data and related resources for analysis as soon as it enters the system. The adjective real-time refers to a level of computer responsiveness that a user senses as immediate or nearly immediate. The term is often associated with streaming data architectures and real-time operational decisions that can be made automatically through robotic process automation and policy enforcement.
Whereas historical data analysis uses a set of historical data for batch analysis, real-time analytics instead visualizes and analyzes the data as it appears in the computer system. This enables data scientists to use real-time analytics for purposes such as:
- Forming operational decisions and applying them to production activities — including business processes and transactions — on an ongoing basis.
- Viewing dashboard displays in real time with constantly updated transactional data sets.
- Utilizing existing prescriptive and predictive analytics.
- Reporting historical and current data simultaneously.
Real-time analytics software has three basic components:
- an aggregator that gathers data event streams (and perhaps batch files) from a variety of data sources;
- a broker that makes data available for consumption; and
- an analytics engine that analyzes the data, correlates values and blends streams together.
The system that receives and sends data streams and executes the application and real-time analytics logic is called the stream processor.
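A minimal sketch of that aggregator/broker/engine split, assuming an invented sensor event format and alert threshold (production systems typically use platforms such as Kafka or Flink rather than an in-process queue):

```python
import queue
import random
import threading
import time

broker = queue.Queue()  # the "broker": makes events available for consumption

def aggregator():
    """Gathers an event stream from a (simulated) data source into the broker."""
    for _ in range(10):
        broker.put({"sensor": "temp", "value": random.uniform(20, 90)})
        time.sleep(0.1)
    broker.put(None)  # sentinel: stream finished

def analytics_engine():
    """Analyzes each event as it arrives, rather than in a later batch."""
    while (event := broker.get()) is not None:
        if event["value"] > 80:  # hypothetical real-time policy threshold
            print(f"ALERT: {event['sensor']} = {event['value']:.1f}")

threading.Thread(target=aggregator).start()
analytics_engine()
```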
“The first — and for many the primary — benefit of real-time data in the enterprise is simply being able to support decisions whenever and wherever they need to be made.” – Donald Farmer
Related Terms: robotic process automation, historical data, edge computing, in-database analytics, in-memory analytics
What is Lean Production?
Lean production is a systematic manufacturing method used for eliminating waste within the manufacturing system. It takes into account the waste generated from uneven workloads and overburden and then reduces them in order to increase value and reduce costs. The word “lean” in the term simply means no excess, so lean production can be translated simply into minimal waste manufacturing.
Lean production is centered on determining what activities or processes add value by reducing other aspects such as lessening the production of a certain kind of product that gives less value and using the resources to produce more of another, while at the same time lessening waste. It is a management philosophy that was adopted from the Toyota Production System (TPS) created by Taiichi Ohno.
Lean production is all about reducing waste — not just material waste, but also the labor and time waste generated by some processes. Only when all of these wastes have been removed from the system can it be said that the system is truly lean and optimized. In short, lean production involves constant efforts to reduce or eliminate waste from design through manufacturing and distribution to product support and beyond. But it is not just about reducing waste and overhead; the principle of lean production is also about increasing speed and efficiency and improving quality on top of waste elimination. This requires work and the development of a lean culture within the workforce, which ultimately leads to added value both for the customer and the company.
What is Smart Grid?
A smart grid is an electricity network based on digital technology that is used to supply electricity to consumers via two-way digital communication. This system allows for monitoring, analysis, control and communication within the supply chain to help improve efficiency, reduce energy consumption and cost, and maximize the transparency and reliability of the energy supply chain. The smart grid was introduced with the aim of overcoming the weaknesses of conventional electrical grids by using smart net meters.
Many government institutions around the world have been encouraging the use of smart grids for their potential to control and deal with global warming, emergency resilience and energy independence scenarios.
What is Red Teaming?
Red teaming is the practice of rigorously challenging plans, policies, systems and assumptions by adopting an adversarial approach. A red team may be a contracted external party or an internal group that uses strategies to encourage an outsider perspective.
The goal of red teaming is to overcome cognitive errors such as groupthink and confirmation bias, which can impair the decision-making or critical thinking ability of an individual or organization.
A red team is often a group of internal IT employees used to simulate the actions of those who are malicious or adversarial. From a cybersecurity perspective, a red team’s goal is to breach or compromise a company’s digital security. A blue team, on the other hand, is a group of internal IT employees, often the security team, that simulates the organization’s defenders. If the red team poses as a group of cybercriminals, the blue team’s goal is to stop them from committing a hypothetical data breach. This type of interaction is known as a red team-blue team simulation.
Red teaming, however, does not require the existence of a blue team; omitting one can be a purposeful decision to compare an organization’s active and passive defenses.
Red teaming originated in the military to realistically evaluate the strength and quality of strategies by using an external perspective. Since then, red teaming has become a common cybersecurity training exercise used by organizations in the public and private sectors. Other security testing methods include ethical hacking and penetration testing, or pen testing. While the red team shares the same goal and perspective of these strategies, their execution is often quite different.
“The goal of red team engagements is not just to test the environment and the systems within the environment, but to test the people and processes of the organization as well.” – Charles Shirer
Related Terms: cognitive bias, red team-blue team, ethical hacker, penetration testing, vulnerability assessment
What is a Camper?
A camper is a video gamer who finds a strategic spot within a level and waits there for players, game-controlled enemies or choice items to appear. This strategy is known as camping.
Camping is most popular in first-person shooter (FPS) games, but depending on the game being played, it is usually considered a form of cheating, or at least a degenerate strategy. This is because if every player followed a camping strategy, players would never confront each other, leaving no game to play. Some FPS games can be customized to embrace camping, where entire groups are dedicated to sniper-only challenges.
There are two basic types of camping:
- Spawn Point Camping: Spawn point campers wait at a point in the level map where a desirable object such as a weapon appears. In role-playing games, spawn point campers may also wait in areas where they know enemies will reappear, killing them quickly for easy experience points and money.
- Snipers: In competitive first-person shooters, a camper finds an ideal vantage point where he or she can wait for players to wander into sight and be killed. A sniper usually camps out on high ground or an area that provides protection from other players circling behind.
Camping is most popular among shooter games, where the players hide themselves in a strategic spot for a very long time, thereby killing opponents and grabbing a tactical edge. The spot a camper chooses is usually hidden from casual view and perhaps partially or fully secured by an object. The campers then make use of this spot to ambush or perform sniper attacks on opponents. The time scale for camping may vary with different players or with the player’s response to various game conditions. Some games do not encourage camping, forcing the campers who remain stationary for too long to move ahead, or applying strict penalties like small quantities of periodic health damage.
In a mixed playing environment where camping is frowned upon, bunny hopping (erratic jumping and running) becomes the usual response by players who are tired of being picked off by an unseen enemy.
What is the Principle of Least Privilege (POLP)?
The principle of least privilege (POLP) is a concept in computer security that limits users’ access rights to only those strictly required to do their jobs. Users are granted permission to read, write or execute only the files or resources necessary to do their jobs. This principle is also known as the access control principle or the principle of minimal privilege.
POLP can also restrict access rights for applications, systems and processes to only those that are authorized.
Depending on the system, some privileges may be based on attributes contingent on the user’s role within the organization. For example, some corporate access systems grant the appropriate level of access based on factors such as location, seniority or time of day. An organization can specify which users can access what in the system, and the system can be configured so the access controls recognize only the administrators’ role and parameters.
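A minimal sketch of a least-privilege access check, assuming hypothetical role names and permission strings (a real system would also layer in the location, seniority or time-of-day attributes mentioned above):

```python
# Each role is granted only the permissions strictly required for the job;
# everything else is denied by default.
ROLE_PERMISSIONS = {
    "analyst": {"reports:read"},
    "editor": {"reports:read", "reports:write"},
    "admin": {"reports:read", "reports:write", "users:manage"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Default deny: unknown roles and unlisted permissions are refused."""
    return permission in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("analyst", "reports:read")
assert not is_allowed("analyst", "reports:write")  # not required for the job
```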
“The principle of least privilege, an essential aspect of IT security, is one of the most important security policies an enterprise needs to enforce.” – Michael Cobb
Related Terms: claims-based identity, privilege creep, privilege bracketing, data classification, superuser
What is Citizen Development?
Citizen development is a business process that encourages non-IT-trained employees to become software developers, using IT-sanctioned low-code/no-code (LCNC) platforms to create business applications. This approach to software development enables employees — despite their lack of formal education in coding — to become citizen developers. They create and customize existing software programs to suit a user’s specific needs and improve operational efficiency within a company.
These LCNC platforms include the necessary lines of code, so users simply drag and drop icons to create and update applications. The platforms’ simple visual tools let users connect components across business units, apply actions, test that the programming works as expected and publish the new code.
Citizen developers are empowered business users who create new or change existing business applications without the need to involve IT departments. In the past, employees seeking approval of even the smallest project faced frustration as their request languished in an overburdened IT department, sometimes for months. During that wait, internal business priorities most likely changed and the business’s competition continued to adapt. Citizen developers are more agile, responding quickly to the dynamic business landscape.
The citizen development approach not only speeds innovation and the application development process, it also reduces backlogs and frees IT personnel to prioritize and grapple with more pressing businesswide issues. This approach simultaneously addresses security problems associated with shadow IT and third-party apps through transparency, resource sharing and monitoring among citizen developers and IT professionals.
“Citizen development is inevitable, as the need for software outpaces the professional programming resources available to make it.” – Tom Nolle
Related Terms: low-code/no-code development platform, data citizen, mobile application development platform, model-driven development, no-code
What is Progression Gameplay?
Progression gameplay is a game design term that refers to video game mechanics in which the designer sets a course of action that a player must complete to move forward in the game. Progression gameplay depends heavily on checkpoints that a character must reach to advance to the next level. These checkpoints vary according to the game genre. Some general checkpoints include:
- Defeating the level boss in action, adventure and role-playing games (RPGs)
- Finishing in the top three on a particular track in racing games
- Completing a series of puzzles in a puzzle game
- Destroying the enemies’ home base in real-time strategy games
The majority of games are built according to a progression gameplay model. Progression gameplay is popular with designers because it allows them to craft a solid storyline around the action of the game. The goal of every game is to be both immersive and fun to play. Supporters of progression gameplay point out that, because the designers know the course a game will take, they can build a much deeper and more complex story around that course.
On the opposite side, proponents of emergent gameplay want games where the random actions of the players affect both the story and the world they take place in, leading to limitless possibilities rather than a limited number of outcomes that are mapped out by designers. There is, of course, a lot of middle ground between the two approaches. Many games have elements of both progression and emergent gameplay.
What is Massively Multiplayer Online Game (MMOG)?
A massively multiplayer online game (MMOG) is a video game that allows a large number of players to participate simultaneously over an internet connection. These games usually take place in a shared world that the gamer can access after purchasing or installing the game software. The explosive growth in MMOGs has prompted many game designers to build online multiplayer modes into many traditionally single-player games.
Massively multiplayer online role-playing games (MMORPGs) are one of the most popular forms of MMOG, but the concept goes far beyond a single genre. In addition to RPGs and real-time strategy (RTS) games, online gameplay has become an essential feature in many first-person shooters (FPS), racing games and even fighting games. For many gamers, the ability to compete with players from all over the world in a variety of online-only game modes overshadows the single-player mode that many of these games were originally designed around.
What is Degenerate Strategy?
A degenerate strategy is a way of playing a video game that exploits an oversight in gameplay mechanics or design. Degenerate strategies apply to player-versus-player (PvP) as well as player versus environment (PvE) games. Degenerate strategies do not break the rules of a game like a code or a cheat, but they do prevent the game from being experienced in the manner intended by the game designer.
Nearly every game has degenerate strategies that a gamer can exploit for the easy win, kill or level-up.
Common degenerate strategies include:
- Finding the most powerful and difficult special attack to block in a fighting game and then using only that move in every round
- Finding the spawn points for items and enemies in a roleplaying game and camping there for easy kills, experience points and cash
- Leveling up a character beyond its natural progression by constantly refighting battles
- Finding the range at which the enemy’s artificial intelligence operates and using long-range attacks from the edge of that range
- Memorizing repetitive game elements, such as layouts or attack patterns, and exploiting their weaknesses
A degenerate strategy is not a gaming principle violation because no such principle exists. Degenerate strategies are merely alternative gameplay approaches that appeal to two types of gamers: those who want to play as efficiently as possible and those looking for shortcuts to beat the game.
What is Real-Time Strategy (RTS)?
Real-time strategy (RTS) refers to a time-based video game that centers around using resources to build units and defeat an opponent. Real-time strategy games are often compared to turn-based strategy games, where each player has time to carefully consider the next move without having to worry about the actions of his opponent. In real-time strategy games, players must attempt to build their resources, defend their bases and launch attacks while knowing that the opponent is scrambling to do the same things.
A real-time strategy game may also be referred to as a real-time simulation or a real-time war game.
Real-time strategy games introduce new pressures into strategy and war games because they require players to make quick decisions about how to use resources and when to time attacks. Some of the most popular RTS game series include “Starcraft”, “Warcraft”, “Command and Conquer”, “Warhammer” and “Age of Empires”.
Although real-time strategy games move much more quickly than turn-based games, gameplay can still be slowed down by degenerate strategies like turtling.
What is IT Asset Management (information technology asset management, or ITAM)?
IT asset management (information technology asset management, or ITAM) is a set of business practices that combines financial, inventory and contractual functions to optimize spending and support lifecycle management and strategic decision-making within the IT environment. ITAM is often a subset of the IT service management (ITSM) process.
An IT asset is classified as any company-owned information, system or hardware that is used in the course of business activities. The IT asset management process typically involves gathering a detailed inventory of an organization’s hardware, software and network assets and then using that information to make informed business decisions about IT-related purchases and redistribution.
ITAM applications are available to organizations to assist in the ITAM process. These applications can detect the hardware, software and network assets across an organization and then capture, record and make the data available as needed. Some of these applications integrate ITAM with the service desk, keeping all the user and access information together with the incidents and requests.
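As a loose sketch of the inventory-gathering step, assuming a hypothetical asset record layout (a real ITAM application would discover these assets automatically across the network):

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class ITAsset:
    tag: str        # inventory tag
    kind: str       # hardware, software or network
    owner: str      # assigned department
    lifecycle: str  # e.g., in-use, in-storage, retired

inventory = [
    ITAsset("HW-0041", "hardware", "finance", "in-use"),
    ITAsset("SW-0107", "software", "finance", "retired"),
    ITAsset("NW-0009", "network", "it", "in-use"),
]

# Lifecycle roll-up that could inform purchase and redistribution decisions.
print(Counter(a.lifecycle for a in inventory))  # Counter({'in-use': 2, 'retired': 1})
```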
“ITAM is a continual and systematic process, so the asset lifecycle concept is often used to structure activities and help support decision-making.” – Brian Holak
Related Terms: IT management, total cost of ownership (TCO), IT asset lifecycle, enterprise asset management, digital asset management
What is Emergent Gameplay?
Emergent gameplay is a game design term that refers to video game mechanics that change according to the player’s actions. Emergent gameplay includes a number of relatively simple decisions that a player must make, the sum of which lead to more complex outcomes. Emergent gameplay can also be created by adding multiple players to the same game environment and having their individual actions impact the overall game narrative. Similarly, more complex artificial intelligence capable of impacting the storyline in unpredictable ways can be used in lieu of additional players.
Emergent gameplay was originally limited to allowing gamers to choose between branching paths within a progressive game. The result was a slightly different ending based on the player’s choices. Role-playing games introduced a new level of emergent gameplay by allowing actions to impact the game’s narrative in a more noticeable way. For example, choosing to save or not save a certain character may unlock new abilities, block off certain stages or paths of progression completely, and ultimately result in a completely different experience.
A newer breed of emergent gameplay has emerged from puzzle-based gaming as well. In these games, players are given the ability to make custom solutions to puzzles. There may be only a set number of solutions, but the player’s approach is analyzed and used to create more challenging puzzles based on that player’s tendencies.
The field of emergent gameplay is becoming more and more popular as online gaming platforms offer more opportunities to create a complex gaming environment out of relatively simple gameplay mechanics. Many progression-based games now mix in elements of emergent gameplay.
What is Unstructured Data?
Unstructured data is information, in many different forms, that doesn’t hew to conventional data models and thus typically isn’t a good fit for a mainstream relational database. Thanks to the emergence of alternative platforms for storing and managing such data, it is increasingly prevalent in IT systems and is used by organizations in a variety of business intelligence and analytics applications.
Traditional structured data, such as the transaction data in financial systems and other business applications, conforms to a rigid format to ensure consistency in processing and analyzing it. Sets of unstructured data, on the other hand, can be maintained in formats that aren’t uniform, freeing analytics teams to work with all of the available data without necessarily having to consolidate and standardize it first. That enables more comprehensive analyses than would otherwise be possible.
One of the most common types of unstructured data is text. Unstructured text is generated and collected in a wide range of forms, including Word documents, email messages, PowerPoint presentations, survey responses, transcripts of call center interactions, and posts from blogs and social media sites.
Other types of unstructured data include images, audio and video files. Machine data is another category, one that’s growing quickly in many organizations. For example, log files from websites, servers, networks and applications — particularly mobile ones — yield a trove of activity and performance data. In addition, companies increasingly capture and analyze data from sensors on manufacturing equipment and other internet of things (IoT) connected devices.
In some cases, such data may be considered to be semi-structured — for example, if metadata tags are added to provide information and context about the content of the data. The line between unstructured and semi-structured data isn’t absolute, though; some data management consultants contend that all data, even the unstructured kind, has some level of structure.
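A small illustration of the distinction, using an invented transcript and metadata tags: the same text is unstructured on its own and becomes semi-structured once tags supply context:

```python
# Unstructured: a raw call-center transcript with no data model.
unstructured = "Customer called about a late shipment and asked for a refund."

# Semi-structured: the same content wrapped with metadata tags that give
# it context, without forcing it into a rigid relational schema.
semi_structured = {
    "channel": "call_center",
    "sentiment": "negative",          # hypothetical enrichment tag
    "topics": ["shipping", "refund"],
    "body": unstructured,
}
print(semi_structured["topics"])
```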
“New sources of data, such as IoT sensors, satellite imagery, drone-captured data, security cameras and voice recording systems, are producing enormous quantities of unstructured data every day.” – Rich Castagna
Related Terms: structured data, machine data, semi-structured data, RDBMS, BLOB
What is Grinding?
Grinding refers to the playing time spent doing repetitive tasks within a game to unlock a particular game item or to build the experience needed to progress smoothly through the game. Grinding most commonly involves killing the same set of opponents over and over in order to gain experience points or gold. Although other game genres require some grinding, role-playing games (RPG) – specifically massively multiplayer online role-playing games – are the most notorious for requiring this type of time investment from players.
A game level at which a lot of grinding is required may be called a treadmill level.
Requiring that players grind out experience points by fighting the same opponents over and over seems to run contrary to good game design. However, there are two elements to grinding that have made its inclusion inevitable in almost every RPG. These are:
- The Achievement Factor: Players often feel a sense of achievement when they have ground their way up to a level where progression through the game becomes relatively easy. Knowing this, game designers include achievements outside of pure level progression. For example, defeating 100 opponents results in a new title (slayer, destroyer, master assassin or something similar) and possibly other rewards. Similar milestones may occur at 200, 300, 500, 1,000, and so on.
- An Even Playing Field: Grinding allows players who are less skillful to catch up to and progress/compete with players who are better. In this way, no player is prevented from progressing through the game. The boredom of grinding is nothing compared to the boredom of being stuck with no ability to progress.
Grinding can be tedious, but it gives players a reason to keep playing rather than giving up and walking away.
What is a Scrum Master?
A Scrum Master is a facilitator for an Agile development team. They are responsible for managing the exchange of information between team members. Scrum is a project management framework that enables a team to communicate and self-organize to make changes quickly, in accordance with Agile principles.
Although the scrum analogy was first applied to manufacturing in a paper by Hirotaka Takeuchi and Ikujiro Nonaka, the approach is often used in Agile software development and other types of project management. The term comes from the sport of rugby, where opposing teams huddle together during a scrum to restart the game. In product development, team members huddle together each morning for a stand-up meeting where they review progress and essentially restart the project.
A Scrum Master leads a scrum. Scrums are daily meetings conducted by Agile, self-organizing teams that allow the team to convene, share progress and plan for the work ahead. Some teams have a fixed Scrum Master, while others alternate the role with various team members occupying the position on different days. No one approach is right, and teams can choose to appoint the Scrum Master role as best fits their needs.
During the daily meetings, the Scrum Master asks the team members three questions:
- What did you do yesterday?
- What will you do today?
- Are there any impediments in your way?
The Scrum Master then uses the answers to those questions to inform tactical changes to the team’s process, if necessary.
“Some organizations choose to hire Scrum Masters as consultants instead of designating an in-house employee. The added benefit of hiring an external Scrum Master is that they do not have preexisting biases about the organization and can bring fresh ideas.” – Ben Lutkevich
Related Terms: Agile software development, project management, stand-up, Scrum sprint, Certified ScrumMaster (CSM)
What is a Tech Ethicist?
Tech ethicist is a corporate role that involves examining a company’s technologies to ensure that they meet ethical standards, verifying, for example, that they do not exploit user vulnerabilities or infringe upon user rights. The term also refers to independent experts who perform the same kind of scrutiny.
Although there is no standard education stream for tech ethicists yet, to fill that role an individual would need grounding in not only ethics and technology but also psychology, law and sociology, among other things. Tech ethicist David Polgar likes to compare the tasks of engineers and ethicists: Engineers see a problem and find a solution, after which the ethicist sees the solution and looks for problems.
Technology ethics is an increasingly important area of focus as the sophistication and capacities of technologies have advanced far ahead of concerns for security, privacy and the well-being of users. The humane tech movement seeks to change that focus to realign technology with humanity. As that movement develops, the demand for tech ethicists is likely to grow.
“Data breaches and privacy scandals have made it vital for organizations to prioritize tech ethics and consider the effects IoT devices and AI will have on individuals and society.” – Jessica Groopman
Related Terms: data protection officer (DPO), AI code of ethics, responsible AI, California Consumer Privacy Act, General Data Protection Regulation
What is a First-Person Shooter (FPS)?
A first person shooter (FPS) is a genre of action video game that is played from the point of view of the protagonist. FPS games typically map the gamer’s movements and provide a view of what an actual person would see and do in the game.
An FPS usually shows the protagonist’s arms at the bottom of the screen, carrying whatever weapon is equipped. The gamer is expected to propel his avatar through the game by moving it forward, backward, sideways and so on using the game controller. Forward movements of the controller result in the avatar moving forward through the scenery, usually with a slight left-right rocking motion to properly simulate the human gait. In order to increase the level of realism, many games include the sounds of breathing and footsteps in addition to the regular sound effects.
FPS games can be played in two general modes, mission or quest mode and multiplayer mode. The mission mode is usually the default mode for a single player. It usually involves the player battling through progressively harder game levels towards some ultimate goal. The multiplayer mode involves multiple gamers participating via a network and playing in a shared game environment. The multiplayer mode can take many forms, including:
- Deathmatches
- Capture the flag
- Team deathmatch
- Search and destroy
- Base (a.k.a. assault or headquarters)
- Last man standing
First-person shooter primarily refers to the perspective of the game. Other genres also use the first-person perspective occasionally, including racing games and boxing games. Shooters, that is, games in which various weapons (mostly guns) are used to kill opponents, have been made using all the basic gaming perspectives: first person, third person, side-scrolling, top-down and 3/4.
The first FPS was “Maze War”, developed in 1973. However, it was the 1992 “Wolfenstein 3D” game that really entrenched the concept. Some of the most influential FPS games include “Doom”, “Quake” and the “Half-Life: Counter-Strike” series. All gained dedicated followers.
There is a wide variety of FPS games on the market. Many can be played on different platforms, including PCs, gaming consoles and handheld devices.
What is Symmetric Encryption?
Symmetric encryption is a form of computerized cryptography that uses a single encryption key to disguise an electronic message. Its data conversion uses a mathematical algorithm along with a secret key, rendering the message unreadable to anyone without that key. Symmetric encryption is a two-way algorithm because the mathematical process is reversed when decrypting the message, using the same secret key.
Symmetric encryption is also known as private-key encryption and secure-key encryption.
The two types of symmetric encryption are block and stream algorithms. Block algorithms are applied to blocks of electronic data: set lengths of bits are transformed using the selected secret key, and that key is applied to each block in turn. When network stream data is being encrypted, however, the encryption system holds the data in its memory components while waiting for complete blocks. That waiting time can create a security gap and may compromise data protection. One solution, known as feedback, combines the partial block of data with the contents of previously encrypted blocks until the remaining bits arrive; once the entire block is received, it is encrypted.
Conversely, stream algorithms do not hold data in the encryption system’s memory; they encrypt the data as it streams in. This type of algorithm is considered somewhat more secure, since no disk or system is holding unencrypted data in its memory components.
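A brief sketch of symmetric encryption using the Fernet recipe from Python’s cryptography package, which puts a symmetric block cipher behind a single shared key (the message is arbitrary, and Fernet is one common recipe, not the only one):

```python
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()     # the single secret key shared by both parties
cipher = Fernet(key)

token = cipher.encrypt(b"wire transfer approved")  # sender encrypts
plaintext = Fernet(key).decrypt(token)             # receiver reverses with the same key

assert plaintext == b"wire transfer approved"
```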
What is Microlearning (Microtraining)?
Microlearning is an educational strategy that breaks complex topics down into short-form, stand-alone units of study that can be viewed as many times as necessary, whenever and wherever the learner has the need. Microlearning instructional modules are designed to be consumed in about five minutes and address one specific skill or knowledge gap topic.
The convenience of microlearning, from both the learner and the educator’s point of view, has made this type of instructional delivery popular in corporate learning environments. Scientific research suggests that a self-directed, modular approach to talent pipeline development improves knowledge retention. It also empowers employees by giving them the opportunity to build new skills directly in the context of the job they are being paid to do, without having to take time away from their job to attend training.
Although microlearning is most often associated with independent learning, modules can also be strung together to create guided learning experiences for individuals or small groups. The small chunks of instructional content can be tagged with metadata for easy search, access and reuse.
“Microlearning modules are most often accessed as the need for knowledge arises, but they can also be assigned as part of an employee’s monthly or quarterly goals.” – Wesley Chai
Related Terms: talent pipeline, learning experience platform (LXP), knowledge base, learning management system (LMS), chief learning officer (CLO)
What is a Sandbox?
A sandbox is a style of game in which minimal character limitations are placed on the gamer, allowing the gamer to roam and change a virtual world at will. In contrast to a progression-style game, a sandbox game emphasizes roaming and allows a gamer to select tasks. Instead of featuring segmented areas or numbered levels, a sandbox game usually occurs in a “world” to which the gamer has full access from start to finish.
A sandbox game is also known as an open-world or free-roaming game.
Sandbox games can include structured elements – such as mini-games, tasks, side missions and storylines – that may be ignored by gamers. In fact, the sandbox game’s nonlinear nature creates storyline challenges for game designers. For this reason, tasks and side missions usually follow a progression, where new tasks are unlocked upon successful completion of earlier ones.
Sandbox game types vary. Massively multiplayer online role-playing games (MMORPGs) generally include a mixture of sandbox and progression gaming and heavily depend on emergent interactive user gameplay to retain non-progression-focused gamers. Modern “beat ’em ups” and first-person shooters have delved more deeply into the sandbox realm with titles like the “Grand Theft Auto” series, “Red Dead Redemption,” “Assassin’s Creed” and others, allowing gamers to run and gun wherever the mood takes them.
In spite of their name, various sandbox games continue to impose restrictions at some stages of the game environment. This can be due to the game’s design limitations, or to short-run, in-game limitations, such as areas that remain locked until certain milestones are achieved.
What is Massively Multiplayer Online Role-Playing Game (MMORPG)?
A massively multiplayer online role-playing game (MMORPG) is a video game that takes place in a persistent state world (PSW) with thousands, or even millions, of players developing their characters in a role-playing environment. The virtual world in which the game takes place is never static. Even when a player is logged off, events are occurring across the world that may impact the player when he or she logs in again.
Unlike traditional console-based role-playing games where the overarching goal is to complete the game, MMORPGs depend on emergent game play based on the interactions of players and groups of players. Most MMORPGs still provide tasks and battles that get progressively harder, but the primary purpose of these is to help gamers build up their characters in terms of experience, abilities and wealth.
To keep the gaming experience evolving, MMORPGs allow players to form alliances, interact within the game, customize their avatars and even create some of the game content. Moreover, players who are not interested in entering dungeons and battles to build up their characters can still participate in the game by setting up shops in villages and cities to contribute to the authenticity of the game’s world.
MMORPGs also have their own economies, where players can use the virtual currency they’ve earned in battles to buy items. This virtual economy has crossed over into the real world in some areas. For example, MMORPG players have exchanged real currency for items and virtual currency. In some instances, players seeking to level-up their characters more quickly have employed farmers – gamers who play as another person’s character – who work to earn experience points for their employers while they are logged off.
What is a Hybrid Workforce?
A hybrid workforce is a type of blended workforce comprising employees who work remotely and those who work from an office or central location. This way, employees can work from the places they want to, either in a central location, such as a warehouse, factory or retail location, or in a remote location, such as the home.
However, a hybrid workforce isn’t just about working from home or working from the office; rather, it’s about helping employees achieve a flexible work-life balance.
Hybrid workforces enable employees to work in a setting that’s most comfortable for them. If workers feel they are more productive in one location versus another, they can choose to work in that environment — or work in a combination of the two.
The hybrid workplace model has also put the health, safety and psychological needs of workers first by allowing for social distancing during the COVID-19 pandemic. A survey from Enterprise Technology Research (ETR) expected the number of employees working from home to double in 2021 and to remain elevated in the eventual post-pandemic landscape. Additionally, at its Directions 2021 virtual conference, IDC predicted that most employees will have a choice about adopting a hybrid workplace model, with about 33% of employees still having to work on site full time.
“Hybrid workplace models blend remote work with in-office work. Instead of structuring work around desks in a physical office space, hybrid work generally enables employees to structure work around their lives.” – Linda Rosencrance
Related Terms: human capital management (HCM), diversity, equity and inclusion (DEI), team collaboration tools, contingent workforce, workforce management
What is the Gig Economy?
A gig economy is a free market system in which temporary positions are common and organizations hire independent workers for short-term commitments. The term “gig” is a slang word for a job that lasts a specified period of time; it is typically used by musicians. Examples of gig employees in the workforce could include work arrangements such as freelancers, independent contractors, project-based workers and temporary or part-time hires.
There has been a trend toward a gig economy in recent years. There are a number of forces behind the rise in short-term jobs. For one, the workforce is becoming more mobile and work can increasingly be done remotely via digital platforms. As a result, job and location are being decoupled. That means that freelancers can select among temporary jobs and projects around the world, while employers can select the best individuals for specific projects from a larger pool than what’s available in any given area.
Digitization has also contributed directly to a decrease in jobs as software replaces some types of work to maximize time efficiency. Other influences include financial pressures on businesses leading to a flexible workforce and the entrance of the millennial generation into the labor market. People tend to change jobs several times throughout their working lives, especially millennials, and the gig economy can be seen as an evolution of that trend.
The gig economy is part of a shifting cultural and business environment that also includes the sharing economy, the gift economy and the barter economy. The cultural impact of the gig economy continues to evolve; the COVID-19 pandemic that began in 2020, for example, has had a large influence on it.
“In a gig economy, businesses save resources in terms of benefits, office space and training. They also have the ability to contract with experts for specific projects who might be too high-priced to maintain on staff.” – Alexander S. Gillis
Related Terms: hybrid workforce, contingent workforce, ghost worker, fractional CIO, workforce planning
What is Video Conferencing?
Video conferencing is a live, visual connection between two or more remote parties over the internet that simulates a face-to-face meeting. Video conferencing is important because it joins people who would not normally be able to form a face-to-face connection.
At its simplest, video conferencing provides transmission of static images and text between two locations. At its most sophisticated, it provides transmission of full-motion video images and high-quality audio between multiple locations.
In the business world, desktop video conferencing is a core component of unified communications (UC) applications and web conferencing services, while cloud-based virtual meeting room services enable organizations to deploy video conferencing with minimal infrastructure investment.
The video conferencing process can be split into two steps: compression and transfer.
During compression, the webcam and microphone capture analog audiovisual (AV) input. The data collected is in the form of continuous waves of frequencies and amplitudes. These represent the captured sounds, colors, brightness, depth and shades. In order for this data to be transferred over a normal network — instead of requiring a network with massive bandwidth — codecs must be used to compress the data into digital packets. This enables the captured AV input to travel faster over broadband or Wi-Fi internet.
During the transfer phase, the digitally compressed data is sent over the digital network to the receiving computer. Once it reaches the endpoint, the codecs decompress the data. The codecs convert it back into analog audio and video. This enables the receiving screen and speakers to correctly view and hear the AV data.
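A loose sketch of the compress-then-transfer idea, substituting general-purpose compression for a real AV codec (production systems use codecs such as H.264 or Opus and transmit packets over the network rather than round-tripping in memory):

```python
import zlib

# Stand-in for a captured video frame: highly redundant pixel data.
frame = bytes([30, 60, 90, 120] * 40_000)  # 160,000 raw "sample" bytes

packet = zlib.compress(frame)          # "codec" compresses before transfer
print(len(frame), "->", len(packet))   # far fewer bytes cross the network

received = zlib.decompress(packet)     # endpoint codec decompresses
assert received == frame               # screen and speakers get the original data
```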
“When the pandemic hit and everyone shifted to video, the major concern was simply connecting so that business could continue. But after almost a year of seeing ourselves looking like ghouls with bad lighting, or just occupying that same boring video box, people are looking to jazz things up.” – David Maldow
Related Terms: Unified Communications, compression, codec, virtual meeting room, latency
What is Hybrid Encryption?
Hybrid encryption is a mode of encryption that merges two or more encryption systems. It incorporates a combination of asymmetric and symmetric encryption to benefit from the strengths of each form of encryption. These strengths are respectively defined as speed and security.
Hybrid encryption is considered a highly secure type of encryption as long as the public and private keys are fully secure.
A hybrid encryption scheme is one that blends the convenience of an asymmetric encryption scheme with the effectiveness of a symmetric encryption scheme.
Hybrid encryption is achieved by transferring data using unique session keys along with symmetric encryption. Public key encryption is used to encrypt a randomly generated symmetric session key. The recipient then uses the matching private key to decrypt the session key. Once the symmetric key is recovered, it is used to decrypt the message.
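A minimal sketch of that flow using Python’s cryptography package, with Fernet standing in for the symmetric cipher and RSA-OAEP wrapping the session key (the message and key sizes are illustrative):

```python
from cryptography.fernet import Fernet  # pip install cryptography
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Recipient's long-term asymmetric key pair.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Sender: encrypt the message with a fresh symmetric session key,
# then wrap that session key with the recipient's public key.
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(b"meet at the usual place")
wrapped_key = public_key.encrypt(session_key, oaep)

# Recipient: recover the session key with the private key, then decrypt.
recovered_key = private_key.decrypt(wrapped_key, oaep)
assert Fernet(recovered_key).decrypt(ciphertext) == b"meet at the usual place"
```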
The combination of encryption methods has various advantages. One is that a connection channel is established between two users’ sets of equipment. Users then have the ability to communicate through hybrid encryption. Asymmetric encryption can slow down the encryption process, but with the simultaneous use of symmetric encryption, both forms of encryption are enhanced. The result is the added security of the transmittal process along with overall improved system performance.
What is a Project Charter (PC)?
A project charter (PC) is a document that states a project exists and provides the project manager with written authority to begin work.
The document helps the project manager to communicate his authority and explain to project participants and stakeholders why the project is needed, who it involves, how long the project will take to complete, how much it will cost, what resources are needed and how successful completion of the project will help the organization. Once created, the document is rarely (if ever) amended.
Depending on a company’s culture and management style, a charter may serve the same purpose as a business case. In a large corporation, the charter may be a multi-page document written up by mid-level management after a business case has been approved but before the project scope has been defined. In a small startup company, however, the charter might just be a few paragraphs with bulleted items and the company president’s signature.
“What usually happens – and is perfectly fine – is that the project manager will write an outline or even the charter itself, and ask the sponsor to review it.” – Tamar Freundlich
Related Terms: project management framework, project charter, PMP, PMO, CAPM
What is Advanced Encryption Standard (AES)?
The Advanced Encryption Standard (AES) is a symmetric-key block cipher algorithm and U.S. government standard for secure and classified data encryption and decryption.
In December 2001, the National Institute of Standards and Technology (NIST) approved the AES as Federal Information Processing Standards Publication (FIPS PUB) 197, which specifies application of the Rijndael algorithm to sensitive electronic data.
The Advanced Encryption Standard was originally known as Rijndael.
The AES comprises three block ciphers, each with a fixed 128-bit block size and a cryptographic key of 128, 192 or 256 bits. The underlying Rijndael design supports additional key and block sizes in 32-bit increments up to 256 bits, but the AES standard fixes the block size at 128 bits. The AES design is based on a substitution-permutation network (SPN) and does not use the Feistel network found in the Data Encryption Standard (DES).
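A brief sketch using the AES-GCM implementation in Python’s cryptography package to exercise all three standardized key sizes (the plaintext and the choice of GCM mode are illustrative; the block size stays 128 bits throughout):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

for bits in (128, 192, 256):            # the three AES key sizes
    key = AESGCM.generate_key(bit_length=bits)
    nonce = os.urandom(12)              # must be unique per message
    aes = AESGCM(key)
    ct = aes.encrypt(nonce, b"sensitive data", None)
    assert aes.decrypt(nonce, ct, None) == b"sensitive data"
    print(f"AES-{bits}: ok, ciphertext {len(ct)} bytes")
```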
In 1997, the NIST initiated a five-year algorithm development process to replace the DES and Triple DES. The NIST algorithm selection process facilitated open collaboration and communication and included a close review of 15 candidates. After an intense evaluation, the Rijndael design, created by two Belgian cryptographers, was the final choice.
The AES replaced the DES with new and updated features:
- Block encryption implementation
- 128-bit block encryption with 128-, 192- and 256-bit key lengths
- Symmetric algorithm requiring only one encryption and decryption key
- Data security for 20-30 years
- Worldwide access
- No royalties
- Easy overall implementation
What is Encryption Algorithm?
An encryption algorithm is a component of electronic data transport security. Defined mathematical steps are applied when developing algorithms for encryption purposes, and varying block ciphers are used to encrypt electronic data or numbers.
Encryption algorithms help prevent data fraud, such as that perpetrated by hackers who illegally obtain electronic financial information. These algorithms are a part of any company’s risk management protocols and are often found in software applications.
Encryption algorithms assist in the process of transforming plain text into encrypted text, and then back to plain text for the purpose of securing electronic data when it is transported over networks. By coding or encrypting data, hackers or other unauthorized users are generally unable to access such information. Some encryption algorithms are considered faster than others, but as long as algorithm developers, many of whom have math backgrounds, stay on top of advancements in this technology, this type of encryption should continue to flourish as hackers continue to become more sophisticated.
RSA, developed in 1977 by MIT researchers Ron Rivest, Adi Shamir and Leonard Adleman, was one of the first encryption algorithms. RSA has had ample staying power, as it is still widely used for digital signatures and public key encryption. Encryption keys can vary in length, and the strength of an algorithm is usually directly proportional to the length of its key.
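Because the entry mentions digital signatures, here is a hedged sketch of RSA signing and verification with Python’s cryptography package (the message is arbitrary, and PSS is one recommended padding choice, not the only one):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding  # pip install cryptography

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
signature = key.sign(b"quarterly report v3", pss, hashes.SHA256())

# verify() raises InvalidSignature if the message or signature was altered.
key.public_key().verify(signature, b"quarterly report v3", pss, hashes.SHA256())
print("signature verified")
```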
What is Big Data?
Big data is a combination of structured, semistructured and unstructured data collected by organizations that can be mined for information and used in machine learning projects, predictive modeling and other advanced analytics applications.
Systems that process and store big data have become a common component of data management architectures in organizations. Big data is often characterized by the 3Vs: the large volume of data in many environments, the wide variety of data types stored in big data systems and the velocity at which the data is generated, collected and processed. These characteristics were first identified by Doug Laney, then an analyst at Meta Group Inc., in 2001; Gartner further popularized them after it acquired Meta Group in 2005. More recently, several other Vs have been added to different descriptions of big data, including veracity, value and variability.
Although big data doesn’t equate to any specific volume of data, big data deployments often involve terabytes (TB), petabytes (PB) and even exabytes (EB) of data captured over time.
Companies use the big data accumulated in their systems to improve operations, provide better customer service, create personalized marketing campaigns based on specific customer preferences and, ultimately, increase profitability. Businesses that utilize big data hold a potential competitive advantage over those that don’t since they’re able to make faster and more informed business decisions, provided they use the data effectively.
For example, big data can provide companies with valuable insights into their customers that can be used to refine marketing campaigns and techniques in order to increase customer engagement and conversion rates.
“Ultimately, the value and effectiveness of big data depend on the workers tasked with understanding the data and formulating the proper queries to direct big data analytics projects.” – Bridget Botelho
Related Terms: big data analytics, data types, unstructured data, semi-structured data, structured data, 5 V’s of big data
What is Deep Learning?
Deep learning is a collection of algorithms used in machine learning to model high-level abstractions in data through model architectures composed of multiple nonlinear transformations. It is part of a broad family of machine learning methods based on learning representations of data.
Deep learning is a specific approach used for building and training neural networks, which are considered highly promising decision-making nodes. An algorithm is considered to be deep if the input data is passed through a series of nonlinearities or nonlinear transformations before it becomes output. In contrast, most other machine learning algorithms are considered “shallow” because the input passes through only a few levels of subroutine calls.
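A toy sketch of what “deep” means here: input passed through several nonlinear transformations before it becomes output. The weights below are random and untrained, purely for illustration:

```python
import numpy as np

def relu(x):
    """A common nonlinearity: zero out negative values."""
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 8))  # one input example with 8 features

# Three layers of (weights, biases): each applies a nonlinear transformation.
layers = [(rng.normal(size=(8, 16)), np.zeros(16)),
          (rng.normal(size=(16, 16)), np.zeros(16)),
          (rng.normal(size=(16, 4)), np.zeros(4))]

h = x
for w, b in layers:
    h = relu(h @ w + b)  # nonlinear transformation, stacked in depth

print(h.shape)  # (1, 4) -- the "deep" output after three nonlinearities
```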
Deep learning removes the manual identification of features in data and, instead, relies on the training process to discover useful patterns in the input examples. This makes training the neural network easier and faster, and it can yield better results.
What is CompTIA Security+?
CompTIA Security+ is a certification for basic security practices and functions.
The Computing Technology Industry Association (CompTIA) advertises this security certification as one of the first security-based certifications information technology professionals should earn.
CompTIA does not set prerequisites for the Security+ certification. There are no age or educational requirements, but CompTIA does recommend that a candidate has at least two years of IT administration experience with a focus on security. This certification exam can be taken online or in person at a designated test center. The test takes 90 minutes to complete and requires a passing score of 750 out of 900.
The CompTIA Security+ certification is good for three years and covers subject areas such as:
- Threats and vulnerabilities
- Risk management and mitigation
- Threat detection and management
- Network security
- Application and host security
- Operational security
- Access control
- Identity management
- Cryptography
It is important to note that CompTIA Security+ gets updated every three years to meet any changes to industry needs. Certification renewals help ensure that IT pros have the skills needed for modern cybersecurity jobs.
“The CompTIA Security+ exam focuses on best practices for risk management and mitigation…Anyone looking to start their path in the security field should consider beginning their journey with the CompTIA Security+ certification.” – Alexander S. Gillis
Related Terms: incident response, penetration testing, risk mitigation, IAM, cloud engineer, access control
What is Advanced Encryption Standard (AES)?
The Advanced Encryption Standard (AES) is a symmetric-key block cipher algorithm and a U.S. government standard for securing sensitive data through encryption and decryption.
In December 2001, the National Institute of Standards and Technology (NIST) approved the AES as Federal Information Processing Standards Publication (FIPS PUB) 197, which specifies the Rijndael algorithm for protecting sensitive electronic data.
The Advanced Encryption Standard was originally known as Rijndael.
The AES comprises three block ciphers (AES-128, AES-192 and AES-256), each of which encrypts data in fixed 128-bit blocks using a cryptographic key of 128, 192 or 256 bits, respectively. The underlying Rijndael design supports additional block and key sizes in 32-bit increments, but AES fixes the block size at 128 bits. The AES design is based on a substitution-permutation network (SPN) and does not use the Feistel network found in the Data Encryption Standard (DES).
In 1997, NIST initiated a five-year algorithm development process to replace DES and Triple DES. The selection process facilitated open collaboration and communication and included a close review of 15 candidates. After an intense evaluation, the Rijndael design, created by Belgian cryptographers Joan Daemen and Vincent Rijmen, was the final choice.
The AES replaced the DES with new and updated features:
- Block encryption implementation
- 128-bit block encryption with 128, 192 and 256-bit key lengths
- Symmetric algorithm requiring only one encryption and decryption key
- Data security for 20-30 years
- Worldwide access
- No royalties
- Easy overall implementation
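As a concrete illustration of the single-key property, here is a minimal sketch of AES-256 encryption in Python using the third-party cryptography package (an assumed dependency). The authenticated GCM mode, the 12-byte nonce and the sample message are illustrative choices, not requirements of the standard itself.

```python
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Generate a random 256-bit AES key; the same key encrypts and decrypts.
key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

# GCM requires a unique nonce per message; 12 bytes is the usual size.
nonce = os.urandom(12)
plaintext = b"sensitive data"

ciphertext = aesgcm.encrypt(nonce, plaintext, associated_data=None)
recovered = aesgcm.decrypt(nonce, ciphertext, associated_data=None)
assert recovered == plaintext
```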
What is Project Management Office (PMO)?
A project management office (PMO) is a group or department within a business, agency or enterprise that defines and maintains standards for project management within the organization.
The primary goal of a PMO is to achieve benefits from standardizing and following project management processes, policies and methods. For the office to be most effective, it should embody the organization’s culture and strategy.
The PMO has grown in popularity as more companies with such offices have seen returns on their investment.
Nearly seven in 10 organizations globally have a PMO, a figure that has remained constant for five consecutive years, according to the 2016 Pulse of the Profession report by the Project Management Institute (PMI).
“Whether it’s software development projects, network infrastructure deployments, or PC and system upgrades, there’s a lot of opportunity across the IT landscape for those with the right IT project management certifications and courses under their belt.” – Steve Zurier
Related Terms: Agile project management, project planning, project management framework, project management professional, Project Management Body of Knowledge
What is Data Encryption Standard (DES)?
The Data Encryption Standard (DES) is a longstanding data encryption standard and a form of secret key cryptography (SKC), which uses only one key for both encryption and decryption. Public key cryptography (PKC), by contrast, uses two keys: one for encryption and one for decryption.
In 1972, the National Bureau of Standards (NBS) approached the Institute for Computer Sciences and Technology (ICST) to devise an encryption algorithm to secure stored and transmitted data. The algorithm would be publicly available, but its key would be top secret.
The National Security Agency (NSA) assisted with the cryptographic algorithm evaluation processes, and in 1973, submission invitations were posted in the Federal Register. However, the submissions were unacceptable. In 1974, a second invitation was posted, which resulted in a submission from IBM. In 1975, technical specifications were published for comments in the Federal Register, and analysis and review commenced. In 1977, NBS issued the algorithm, i.e., DES, as Federal Information Processing Standards (FIPS) 46.
Shortly thereafter, the U.S. Department of Defense (DoD) implemented DES. Specifications are outlined in FIPS publication 46-3, FIPS 81, ANSI X3.92 and ANSI X3.106. For security reasons, the U.S. government has never authorized exports of this encryption software.
A 56-bit DES key yields at least 72 quadrillion possible keys. NIST recertified DES in 1993, but the short key length left it vulnerable to brute-force attacks, and the Advanced Encryption Standard (AES) later became its official replacement.
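The 72 quadrillion figure follows directly from the cipher’s 56-bit effective key length; a quick arithmetic check in Python:

```python
# DES keys are 64 bits, but 8 are parity bits, leaving 56 effective bits.
des_keys = 2 ** 56
print(f"{des_keys:,}")  # 72,057,594,037,927,936 -- about 72 quadrillion

# For comparison, even the smallest AES key space is vastly larger.
aes128_keys = 2 ** 128
print(f"{aes128_keys // des_keys:,}")  # 2**72, about 4.7 sextillion times more
```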
What is Automatic machine learning (AutoML)?
Automatic machine learning (AutoML) is a general discipline that involves automating any part of the end-to-end process of applying machine learning. By working on the various stages of this process, engineers develop solutions that expedite, enhance and automate parts of the machine learning pipeline.
Automatic machine learning is also known as automated machine learning.
Some automatic machine learning techniques and tools are geared toward expediting and automating data preparation, the aggregation and cleaning of data from various sources. Others are aimed at feature engineering; feature selection and feature extraction are a big part of how machine learning algorithms work, and automating them can further improve the machine learning design process.
Another part of automatic machine learning is hyperparameter optimization, which can be done through various means. Engineers can use metaheuristic techniques like simulated annealing, among other methods. The bottom line is that automatic machine learning is a broad catch-all term for any technique or effort to automate any part of the end-to-end machine learning process.
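As one small, concrete example, hyperparameter search (one stage of that pipeline) can be automated with scikit-learn, an assumed dependency here; the random forest estimator, the parameter ranges and the iris dataset are illustrative choices, not a full AutoML system.

```python
from scipy.stats import randint
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_iris(return_X_y=True)

# Randomly sample hyperparameter combinations instead of tuning by hand.
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={
        "n_estimators": randint(10, 200),
        "max_depth": randint(2, 10),
    },
    n_iter=20,
    cv=5,
    random_state=0,
)
search.fit(X, y)

print(search.best_params_)          # best sampled hyperparameters
print(round(search.best_score_, 3))  # cross-validated accuracy
```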
Free Tools
perfSONAR is an open-source network measurement toolkit that provides visibility into the nuances of your network to help with debugging. It offers federated coverage of paths and helps establish end-to-end usage expectations.
The Security Content Automation Protocol (SCAP) Compliance Checker is a tool for evaluating the hardening of your machines. It used to be available only for DoD, government or contractor use but was recently released to the public by DISA. This automated program scans a machine (locally or remotely) to determine its security posture based on Security Technical Implementation Guides (STIGs), the checklists that identify what constitutes an open or closed vulnerability and how to remediate it.
Squid is a caching proxy for the Web that supports HTTP, HTTPS, FTP and more to reduce bandwidth and improve response times. It can route content requests to servers in a wide variety of ways to build cache server hierarchies that optimize network throughput. Offers extensive access controls and runs on most available operating systems.
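As a quick illustration of what a caching proxy looks like from the client side, here is a minimal Python sketch that routes traffic through a Squid instance, assuming one is already running on Squid’s default port 3128 (the requests library is an assumed dependency):

```python
import requests

# Assumes a Squid instance is already listening on its default port, 3128.
proxies = {
    "http": "http://localhost:3128",
    "https": "http://localhost:3128",
}

# Repeated requests for the same URL can be served from Squid's cache.
response = requests.get("http://example.com/", proxies=proxies)
print(response.status_code)
print(response.headers.get("X-Cache", "no cache header"))  # e.g. HIT or MISS
```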
gProfiler is an easy-to-use, open-source tool that produces a unified visualization of what your CPU is working on, displaying stack traces of your processes across native programs, Java and Python runtimes, and kernel routines. It’s a lightweight combination of different sampling profilers that requires minimal overhead, so it can be truly continuous. You can even upload results to the Granulate Performance Studio, which aggregates results from different instances over different periods to provide a holistic view of what is happening on your entire cluster. Comes with a premade container image and needs no code changes to get started.
Network UPS Tools provides support for assorted power devices, like UPSs and PSUs. It offers many control and monitoring features and a uniform control and management interface. Covers over 140 manufacturers and thousands of power device models.
CherryTree is a hierarchical, wiki-style notetaking application for organizing your notes, bookmarks, source code, and more. Features rich text, syntax highlighting, and the ability to prioritize information.
Kate is a feature-packed editor for viewing and editing text files. Offers a wide variety of plugins, including an embedded terminal that can launch console commands, a powerful search and replace, on-the-fly spellcheck, and a preview that shows how your MD, HTML, and SVG will look. Supports highlighting for 300+ languages, understands how brackets work, and helps you navigate complex code block hierarchies.
Apcupsd is designed for power management and control of most of APC’s UPS models on Unix and Windows machines. During a power failure, it notifies users that a shutdown may occur. If power is not restored, a system shutdown will follow when the battery is exhausted, a timeout (seconds) expires, or runtime expires based on internal APC calculations determined by power consumption rates. Apcupsd works with most of the Smart-UPS models and most simple signaling models such as Back-UPS and BackUPS-Office.
Pping measures the round-trip delay that application packets experience relative to any capture point on the connection’s path, using the naturally occurring reflected signal that the TCP timestamp option provides. These delays are collected per TCP connection, with outbound packets providing the signal and inbound packets the reflection; from the monitored packets, pping derives the delay of two different round trips.
ipcalc is a simple way to calculate the broadcast, network, Cisco wildcard mask and host range for any IP address/netmask—presenting the subnetting results in easy-to-understand binary values.
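For a sense of the values such a calculator derives, Python’s standard ipaddress module can compute the same results; the address and mask below are arbitrary examples:

```python
import ipaddress

# An arbitrary example address with a /26 mask.
net = ipaddress.ip_network("192.168.1.37/26", strict=False)

print("Network:  ", net.network_address)       # 192.168.1.0
print("Broadcast:", net.broadcast_address)     # 192.168.1.63
print("Netmask:  ", net.netmask)               # 255.255.255.192
print("Wildcard: ", net.hostmask)              # 0.0.0.63 (Cisco wildcard mask)
hosts = list(net.hosts())
print("Hosts:    ", hosts[0], "-", hosts[-1])  # 192.168.1.1 - 192.168.1.62
```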
TortoiseGit is an open-source Windows Shell Interface to Git that offers overlay icons showing the file status, a powerful context menu for Git and more. Works with whatever development tools you like and with any type of file. The primary means of interaction with TortoiseGit is through the context menu of Windows Explorer.
IPinfo allows you to quickly pinpoint user locations, customize their experiences, prevent fraud, ensure compliance and more.
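For example, a minimal unauthenticated lookup against the service’s JSON endpoint can be done from the Python standard library; the IP below is arbitrary, and production use would normally attach an API token:

```python
import json
from urllib.request import urlopen

# Unauthenticated lookups work for light use; an API token lifts rate limits.
with urlopen("https://ipinfo.io/8.8.8.8/json") as resp:
    info = json.load(resp)

print(info.get("city"), info.get("region"), info.get("country"))
```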
Blog
PrajwalDesai.com is the place where the author, a Microsoft MVP and server technology expert, shares his knowledge and helpful technical information. You’ll find lots of posts and videos on SCCM, Lync, Exchange and more, with detailed explanations including screenshots when appropriate to make solutions easier to deploy.
List
Awesome Network Automation is a curated list of fantastic network automation resources that is a real treasure trove for anyone looking for a convenient way to find useful information on network automation.
Training Resources
Pluralsight Skills offers thousands of courses, skill assessments, and more on technology topics like software development, cloud, AI/ML, DevOps, and security—and this month, the entire catalog is free.
How to Become a PowerCLI Superhero – 5 Use-Cases to Get Started is a free webinar that will be hosted by two VMware vExperts on April 21. The event is being produced by Hornetsecurity’s Altaro team and will feature live demonstrations of highly useful applications of PowerCLI, a VMware automation tool. All attendees will receive a free copy of an accompanying 100+ page PowerCLI eBook.
Tutorials
CsPsProtocol offers a collection of simplified tutorials on core technology topics, including networking, programming, telecom, IoT, and more. The helpful content is original and not available elsewhere.
Linux Upskill Challenge is a month-long course for those who want to work in Linux-related jobs. The course focuses on servers and command line, but it assumes essentially no prior knowledge and progresses gently. This valuable content was offered as a paid course in the past but is now free and fully open source.
Websites
urlscan allows you to scan and analyze websites by submitting a URL to find out if it is targeting users. It automatically assesses the domains and IPs contacted, the resources (JavaScript, CSS, etc.) requested from those domains and additional information about the page itself. It then takes a screenshot of the page and records the DOM content, JavaScript global variables, cookies created and a lot of other details.
MITRE ATT&CK is a globally accessible knowledge base of adversary tactics and techniques compiled from real-world observations. It is intended to fuel the development of threat models and methodologies in the private sector, in government and across the cybersecurity product and service community.