CISSP - Work note

CISSP: What is it?

CISSP is a certification created by (ISC)² in 1994. Its goal is to validate knowledge of all the domains covered in the Common Body of Knowledge (sometimes called the CBK). The CISSP covers the following domains:
  1. Security and Risk Management
  2. Asset Security
  3. Security Architecture and Engineering
  4. Communication and Network Security
  5. Identity and Access Management (IAM)
  6. Security Assessment and Testing
  7. Security Operations
  8. Software Development Security

Requirements:

In 2018, there were almost 128,000 CISSPs worldwide.

The Exam

The exam is more about theory and concepts, but sometimes you have to know some technical aspects, especially in cryptography and networking. It's vendor neutral, but vendors or products are sometimes mentioned.
The exam is taken through a piece of software, CISSP-CAT (Computerized Adaptive Testing). The number of questions is between 100 and 150, and the exam must be completed in 3 hours maximum. Questions have to be answered as they come; it's not possible to go back to a previously encountered question. The CAT picks questions from a large question bank depending on the answers. If the candidate is weak in a domain, the CAT will send more questions about this domain. The questions are made of 4 choices with a single right answer; sometimes multiple answers are needed, but it's specified.
It's possible to attempt the exam 3 times per year, with at least 30 days between each try. The cost of the exam in 2018 is around €650, $699 or £560. Multiple languages are available, but as most of the resources and vocabulary are in English, it's a good choice to take it in English.
Bring an ID card and your voucher.

Advice

1 - Security and Risk Management

Operation Security Triplets

Due Care is using reasonable care to protect the interests of an organization. Due care is a legal liability concept that defines the minimum level of information protection that a business must achieve.
Due Diligence is practicing the activities that maintain the due care effort. Practicing due diligence is a defense against negligence.

Risk Management

To calculate a risk, a basic formula is:
Risk = Threats (number) x Vulnerabilities (number) x Impact (cost in dollars if the asset is lost).
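
A minimal sketch of this formula in Python (all the numbers below are hypothetical examples):

```python
# Basic risk formula sketch: every value here is a made-up example.
threats = 3            # number of threats against the asset
vulnerabilities = 2    # number of vulnerabilities those threats can exploit
impact = 10_000        # cost in dollars if the asset is lost

risk = threats * vulnerabilities * impact
print(f"Risk exposure: ${risk}")  # Risk exposure: $60000
```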

Different Planning

Threat Modeling

Threat modeling is the process of identifying, understanding, and categorizing potential threats, including threats from attack sources.

Risk Assessment

MTD (Maximum Tolerable Downtime) is a measurement that indicates how long the company can operate without a specific resource. General MTD estimates are:

Defense in Depth: The idea behind the defense in depth approach is to defend a system against any particular attack using several independent methods. It is a layering tactic, conceived by the National Security Agency (NSA) as a comprehensive approach to information and electronic security. For example, using a cage, a firewall, an antivirus and an IDS for a server is defense in depth. Even using different types of control (physical, logical and administrative) is an example of defense in depth.

Standards, Baselines, Policies, Procedures

Access Control

Each of the access control categories can be of one of the following types:

Employee Data

In the European Union, the following principles must be applied with regard to the data collected by an organization about its employees:

GDPR and Privacy Shield


GDPR is a regulation in EU law on data protection and privacy for all individuals within the European Union (EU) and the European Economic Area (EEA). It also addresses the export of personal data outside the EU and EEA areas. The GDPR aims primarily to give control to individuals over their personal data and to simplify the regulatory environment for international business by unifying the regulation within the EU.
These are the main items of the GDPR:

EU–US Privacy Shield :
In October 2015, the European Court of Justice declared the previous framework, called the International Safe Harbor Privacy Principles, invalid. Soon after this decision, the European Commission and the U.S. Government started talks about a new framework, and on February 2, 2016, they reached a political agreement. The European Commission published the "adequacy decision" draft, declaring the principles to be equivalent to the protections offered by EU law.

Intellectual Property

2 - Asset Security

IT asset management (ITAM) is the set of business practices that join financial, contractual and inventory functions to support life cycle management and strategic decision making for the IT environment. Assets include all elements of software and hardware that are found in the business environment.
IT asset management (also called IT inventory management) is an important part of an organization's strategy. It usually involves gathering detailed hardware and software inventory information which is then used to make decisions about hardware and software purchases and redistribution. IT inventory management helps organizations manage their systems more effectively and saves time and money by avoiding unnecessary asset purchases and promoting the harvesting of existing resources. Organizations that develop and maintain an effective IT asset management program further minimize the incremental risks and related costs of advancing IT portfolio infrastructure projects based on old, incomplete and/or less accurate information.
Inventory management deals with what assets are there, where they reside and who owns them.
Configuration management adds a relationship dimension, relating the items in the inventory to each other: this VM is on this ESX host, in this rack, for example.

The stages of the data management process:
  1. Capture/Collect
  2. Digitalization
  3. Storage
  4. Analysis
  5. Presentation
  6. Use
FIPS 199 helps organizations categorize their information systems.

Lists of criteria to classify data:
The U.S. government/military classification:
The commonly used commercial or private classification:

List of breach/vulnerability families

Data anonymization

To protect private information, it's common to modify data to make it harder, or impossible, to link with the original person.

Security Testing and Evaluation

FISMA requires every government agency to pass a Security Testing and Evaluation, a process that contains 3 categories:
  1. Management Controls focus on risk assessment; for example, doing a risk assessment every year is a management control.
  2. Operational Controls focus on processes executed by humans. Checking a policy and how it is enforced is an operational control.
  3. Technical Controls focus on processes executed or configured on a machine. Configuring systems to ask for a password change every 60 days is a technical control.

3 - Security Architecture and Engineering

Access Control Models
Security Models

Security Evaluation Methods

ITIL

ITIL is an operational framework created by the CCTA, at the request of the UK government, in the 1980s. ITIL provides documentation on IT best practices to improve performance and productivity and reduce costs.
It's divided into the 5 following main categories:

  1. Service Strategy
  2. Service Design
  3. Service Transition
  4. Service Operation
  5. Continual Service Improvement

Misc

ISO 27001 is derived from BS 7799. It's focused on Security Governance.

ISO 27002 is derived from BS 7799. It's a security standard that recommends security controls based on industry best practices.

It is to be noted that the CMM, while originally created for software development, can be adapted to handle the security management of a company. Each phase corresponds to a certain level of maturity in the documentation and the controls put in place.
The first phase, Initial, is where there is no process, no documentation and no control in place. The team replies to each incident by reacting to it.
At the last phase, Optimizing, the processes are sophisticated and the organization is able to adapt to new threats. Every step is covered in Chapter 8.

A Covert Timing Channel conveys information by altering the performance of a system component in a controlled manner. It's very difficult to detect this type of covert channel. A Covert Storage Channel is writing to a file accessible by another process. To avoid it, read/write access must be controlled.

A nonce, short for number used once, is an arbitrary number that can be used just once in a cryptographic communication. It is often a random or pseudo-random number issued in an authentication protocol to ensure that old communications cannot be reused in replay attacks. They can also be useful as initialization vectors and in cryptographic hash functions.

An initialization vector (IV) is an arbitrary number that can be used along with a secret key for data encryption. This number, also called a nonce, is employed only one time in any session.
The use of an IV prevents repetition in data encryption, making it more difficult for a hacker using a dictionary attack to find patterns and break a cipher.
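
A minimal sketch of this property with the pyca/cryptography package (an assumed third-party dependency; any AES implementation would do): the same plaintext encrypted under the same key but two different random IVs yields two different ciphertexts.

```python
# Same key, same plaintext, two random IVs -> two different ciphertexts.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)             # AES-256 key
plaintext = b"attack at dawn!!"

def encrypt(iv: bytes) -> bytes:
    enc = Cipher(algorithms.AES(key), modes.CTR(iv)).encryptor()
    return enc.update(plaintext) + enc.finalize()

iv1, iv2 = os.urandom(16), os.urandom(16)  # never reuse a nonce with a key
assert encrypt(iv1) != encrypt(iv2)
```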

DRAM uses capacitors to store information, unlike SRAM, which uses flip-flops. DRAM requires power to keep information and constantly needs to be refreshed, because the capacitors leak charge over time.
DRAM is cheaper but slower than SRAM.

CVE is the part of SCAP that provides a naming system to describe security vulnerabilities.
CVSS is a free and open industry standard for assessing the severity of computer system security vulnerabilities. CVSS attempts to assign severity scores to vulnerabilities, allowing responders to prioritize responses and resources according to threat. Scores are calculated based on a formula that depends on several metrics that approximate ease of exploit and the impact of exploit. Scores range from 0 to 10, with 10 being the most severe. The CVSS score is influenced by three groups of metrics:

  1. Base metrics indicate the severity of the vulnerability and are given by the vendor or the entity that found the vulnerability. They have the largest influence on the CVSS score.
  2. Temporal metrics indicate the urgency of the vulnerability; they are also given by the vendor or the entity that found the vulnerability.
  3. Environmental metrics are set by the end user. They indicate how an environment or end-user organization is impacted. They are optional.

The base metrics are used to calculate the temporal metrics, which are used to calculate the environmental metrics.
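
For reference, a small sketch mapping a numeric score to the CVSS v3 qualitative severity ratings:

```python
# CVSS v3 qualitative severity rating for a numeric score (0.0-10.0).
def severity(score: float) -> str:
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(severity(9.8))  # Critical
```
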
XCCDF is the SCAP component that describes security checklists.

SABSA Matrix :
  1. Contextual
  2. Conceptual
  3. Logical
  4. Physical
  5. Component
  6. Operational

Processor Ring

  Ring 0: Kernel
  Ring 1: OS components
  Ring 2: Device drivers
  Ring 3: Users
Applications in Ring 0 can access data in Rings 1, 2 and 3. Applications in Ring 1 can access data in Rings 2 and 3. Applications in Ring 2 can access data in Ring 3.

Boolean Operator

Cipher, Encryption, Hash, Protocol

Encryption

Hash

Encryption/Hash Summary

Name | Type | Key Length | Block Length | Hash Length | Remark
AES | Symmetric block cipher | 128, 192, 256 | 128 | |
Blowfish | Symmetric block cipher | 32–448 | 64 | |
Twofish | Symmetric block cipher | 128, 192, 256 | 128 | |
DES | Symmetric block cipher | 56 (+ 8 parity bits) | 64 | | DES has multiple modes, ranked from better to worse: CTR; OFB (the stream version of DES); CFB; CBC; ECB (the weakest, it leaves patterns in the ciphertext).
3DES | Symmetric block cipher | 56, 112, 168 | 64 | | 3DES is just DES applied 3 times. The key length depends on whether each DES round uses a different key: keying option 1 is DES(key1) + DES(key2) + DES(key3); option 2 is DES(key1) + DES(key2) + DES(key1); option 3 is DES(key1) + DES(key1) + DES(key1). As 3DES is vulnerable to the meet-in-the-middle attack, the maximum effective key length is 112 bits.
RSA | Asymmetric | variable | | | As of 2020, the minimum key should be 2048 bits. By 2030, it should be 3072 bits.
SHA-1 | Hash | | | 160 | Cryptographic weaknesses were discovered and the standard was no longer approved for most cryptographic uses after 2010.
SHA-2 | Hash | | | 224, 256, 384, 512 |
SHA-3 | Hash | | | 224, 256, 384, 512 |
MD5 | Hash | | 512 | 128 | Collisions can be computed in less than a second on a modern computer.
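
The hash lengths in the table can be checked with Python's standard hashlib module:

```python
# Print the digest size (in bits) of a few algorithms from the table above.
import hashlib

for name in ("md5", "sha1", "sha256", "sha512", "sha3_256"):
    h = hashlib.new(name, b"CISSP")
    print(f"{name}: {h.digest_size * 8} bits, digest {h.hexdigest()[:16]}...")
```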

Protocol/Standard

Cryptography remark

Symmetric algorithms have stronger encryption per key bit than asymmetric algorithms.
For example: AES > 3DES > RSA.

Key Clustering, in cryptography, is when two different keys generate the same ciphertext from the same plaintext using the same cipher algorithm. A good cipher algorithm, using different keys on the same plaintext, should generate different ciphertexts irrespective of the key length.

Zero-knowledge Proof is a method by which one party (the prover) can prove to another party (the verifier) that they know a value x, without conveying any information apart from the fact that they know the value x. The essence of zero-knowledge proofs is that it is trivial to prove that one possesses knowledge of certain information by simply revealing it; the challenge is to prove such possession without revealing the information itself or any additional information.
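
A toy illustration of the idea, using one round of the Schnorr identification protocol (the group parameters below are deliberately tiny and purely illustrative; real deployments use groups with roughly 256-bit prime order):

```python
# One round of the Schnorr identification protocol (toy parameters only).
import secrets

p, q, g = 23, 11, 2          # g generates a subgroup of prime order q mod p
x = 7                        # prover's secret
y = pow(g, x, p)             # public value: y = g^x mod p

r = secrets.randbelow(q)     # prover picks a random nonce...
t = pow(g, r, p)             # ...and sends the commitment t = g^r mod p
c = secrets.randbelow(q)     # verifier replies with a random challenge
s = (r + c * x) % q          # prover answers with s = r + c*x mod q

# The verifier checks g^s == t * y^c (mod p) and learns nothing about x.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof accepted")
```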

Certification and Accreditation

Trust comes first. Trust is built into a system by crafting the components of security. Then assurance (in other words, reliability) is evaluated using certification and/or accreditation processes.

Fire Extinguisher

There is no official standard in the United States for the color of fire extinguishers, though they are typically red, except for class D extinguishers which are usually yellow, water and Class K wet chemical extinguishers which are usually silver, and water mist extinguishers which are usually white.
Class | Intended use | Mnemonic
A | Ordinary combustibles (wood, paper, etc.) | Ash
B | Flammable liquids and gases | Barrel
C | Energized electrical equipment | Current
D | Combustible metals | Dynamite
K | Oils and fats | Kitchen

Gas-based fire suppression system

The Montreal Protocol (1989) limits the use of certain types of gas; Halon, for example, is forbidden. This is a list of gas-based fire suppression systems:

Pipe system

NFPA standard 75 requires buildings hosting information technology to be able to withstand at least 60 minutes of fire exposure.

Fence, Lighting

The NIST standard pertaining to perimeter protection states that critical areas should be illuminated eight feet (2.4 m) high with two foot-candles of illumination; the foot-candle is a unit that represents the illumination power of an individual light.

4 - Communication and Network Security

OSI Model

The OSI model is a conceptual model that characterizes and standardizes the communication functions of a telecommunication or computing system without regard to its underlying internal structure and technology. Its goal is the interoperability of diverse communication systems with standard protocols. The model partitions a communication system into abstraction layers. The original version of the model defined seven layers.
A layer serves the layer above it and is served by the layer below it. For example, a layer that provides error-free communications across a network provides the path needed by applications above it, while it calls the next lower layer to send and receive packets that comprise the contents of that path. Two instances at the same layer are visualized as connected by a horizontal connection in that layer.
  1. Physical
  2. Data Link (frame)
    Example of protocols : ATM, Frame-Relay, PPTP, L2TP
  3. Network (datagram/packet)
    Example of protocols : IPSec
  4. Transport
    The verification of packet (segment) delivery occurs at this layer.
    Example of protocols : TCP, UDP, TLS, SSL, SCTP, DCCP
  5. Session
    The session layer provides the mechanism for opening, closing and managing a session between end-user application processes, i.e., a semi-permanent dialog. Communication sessions consist of requests and responses that occur between applications.
    Example of protocols : SQL, RPC
  6. Presentation
    It's the first layer after the "packet state". Compression and encryption happen at the presentation layer, and character encoding too.
  7. Application
    It manages communication between applications. HTTP is used between a web server and a browser, for example.
    Example of protocols : HTTP, SMTP, DNS
A mnemonic sentence to remember the order of the OSI layers:
A | All | Application |
P | People | Presentation |
S | Seems | Session |
T | To | Transport | Segment (TCP), Datagram (UDP)
N | Need | Network | Packet
D | Data | Data Link | Frame
P | Processing | Physical |

A view of the frames, datagrams and segments:
L2: Data Link, frame
L3: Network, datagram/packet
L4: Transport, segment

TCP/IP Model

TCP/IP is the conceptual model and set of communications protocols used on the Internet and similar computer networks. It is commonly known as TCP/IP because the foundational protocols in the suite are the Transmission Control Protocol (TCP) and the Internet Protocol (IP). It's also modeled in layers:

TCP useful information


A Port scanner is an application designed to probe a server or host for open ports. Such an application may be used by administrators to verify security policies of their networks and by attackers to identify network services running on a host and exploit vulnerabilities.
A port scan or portscan is a process that sends client requests to a range of server port addresses on a host, with the goal of finding an active port; this is not a nefarious process in and of itself. The majority of uses of a port scan are not attacks, but rather simple probes to determine services available on a remote machine.
While a port scan or portscan is the action of checking all (or a defined list of) the TCP/UDP ports on one target, a portsweep is the action of checking one port on multiple servers.
The results of a port scan fall into one of the three following categories:
A list of scan methods:
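
A minimal TCP connect() scan sketch of the idea above (the host and port range are placeholders; only scan machines you are authorized to test):

```python
# Toy connect() scan: a completed TCP handshake means the port is open.
import socket

def connect_scan(host: str, ports: range) -> list[int]:
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)
            if s.connect_ex((host, port)) == 0:  # 0 means connection succeeded
                open_ports.append(port)
    return open_ports

print(connect_scan("127.0.0.1", range(20, 1025)))
```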

Ports from 0 to 1023 are system ports, or well-known ports.
Ports from 1024 to 49151 are registered ports, which are also called user ports. They are assigned by IANA but don't require escalated system privileges to be used.
Ports from 49152 to 65535 are dynamic ports.

FTP uses port 21 for authentication/control and port 20 for data.

In IPv6, FE80::/10 is used to create unicast link-local addresses.
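
Membership can be checked with Python's standard ipaddress module:

```python
# fe80::1 is a unicast link-local address, inside FE80::/10.
import ipaddress

addr = ipaddress.ip_address("fe80::1")
print(addr.is_link_local)                          # True
print(addr in ipaddress.ip_network("fe80::/10"))   # True
```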

Network Attack

A DDoS attack occurs when multiple systems flood the bandwidth or resources of a targeted system, usually one or more web servers. Such an attack is often the result of multiple compromised systems (for example, a botnet) flooding the targeted system with traffic. Some DDoS techniques below:

Pharming is a DNS attack that consists of sending a lot of forged entries to a DNS server. If a user requests the same entry that the attacker is trying to spoof, the DNS server may think the attacker's packets are in fact replies to the user's request.

Phreaking boxes are devices used by phone phreaks to perform various functions normally reserved for operators and other telephone company employees.
Most phreaking boxes are named after colors, due to folklore surrounding the earliest boxes which suggested that the first ones of each kind were housed in a box or casing of that color. However, very few physical specimens of phreaking boxes are actually the color for which they are named.
Today, most phreaking boxes are obsolete due to changes in telephone technology.

Bluetooth

Bluetooth uses FHSS; the implementation is named AFH.

The cipher used is named E0. It can use a key of up to 128 bits, but it has weaknesses: the key length doesn't improve security, and some attacks have shown that even with 128 bits, it can be cracked as if the key were only 32 bits.

Examples of Bluetooth attacks:
The different types of firewall:
Intrusion Detection Systems are devices or software that scan the network or the behavior of a system to detect a virus/malware or a forbidden action. There are different types of IDS/IPS:
IDS can use different detection methods. They often use both methods:

VoIP

Different attacks:

Cabling


NameStandardCableLengthRemark
100Base-FX802.3u-1995Fiber
1300nm
2kmIt's an old standard? 100BASE-FX is a version of Fast Ethernet over optical fiber.

WAN Line Type

Name | Bandwidth | Cable | Remark
T1 | 1.544 Mbps | 2 pairs of shielded copper wire |
E1 | 2.048 Mbps | 2 pairs of shielded copper wire |
T3 | 44.736 Mbps | |
E3 | 34.368 Mbps | |

WIFI

List of 802.11 protocols by frequency:
Protocol | 900 MHz | 2.4 GHz | 5 GHz | 5.9 GHz | 60 GHz | Modulation
802.11a | | | X | | | OFDM
802.11b | | X | | | | DSSS
802.11g | | X | | | | OFDM
802.11n | | X | X | | | OFDM
802.11ac | | | X | | |
802.11ad | | | | | X |
802.11af | | | | | |
802.11 | | X | | | |

List of security protocols by 802.11 protocol:
Cipher | WEP | WPA | WPA2
RC4 | X | |
TKIP | | X |
AES | | | X

To avoid collisions, 802.11 uses CSMA/CA, a mechanism where a device that wants to start a transmission sends a jam request before sending anything else. CSMA/CA also requires that the receiving device send an acknowledgement once the data is received. If the sender doesn't receive the acknowledgement, it tries to resend the data.

Message Integrity Check is a feature of WPA to prevent MITM attacks.

5 - Identity and Access Management (IAM)

In the U.S., two data-classification schemes are mostly used:

Subjects are active entities, users or programs, that manipulate an Object. A user (subject) requests an HTTP server (object).

Objects are passive, manipulated by Subjects. A database (object) is requested by a reporting program (subject).
It's important to note that an object in one situation can be a subject in another situation (and the opposite as well). If a user requests a DB, the user is the subject and the DB is the object. But the DB can request its software version manager to check for an update; in this case, the DB is the subject and the version manager is the object.

Need to know is a type of access management to a resource. For example, a user may have Top Secret clearance, but he will not be allowed to access all the data at the Top Secret level. He'll be granted access only to the data he is working on, the data he needs to know.

Least Privilege is the principle of allowing every module (such as a process, a user, or a program, depending on the subject) to access nothing except what it is allowed to access. A simple user must not be an administrator, and a web server should not be started as root.

Access Controls are the measures taken to allow only authorized subjects to access an object. Most of the time, they should allow authorized users and deny non-authorized users (or non-users...). It's one of the most important domains of the CISSP. Access Controls are separated into 3 categories: Administrative, Technical and Physical.

Permissions are different from rights in that permissions grant levels of access to a particular object on a file system, permission to read a file for example.

Rights grant users the ability to perform specific actions on a system, such as logging in, opening the administration panel, etc.



Authentication type


Performance Metrics in Biometrics
Biometric Methods
Biometrics function in two modes:
  1. Verification (or Authentication) mode: the system performs a one-to-one comparison of a captured biometric with a specific template stored in a biometric database in order to verify the individual is the person they claim to be.
    For example, the user says he is John Doe, and the system checks he really is John Doe.
  2. Identification mode: the user doesn't say who he is, so the system performs a one-to-many comparison against a biometric database in an attempt to establish the identity of an unknown individual. The system will succeed in identifying the individual if the comparison of the biometric sample to a template in the database falls within a previously set threshold. Identification mode can be used either for 'positive recognition' (so that the user does not have to provide any information about the template to be used) or for 'negative recognition' of the person, "where the system establishes whether the person is who she (implicitly or explicitly) denies to be".

Throughput, in biometric terms, is the time an authentication takes to complete.

Enrollment, in biometric terms, is the process of registering a user in the system, by saving his fingerprint for example.

A Cognitive Password is a form of knowledge-based authentication that requires a user to answer a question, presumably something they intrinsically know, to verify their identity. Typical questions are something like "What is the name of your first pet?", etc.

OAuth 2.0 is an open standard authorization framework defined in RFC 6749. OAuth 2.0 is not compatible with OAuth 1.0. It's used by sites that ask users to authenticate with Gmail or Facebook, for example.

Kerberos

Kerberos is an authentication protocol that functions within a realm and uses tickets. Users authenticate only once, so Kerberos is an SSO system. Kerberos uses UDP port 88 by default. Kerberos also requires user machines and servers to have a relatively accurate clock, because the TGT, the ticket given to an authenticated user by the KDC, is timestamped to avoid replay attacks.
Kerberos needs to keep users' passwords in clear.

Each time a client authenticates, it receives a TGT and a session key. The session key is encrypted with the client's secret key. When the client needs to access a resource in the realm, the client decrypts the session key and sends it, with the TGT, to the TGS. The TGS checks in its base whether the user is authorized to access the resource.

6 - Security Assessment and Testing

Audit


Key elements of an audit report :

IT staff may perform security assessments to evaluate the security of their systems and applications. Audits must be performed by internal or external auditors who are independent of the IT organization. Criminal investigations must be performed by certified law enforcement personnel.

The frequency of an IT infrastructure security audit or security review is based on risk. The existence of sufficient risk must be established to warrant the expense of, and interruption caused by, a security audit on a more or less frequent basis.
Asset value and threats are part of risk but are not the whole picture, and assessments are not performed based only on either of these. A high-value asset with a low level of threats doesn't present a high risk. Similarly, a low-value asset with a high level of threats doesn't present a high risk. The decision to perform an audit isn't usually relegated to an administrator, but to the management or security team.

Penetration Testing

  1. Reconnaissance is collecting all the available data about a target: DNS, IP addresses, all published sites, etc.
  2. Enumeration is scanning and trying to get the maximum of information (web server version, PHP version, etc.) from everything obtained in step 1.
  3. Vulnerability Analysis is searching for vulnerabilities in everything obtained in step 2.
  4. Execution is exploiting the vulnerabilities obtained in step 3.
  5. Reporting is done by ethical hackers (white hats). After having done steps 1, 2 and 3, they send a report to the target.

7 - Security Operations

These are the types of law that must be known to work in the IT security field:

Evidence

Different types of evidence:
The five rules of evidence:

To be admissible, evidence must be relevant, material, and competent.

About search warrants:

Electronic Discovery (also e-discovery or ediscovery) refers to discovery in legal proceedings such as litigation, government investigations, or Freedom of Information Act requests, where the information sought is in electronic format (often referred to as electronically stored information or ESI). Electronic discovery is subject to rules of civil procedure and agreed-upon processes, often involving review for privilege and relevance before data are turned over to the requesting party.

Electronic information is considered different from paper information because of its intangible form, volume, transience and persistence. Electronic information is usually accompanied by metadata that is not found in paper documents and that can play an important part as evidence (for example the date and time a document was written could be useful in a copyright case).

The EDRM is a ubiquitous diagram that represents a conceptual view of these stages involved in the e-discovery process.
  1. Identification
    The identification phase is when potentially responsive documents are identified for further analysis and review. To ensure a complete identification of data sources, data mapping techniques are often employed. Since the scope of data can be overwhelming in this phase, attempts are made to reduce the overall scope during this phase - such as limiting the identification of documents to a certain date range or search term(s) to avoid an overly burdensome request.
  2. Preservation
    A duty to preserve begins upon the reasonable anticipation of litigation. During preservation, data identified as potentially relevant is placed in a legal hold. This ensures that data cannot be destroyed. Care is taken to ensure this process is defensible, while the end-goal is to reduce the possibility of data spoliation or destruction. Failure to preserve can lead to sanctions. Even if the court ruled the failure to preserve as negligence, they can force the accused to pay fines if the lost data puts the defense "at an undue disadvantage in establishing their defense."
  3. Collection
    Once documents have been preserved, collection can begin. Collection is the transfer of data from a company to their legal counsel, who will determine relevance and disposition of data. Some companies that deal with frequent litigation have software in place to quickly place legal holds on certain custodians when an event (such as legal notice) is triggered and begin the collection process immediately. Other companies may need to call in a digital forensics expert to prevent the spoliation of data. The size and scale of this collection is determined by the identification phase.
  4. Processing
    During the processing phase, native files are prepared to be loaded into a document review platform. Often, this phase also involves the extraction of text and metadata from the native files. Various data culling techniques are employed during this phase, such as deduplication and de-NISTing. Sometimes native files will be converted to a petrified, paper-like format (such as PDF or TIFF) at this stage, to allow for easier redaction and bates-labeling. Modern processing tools can also employ advanced analytic tools to help document review attorneys more accurately identify potentially relevant documents.
  5. Review
    During the review phase, documents are reviewed for responsiveness to discovery requests and for privilege. Different document review platforms can assist in many tasks related to this process, including the rapid identification of potentially relevant documents, and the culling of documents according to various criteria (such as keyword, date range, etc.). Most review tools also make it easy for large groups of document review attorneys to work on cases, featuring collaborative tools and batches to speed up the review process and eliminate work duplication.
  6. Production
    Documents are turned over to opposing counsel, based on agreed-upon specifications. Often this production is accompanied by a load file, which is used to load documents into a document review platform. Documents can be produced either as native files, or in a petrified format (such as PDF or TIFF), alongside metadata.

Security Incident Management
The NIST has divided incident response into the following four steps:
  1. Preparation
  2. Detection and Analysis
  3. Containment, Eradication and Recovery
  4. Post-incident Activity
But these steps are usually divided into eight steps to give a better view of incident management.
  1. Preparation
    It's what the company or organization has done to train the team and users, buy the right software, configure the log collector and IDS/IPS: everything that could help detect and handle an incident. The checklist used to handle incidents is also part of the preparation.
  2. Detection
    Also called the identification phase, it is the most important part of incident management. The detection phase should include an automated system that checks the logs (which should be centralized in a SIEM). Users' awareness about security is a great asset too. Time is an important factor.
  3. Response
    Also called containment, this is the phase where the team interacts with the potential incident. The first step is to contain the incident by preventing it from affecting other systems.
    Depending on the situation, the response can be to disconnect the network, shut down the system, or isolate the system (by firewalling it or just preventing anyone from working with the affected system). This phase typically starts with forensically backing up the systems involved in the incident. Volatile memory capture and dumping is also performed in this step, before the system is powered off.
    Depending on the criticality of the affected systems, production can be heavily affected or maybe even stopped, so it is important to have management's approval. The response team will have to update management on the importance of the incident and the estimated time to resolution.
  4. Mitigation
    During this phase, the incident should be analyzed to find its root cause. If the root cause is not known, restoring the systems may allow the incident to occur again. Once the root cause is known, a way to prevent it from happening again must be applied; the systems can then be restored or rebuilt from scratch, to a state where the incident can't occur again. One of the important parts of this phase is to prevent this incident from happening on other systems. Changing the firewall rule set or patching the systems is often a solution.
  5. Reporting
    This phase starts at detection and finishes with the addition of the incident to the team's knowledge base. The reporting can take multiple forms depending on the audience of the communication.
    • For the non-technical people of the organization: a formatted mail explaining the problem without overly technical detail and, most importantly, the estimated time to recovery. If the users have to take action (for example, close the mail client), it should be explained (screen captures, etc.) so that people not familiar with computers can do it.
    • For the technical team: the communication should include details and the estimated time to recovery, and may involve the related teams in the resolution of the incident. A bridge call may have to be created.
    Depending on the criticality of the incident, management should be involved in the reporting. If the users have to leave the building quickly, for example, they may not take the request seriously if it comes from an unknown IT technician.
  6. Recovery
    During this phase, the system is restored, reinstalled, rebuilt, etc. Only the business unit responsible for the system has the authority to decide when the system should go back online or into production. Depending on the actions taken during mitigation, it's possible an infection persists in the system (if it was not rebuilt from scratch or restored, but the infection was just removed by the system's anti-virus, for example), so close monitoring should be applied to the system.
  7. Remediation
    This phase starts during the mitigation phase. Once the root-cause analysis is over, the vulnerabilities should be mitigated; remediation starts when mitigation ends. If the vulnerabilities are present in the system's recovery image, a new recovery image should be generated with the fix applied. All systems not affected by the incident but vulnerable should be patched, etc. The remediation phase should ensure that the vulnerabilities that caused the incident cannot affect any system in the organization.
  8. Lessons Learned
    This phase is the most neglected one, but it can prevent a lot of incidents from happening and accelerate the resolution of similar cases. The incident should be added to a knowledge base, the steps taken should be documented, and if users or members of the response team need training, it should be done. The Lessons Learned phase can greatly improve the Preparation phase, because a debriefing done after each incident (or at least each important one) lets the team refine its tools, checklists and training.

Configuration Management System

A CMS is a systems engineering process for establishing and maintaining consistency of a product's performance and its functional and physical attributes with its requirements, design, and operational information throughout its life.
A CMS can also be used for the following purposes:

Configuration Management Process

The Configuration Management Process usually involves the three following steps:

  1. Baselining
  2. Patch Management
  3. Vulnerability Management

Change Control / Change Management Process


Change control within information technology (IT) systems is a process—either formal or informal—used to ensure that changes to a product or system are introduced in a controlled and coordinated manner. It reduces the possibility that unnecessary changes will be introduced to a system without forethought, introducing faults into the system or undoing changes made by other users of software.
The goals of a change control procedure usually include:
  1. Have all changes reviewed by management
  2. Minimal disruption to services
  3. Communication about disruption to services
  4. Ensure that each change has a rollback plan
  5. Reduction in back-out activities
  6. Cost-effective utilization of resources involved in implementing change

These are the steps included in the Change Management Process.

  1. Request the change
  2. Review the change
  3. Approve/Reject the change
  4. Test the change
  5. Implement the change
  6. Document the change
The Request Control process provides an organized framework within which users can request modifications, managers can conduct cost/benefit analysis, and developers can prioritize tasks.

Backup

Differences between the following types of backup strategies.
Recapitulative table:
Backup strategy | Backup speed | Restoration speed | Space taken | Needed for recovery | Clears the archive bit
Full | Slow | Fast | Big | Last full backup | Yes
Differential | Medium | Medium | Big | Last full backup + last differential | No
Incremental | Fast | Slow | Small | Last full backup + all incrementals since the last full backup | Yes
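
A small sketch of the "Needed for recovery" column (the backup-history format is an assumption for illustration: an ordered list, oldest first, of 'full' / 'differential' / 'incremental'):

```python
# Return the indices of the backups needed to restore, per the table above.
def restore_chain(history: list[str]) -> list[int]:
    last_full = max(i for i, kind in enumerate(history) if kind == "full")
    tail = history[last_full + 1:]
    if tail and tail[-1] == "differential":
        # Differential strategy: last full + the most recent differential.
        return [last_full, len(history) - 1]
    # Incremental strategy: last full + every incremental since it.
    return [last_full] + [last_full + 1 + i
                          for i, kind in enumerate(tail) if kind == "incremental"]

print(restore_chain(["full", "incremental", "incremental"]))   # [0, 1, 2]
print(restore_chain(["full", "differential", "differential"])) # [0, 2]
```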

RAID is a set of configurations that employ the techniques of striping, mirroring, or parity to create large, reliable data stores from multiple general-purpose computer hard disk drives. Common levels: RAID 0 (striping), RAID 1 and its variants (mirroring), RAID 5 (distributed parity), and RAID 6 (dual parity).

Electrical power is a basic need to operate all of today's businesses. These are the kinds of problems you can encounter with a commercial power supply:
Noise can occur on a cable:
You can mitigate the risks by installing a UPS. A UPS has limited power and can feed the systems only for a short period of time. To be able to have power for days, a diesel generator is needed.

Open Source Intelligence is the gathering of information from any publicly available resource. This includes websites, social networks, discussion forums, file services, public databases, and other online sources. This also includes non-Internet sources, such as libraries and periodicals.

DRP - BCP

The DRP is focused on IT, and it's part of the BCP.
There are 5 methods to test a DRP:
  1. Read-through, where all the people involved read the plan. It helps find inconsistencies, errors, etc.
  2. Structured walk-through (also known as a table-top exercise), where all the people involved role-play their parts by reading the DRP and following a scenario.
  3. Simulation test, where the teams are asked to give a response to a virtual disaster. The response is then tested to check whether it's valid.
  4. Parallel test, where the DRP is tested for real. If there is a second site, it is activated, etc. The parallel test should never impact production.
  5. Full interruption test, where production is shut down to test the DRP. It's rarely done due to the heavy impact on production.
Types of DR site:

Business Continuity Planning

BCP is the process of ensuring the continuous operation of your business before, during, and after a disaster event. The focus of BCP is totally on business continuation and it ensures that all services that the business provides or critical functions that the business performs are still carried out in the wake of the disaster.
The BCP should be reviewed each year or when a significant change occurs.
The BCP has multiple steps:
  1. Project initiation is the phase where the scope of the project must be defined.
    • Develop a BCP policy statement.
    • The BCP project manager must be named; he'll be in charge of the business continuity planning and must test it periodically.
    • The BCP team and the CPPT should be constituted too.
    • It is also very important to have top-management approval and support.
    • Scope is the step where it is decided which assets and which kinds of emergency events are included in the BCP. Each department of the company must be involved in this step to ensure no critical assets are missed.
  2. The BIA differentiates critical (urgent) and non-essential (non-urgent) organization functions/activities. A function may be considered critical if dictated by law. It also aims to quantify the possible damage that can be done to the system components by a disaster.
    The primary goal of the BIA is to calculate the MTD for each IT asset.
    Other benefits of the BIA include improvements in business processes and procedures, as it will highlight inefficiencies in these areas.
    The main components of BIA are as follows:
    • Identify critical assets
      • At some point, a vital records program needs to be created. This document indicates where the business-critical records are located and the procedures to back them up and restore them.
    • Conduct risk assessment
    • Determine MTD
    • Failure and recovery metrics
  3. Identify preventive controls
  4. Recovery strategy
    • Create a high-level recovery strategy.
    • The systems and services identified in the BIA should be prioritized.
    • The recovery strategy must be agreed upon by executive management.
  5. Designing and development, IT contingency Plan
    • This is the step where the DRP is designed. A list of detailed procedures for restoring the IT must be produced at this stage.
  6. Implementation of DRP, training, and testing
  7. BCP/DRP maintenance

Or in short :

  1. Develop a BCP policy statement
  2. Conduct a BIA
  3. Identify preventive controls
  4. Develop recovery strategies
  5. Develop an IT contingency plan
  6. Perform DRP training and testing
  7. Perform BCP/DRP maintenance

Misc

Type 1 Hypervisors are VM hypervisors where the OS is installed directly on the bare-metal machine. They perform better.
Type 2 Hypervisors are applications installed in an OS, like Linux or Windows. They are called hosted hypervisors; they perform slower than type 1 hypervisors because the host OS has to translate each call.

Tripwire is a HIDS.

A NIPS is like an IDS, but it's installed inline in the network. It can modify network packets or block attacks.

IACIS is a non-profit, all-volunteer organization of digital forensic professionals. The CFCE credential was the first certification demonstrating competency in computer forensics in relation to Windows based computers.

CFTT is a project created by NIST to test and certify forensic equipment.

A Software Escrow Agreement allows the customer to have access to the source code of a piece of software if the vendor stops supporting the application or goes out of business.

8 - Software Development Security

Nonfunctional Requirements define system attributes such as security, reliability, performance, maintainability, scalability, and usability.

Life cycle of a project using a 5-phase SDLC:
  1. Initiation
    In this first phase, problems are identified and a plan is created.
  2. Acquisition and development
    Once developers reach an understanding of the end user’s requirements, the actual product must be developed.
  3. Implementation
    In this phase, physical design of the system takes place. The Implementation phase is broad, encompassing efforts by both designers and end users.
  4. Operations and maintenance
    Once a system is delivered and goes live, it requires continual monitoring and updating to ensure it remains relevant and useful.
  5. Disposition
    This phase represents the end of the cycle, when the system in question is no longer useful, needed or relevant.

This is a more detailed SDLC, containing 13 phases :

  1. Preliminary analysis: Begin with a preliminary analysis, propose alternative solutions, describe costs and benefits, and submit a preliminary plan with recommendations.
    1. Conduct the preliminary analysis: Discover the organization's objectives and the nature and scope of the problem under study. Even if a problem refers only to a small segment of the organization itself, find out what the objectives of the organization itself are. Then see how the problem being studied fits in with them.
    2. Propose alternative solutions: After digging into the organization's objectives and specific problems, several solutions may have been discovered. However, alternate proposals may still come from interviewing employees, clients, suppliers, and/or consultants. Insight may also be gained by researching what competitors are doing.
    3. Cost benefit analysis: Analyze and describe the costs and benefits of implementing the proposed changes. In the end, the ultimate decision on whether to leave the system as is, improve it, or develop a new system will be guided by this and the rest of the preliminary analysis data.
  2. Systems analysis, requirements definition: Define project goals into defined functions and operations of the intended application. This involves the process of gathering and interpreting facts, diagnosing problems, and recommending improvements to the system. Project goals will be further aided by analysis of end-user information needs and the removal of any inconsistencies and incompleteness in these requirements. Due care should be exercised in this phase.
    A series of steps followed by the developer include:
    1. Collection of facts: Obtain end user requirements through documentation, client interviews, observation, and questionnaires.
    2. Scrutiny of the existing system: Identify pros and cons of the current system in-place, so as to carry forward the pros and avoid the cons in the new system.
    3. Analysis of the proposed system: Find solutions to the shortcomings described in step two and prepare the specifications using any specific user proposals.
  3. Systems design: At this step desired features and operations are described in detail, including screen layouts, business rules, process diagrams, pseudocode, and other documentation.
  4. Development: The real code is written here.
  5. Documentation and common program control: The way data are handled in the system, how logs are generated, etc., are documented.
  6. Integration and testing: All the pieces are brought together into a special testing environment, then checked for errors, bugs, and interoperability.
  7. Acceptance: The system is tested by a third party. The testing includes functionality tests and security tests.
  8. Testing and evaluation controls: Create guidelines to determine how the system can be tested.
  9. Certification: The system is compared to functional security standards to ensure it complies with those standards.
  10. Accreditation: The system is approved for implementation. A certified system might not be accredited, and an accredited system might not be certified.
  11. Installation, deployment, implementation: This is the final stage of initial development, where the software is put into production and runs actual business.
  12. Maintenance: During the maintenance stage of the SDLC, the system is assessed/evaluated to ensure it does not become obsolete. This is also where changes are made to initial software.
  13. Disposal: In this phase, plans are developed for discontinuing the use of system information, hardware, and software and making the transition to a new system. The purpose here is to properly move, archive, discard, or destroy information, hardware, and software that is being replaced, in a manner that prevents any possibility of unauthorized disclosure of sensitive data. The disposal activities ensure proper migration to a new system. Particular emphasis is given to proper preservation and archiving of data processed by the previous system. All of this should be done in accordance with the organization's security requirements.

Not every project will require that the phases be sequentially executed. However, the phases are interdependent. Depending upon the size and complexity of the project, phases may be combined or may overlap.

Programming languages have been classified by generation.
  1. First-generation language
    It's made of ones and zeros.
  2. Second Generation language
    It's assembly. The language is specific to a particular processor family and environment.
  3. Third Generation language
    These languages include features like improved support for aggregate data types, and expressing concepts in a way that favors the programmer, not the computer. A third-generation language improves over a second-generation language by having the computer take care of non-essential details. It also uses a compiler to translate the human-readable code into machine code. Sometimes a runtime VM is used, as for C# and Java.
    Fortran, ALGOL, COBOL, C, C++, C#, Java, BASIC and Pascal are 3rd-generation languages.
  4. Fourth Generation language
    This generation is for languages that are designed for a specific set of problems or tasks. MATLAB is made to work in the mathematical field. The different flavors of SQL are made to interact with databases. XQuery is made for XML.
  5. Fifth Generation language
    While fourth-generation programming languages are designed to build specific programs, fifth-generation languages are designed to make the computer solve a given problem without the programmer. Fifth-generation languages are used mainly in artificial intelligence research.
    OPS5 and Mercury are examples of fifth-generation languages.

In software engineering, coupling is the degree of interdependence between software modules: a measure of how closely connected two routines or modules are, i.e., the strength of the relationships between modules (a module or object depends heavily on another module/object; low coupling means changing something in a class will not affect other classes). Coupling is usually contrasted with cohesion (high cohesion means an object/module implements only related functions; low cohesion means it implements a lot of unrelated ones). Low coupling often correlates with high cohesion, and vice versa.
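
A small sketch of that trade-off (class names are hypothetical):

```python
# High cohesion: TaxCalculator only implements related tax functions.
class TaxCalculator:
    def __init__(self, rate: float):
        self.rate = rate

    def tax(self, amount: float) -> float:
        return amount * self.rate

# Low coupling: Invoice depends only on the calculator's small interface,
# not on its internals, so swapping the calculator does not break Invoice.
class Invoice:
    def __init__(self, total: float, calculator: TaxCalculator):
        self.total = total
        self.calculator = calculator

    def total_with_tax(self) -> float:
        return self.total + self.calculator.tax(self.total)

print(Invoice(100.0, TaxCalculator(0.2)).total_with_tax())  # 120.0
```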

Consistency in database systems refers to the requirement that any given database transaction must change affected data only in allowed ways. Any data written to the database must be valid according to all defined rules, including constraints, cascades, triggers, and any combination thereof.
Cardinality refers to the uniqueness of data values contained in a particular column (attribute) of a database table. The lower the cardinality, the more duplicated elements in a column. For example, an ID should be unique, so ID has a high cardinality. A Gender column that can only accept Male or Female has a low cardinality.
Durability indicates that once a transaction is committed, it is permanent; it will survive any crash or power-off of the DB's host. The transaction is written to the disk and to the transaction log. For example, in a garage's DB, if the system indicates to the buyer that he successfully bought a car, the car will remain bought by the new owner even if the DB encounters a power outage.
A Data Dictionary is a data structure that stores metadata, i.e., (structured) data about information. If a data dictionary system is used only by the designers, users, and administrators and not by the DBMS software, it is called a passive data dictionary. Otherwise, it is called an active data dictionary.
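
A quick illustration of the cardinality notion above, on a made-up table:

```python
# Count distinct values per column: high cardinality for id, low for gender.
rows = [
    {"id": 1, "gender": "Male"},
    {"id": 2, "gender": "Female"},
    {"id": 3, "gender": "Female"},
]
for column in ("id", "gender"):
    distinct = {row[column] for row in rows}
    print(column, "cardinality:", len(distinct))  # id: 3, gender: 2
```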

Test Coverage is a measure used to describe the degree to which the source code of a program is executed when a particular test suite runs. A program with high test coverage, measured as a percentage, has had more of its source code executed during testing, which suggests it has a lower chance of containing undetected software bugs compared to a program with low test coverage. To calculate the test coverage, the formula is: number of use cases tested / total number of use cases. For example, for a program with 100 use cases of which 80 are tested: 80 / 100 = 0.8; multiply by 100 to obtain a percentage, 80% in our case.
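
The same calculation as a tiny helper:

```python
# Test coverage as defined above: tested use cases / total use cases, in %.
def test_coverage(tested: int, total: int) -> float:
    return tested / total * 100

print(test_coverage(80, 100))  # 80.0
```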

Negative Testing is a method of testing an application or system that ensures that the application behaves according to the requirements and can handle unwanted input and user behavior. Invalid data is inserted to compare the output against the given input. Negative testing is also known as failure testing or error path testing.
Boundary tests are done during negative testing; they consist of sending, for example, 101 to an input that requires a number between 0 and 100. When performing negative testing, exceptions are expected. This shows that the application is able to handle improper user behavior. Users input values that do not work in the system to test its ability to handle incorrect values or system failure.

CRUD testing: Create, Read, Update, and Delete (CRUD) are the four basic functions of persistent storage. CRUD testing is used to validate that the CRUD operations are functioning.
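
A minimal CRUD test sketch against a toy in-memory store (all names are hypothetical):

```python
# Toy persistent store backed by a dict, plus one test per CRUD function.
store: dict[int, str] = {}

def create(key: int, value: str) -> None:
    store[key] = value

def read(key: int) -> str:
    return store[key]

def update(key: int, value: str) -> None:
    store[key] = value

def delete(key: int) -> None:
    del store[key]

create(1, "alice")
assert read(1) == "alice"
update(1, "bob")
assert read(1) == "bob"
delete(1)
assert 1 not in store
print("CRUD test passed")
```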

Heap Metadata Protection is a memory protection that forces a process to fail if a pointer is freed incorrectly.

Pointer Encoding is a buffer overflow protection recommended by Microsoft in its Security Development Lifecycle for Independent Software Vendors, but it's not required.

Buffer overflow and pointer protections for Independent Software Vendors, per Microsoft SDL recommendations:

Name | Requirement | Priority
Pointer Encoding | No | Moderate
ASLR | Yes | Critical
Heap Metadata Protection | Yes | Moderate
DEP | Yes | Critical

Hardware Segmentation is a memory protection that maps processes to different hardware memory locations.

Defect Density is a development metric that determines the average number of defects per line of code.

Risk Density is a secure development metric that ranks security issues in order to quantify risk.

Inference is the ability to deduce sensitive information from available non-sensitive information, for example deducing a patient's illness based on that patient's prescriptions.

Aggregation is combining benign data to reveal potentially sensitive information.

Software Development Methodologies

Processor Mode

Processors have different modes of execution.

Misc

Data Warehousing is the process of collecting large volumes of data on high-performance storage.
Data Mining is the process of searching large volumes of data for patterns.

Index

Write a JS function that will point to the place where the string matches

Resources

Every force you create has an echo. Your own bad energy will be your undoing.