Lecture 1 - 6/9

       What We Protect (INFORMATION):

o    Knowledge concerning any objects (facts, events, things, processes or ideas) which have a special meaning in certain contexts.

o    It assumes that there is a fact which is known (an object) and a person who knows the fact (the subject)

o    Information itself does not have a practical shape. The practical shape appears when we also consider the practical representation of the information (which is then called data)

       What We Protect (DATA):

o    Datum is singular, data is plural.

o     Re-interpretable formalized representation of an information in such a form which is suitable for transfer, processing and/or interpretation.

o     Data are always the presentation of information, usually in a pre-agreed form (which allows to transfer the information beared by the data from one subject to another). o The same data can be interpreted differently by the different subjects having a different background.

       Document/Data Set and Format:

o    Any information in computers (IT equipment) is always presented in digital form in certain pre-agreed formats as datasets (files) that carry information.

o    Digital data set:

  Presentation of information as a bit sequence, i.e. a sequence of the symbols 0 and 1 (these sets are often referred to as files).

o    Format (vorming):

  Rule for interpreting data as information, especially as an actual type of information (text, image, sound, video etc).

  The pre-agreed format gives the data set/document its meaning

  In the context of computers and digital data, the format is actually a rule for how information is presented in digital form

  Different data formats are usually supported by different application software (rakendustarkvara), which allows writing a file in a certain format or making the content of the data (the information) human-perceivable.

  There's a lot built into data formatting that's not perceivable to the end user.

  A typical end user usually receives only a human-perceivable form, prepared by the software, the so-called WYSIWYG (What You See Is What You Get / adekvaatkuva).

       Security of Data vs IT Assets/Systems:

o Security of data (security of the information carried by the data) is ensured by securing the (IT) assets surrounding the data.

o Layering: External Environment ( IT Assets ( DATA ) IT Assets ), i.e. the data sit inside the IT assets, which sit inside the external environment.

o IT Assets/Systems Include:

  IT equipment

  Data communication channels

  Software

  Must include:

  Organization (its structure and operation)

  Personnel

  Data carriers (incl. documents)

  Infrastructure (buildings, offices, etc.)

o Main Properties of Digital (IT) Assets:

  Hard to Measure Value:

  Data/information often has great but indirect value, and it is very hard to measure how valuable it actually is.

  Portability:

  Data stored on very small and easily movable carriers can still possess huge value for our business process.

  Possibility of Avoiding Physical Contact:

  The physical and virtual structures are usually very different. Both must be secured.

  Disclosure of Security Losses:

  Automatic or quick detection and disclosure of security losses, especially integrity and confidentiality losses.

       Standard Model of Security Harming:

o Threats (ohud):

  Threats exploit the vulnerabilities (nõrkused, turvaaugud) of IT assets or components of the IT system in order to influence the data.

  Threats, together with the vulnerabilities they act on, determine the risk or security risk (risk, turvarisk) and the probability that the risk will occur.

  When a certain risk is realized, a security loss, security breach or security incident (turvakadu, turvarike, turvaintsident) appears.

  In order to minimize the risks, it is necessary to minimise the vulnerabilities of our IT assets using safeguards or security measures (turvameetmeid).

  The same principles apply to all three branches of the CIA model (availability, integrity and confidentiality).

  IT Assets and systems must be modified to address vulnerabilities and potential threats if the risk is sufficiently high.

o Harming of Security (flow): Threat → Vulnerability/Security Hole → (via the vulnerability) influence on Assets → Damage

       Security Concepts:

o Threat:

  (oht) A potential, externally caused harm to information security.

o Vulnerability:

  (nõrkus, turvaauk) A property of an IT asset (component), seen from the point of view of external threats, that can be exploited.

o Risk:

  (risk) The probability that an actual threat can exploit a certain vulnerability and will be realized.

o Security Loss:

  (turvakadu) An event in which the security (availability, integrity and/or confidentiality) of some IT asset(s) is harmed.

  Security loss is a realized risk.

  Examples of Security Loss:

  Failure of equipment – integrity loss of IT asset

  Theft of equipment – availability loss of IT asset

  Unauthorised modifying of register – integrity loss of data

  Destroying of office rooms by fire – availability loss of infrastructure

  Wiretapping of non-encrypted data cables – confidentiality loss of data

o Safeguard or Security Measure:

(turvameede) A modification of IT asset(s) which minimises the risk(s) (the rate of vulnerabilities of the asset(s))

       Organizational Security:

o    Principle: in order to protect data used in a business process (information assets), data security must be handled by the organization or organizations involved in the entire business process.

o    Therefore, data security and general security are often no longer distinguished and are viewed together as a whole.

o    Securing of data versus securing of processes:

  Nowadays, as a rule, security is strongly integrated into business processes (main processes) since IT (together with data) is usually only a means for the operation of a business process.

  Therefore - security requirements originate from the needs of business processes.

  Additionally, we can use availability, integrity and confidentiality also for describing business processes (besides data).

  Availability - the business process actually works (at the planned efficiency)

  Integrity - the business process works in the proper way (details and properties of the process are correct)

  Confidentiality - the details of the business process and the data used inside the process must be accessible only to certain users (groups of users)

o Security and (planned) Residual Risk:

  No matter how many safeguards we implement, we NEVER achieve absolute security.

  If we implement more safeguards we only minimize the probability that security (availability, integrity or confidentiality) will be harmed, but it never falls to zero.

  Instead of absolute security we usually use the concept of a residual risk acceptable to the business process (äriprotsessi jaoks aktsepteeritav jääkrisk).

  An acceptable residual risk is a situation where the total price of all implemented safeguards is approximately equal to the forecasted total loss of security (measured in money); a numeric sketch of this optimum follows the list of obstacles below.

o Obstacles of Evaluating the Optimal Security Point:

  Both graphs (the cost of safeguards and the expected damage) are hard to predict or estimate.

  We do not know the exact expenses of all the safeguards (they change over time).

  We can estimate the graph of damages even less: we do not have actual data on threat frequencies and their impact for all IT assets

  Even if we have all this estimation data, the exact calculation (quantitative risk analysis) is a very time-consuming process.

  There are hundreds of different IT assets, thousands of threats and thousands of vulnerabilities (and all of them must be taken into account together)
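  A minimal numeric sketch of this optimum point (all cost and damage figures below are invented for illustration; this is not a real risk model):

```python
# Hypothetical illustration of the "acceptable residual risk" optimum:
# total cost = cumulative cost of safeguards + forecasted (residual) security loss.
safeguard_levels = [0, 1, 2, 3, 4, 5]          # number of safeguard "packages" implemented
safeguard_cost   = [0, 10, 25, 45, 70, 100]    # cumulative yearly cost of safeguards (kEUR)
expected_damage  = [120, 60, 28, 18, 14, 12]   # forecasted yearly security loss (kEUR)

total_cost = [c + d for c, d in zip(safeguard_cost, expected_damage)]
best = min(range(len(safeguard_levels)), key=lambda i: total_cost[i])

print("level  safeguard_cost  expected_damage  total")
for i in safeguard_levels:
    print(f"{i:5}  {safeguard_cost[i]:14}  {expected_damage[i]:15}  {total_cost[i]:5}")
print(f"Optimum at level {best}: safeguard cost {safeguard_cost[best]} kEUR "
      f"is roughly comparable to the residual damage {expected_damage[best]} kEUR")
```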

o Necessity for Risk Management Techniques:

  In order to simplify the practical security task it is usually necessary:

  To Standardize Different Security Levels:

  I.e. different availability, integrity and confidentiality levels

  To create a system which is able to determine standardised actions (safeguards) for different security levels:

  Such that the result ensures that we approximately achieve the optimum point (i.e. the acceptable residual risk situation).

       Essence of Information Security:

o    If we possess (or process) the data then the information carried by the data always has a certain value for us (for our business process).

o    This does not depend on whether the information is represented by digital or by paper-based data

o    Information security is a discipline concerning the maintenance of these values/properties of information (performed in practice by maintaining the properties of data)

o    Components of Information Security (CIA Model):

  These three properties (branches) of security must be maintained for all information/data items we possess.

  In the pre-computer world (paper-based information) we considered only confidentiality, not the other branches

  (3) Information Confidentiality (konfidentsiaalsus):

  Confidentiality is the third most important branch of data security.

  Data confidentiality (andmete konfidentsiaalsus ehk salastatus ehk salastus) is the availability of the information carried by the data only to authorized subjects (and its strict non-availability to other subjects).

  Examples:

  A state or corporate secret being disclosed.

  Operational intelligence information being disclosed.

  Personal data being spread without the permission of the data subject.

  (2) Information Integrity (terviklus):

  Integrity is the second most important branch of data security

  Data integrity (andmete terviklus) is the assurance that the data (the information stored in the data) originate from a certain source and have not been altered (whether by accident, by a deliberate act or by forgery).

  In the business process we usually assume that the data we use (the information carried by the data) are firmly related to the creator/source of the data, the creation time, etc.

  Violation or absence of these relationships usually causes serious negative consequences

  (1) Information Availability (käideldavus):

  Availability is the most important component of data security

  The worst thing that can happen is that the data are no longer available to the subjects who need them during the business process (perhaps destroyed forever)

  Examples:

  Border guard does not have the list of fugitives, or the list isn't up-to-date

  The National Board of Land does not know who possesses a concrete plot of land

o Four Different Concepts:

  Information Security (infoturve)

  Information Protection (infokaitse)

  Data Security (andmeturve)

  Data Protection (andmekaitse)

  If we protect only digital data, then it can additionally also be called cyber security (küberturve).

  Just because we're dealing with cyber security doesn't mean we can disregard physical and paper data.

  This is largely a question of culture and tradition, which term we use.

  For example, in Europe, data protection often means the protection of personal data (isikuandmete kaitse).

Lecture 2 (Threats) - 13/9

       Security Threats:

o    A threat (oht) is an external potential violation of (information) security.

o    A threat is always considered an external influence.

o    Can be:

  A potential violation of availability

  A potential violation of integrity

  A potential violation of confidentiality

o Classification of Threats:

  By the Harmable Goal (availability, integrity, confidentiality)

  By the Source (by which the subject the potential harm is caused)

  By the Type of IT Asset (being harmed)

  By the Importance of the (Potential) Damage (how big it will be)

o Threat Classification by the Source:

  Spontaneous or accidental threats (stiihilised ohud):

  Environmental threats (keskkonnaohud)

  Technical failures and defects (tehnilised ohud ja defektid)

  Human threats and failures (inimohud)

  Force majeure (vääramatu (looduslik) jõud), which can be either occasional (lightning, flooding) or regular (wear, material fatigue, contamination, etc.).

  Classical Environmental Threats:

  Lightning

  Fire

  Flooding

  Inappropriate temperature and humidity

  Dust and contamination

  Electromagnetic perturbations

  Mis- or non-operability of external infrastructures

  Classical Technical Failures and Defects:

  Accident in IT infrastructure

  Hardware defects

  Failures and disturbances of connection lines (network)

  Defects of data carriers

  Defects and failures of security means (devices)

  Human failures (inimvead), which can be caused by inadequate skills, negligence, mis-management, environmental factors, etc.:

  Loss of Staff (inimkaod):

  Illness

  Death

  Strike

  Occasional Events (juhuslikud äpardused):

  Mistakes during work operations

  Erasing and/or destroying of data/device by an accident

  False line connections

  Deliberate acts or attacks (ründed, mitte rünnak):

  Characterized by clearly intentional human activity (selge tahtlik (inim)tegevus).

       Attacks:

o    Attacks or deliberate acts (ründed) are always based on humans who take a certain intended or deliberate action (sihilik tegevus) to harm the security goals (led by personal interest, private or state intelligence, hooliganism, etc.).

o    Usually classified by the:

  Attack Sources:

  Authorized users of IT systems (the most important source):

  Motives:

  Providing illegal (financial) profit

  Revenge of fired people

  Political/Ideological

  Intelligence:

  Economical, state-based, military, etc. agents

  Crackers:

  Often mis-called hackers (kräkkerid, häkkerid); an increasing factor

  Criminals

  Channels of Attack:

  Direct contact with an attackable object (IT component/device, personnel, infrastructure, etc.)

  Networks, the most common channel of attack.

  Portable data carriers, which were historically important and in recent years are becoming increasingly relevant again.

  Attacking Methods:

  Physical Attacks:

  Mainly harm availability and integrity

  Classical branches:

  Physical attacks on infrastructure

  Vandalism

  Unauthorized entry to houses/rooms/territory

  Theft

  Manipulation or destruction of IT equipment or devices

  Misuse of Resources:

  Misuse of resources (ressursside väärkasutus) may harm all goals of security - availability, integrity and confidentiality.

  The resource-misuse threat is extremely great during conversion, maintenance, repair and/or upgrade tasks performed by external parties.

  Blocking of Resources:

  Blocking of resources (ressursside blokeerimine) harms mainly the availability

  In most cases it means the blocking (denial) of services (teenusetõkestusrünne), for example:

  Overloading of the network

  Mass-execution of tasks

  Filling of all possible data space

  Interception (eavesdropping):

  Interception (infopüük), often also called eavesdropping, is an attack on confidentiality by an unauthorized subject

  Voice interception:

  Hidden microphones, taking advantage of a smartphone or computer microphone

  Interception of telephone calls:

  Interception of wires where communication is unencrypted.

  Unauthorized reading or copying of stored data

  Accessing residual information

  Eavesdropping of wires:

  Analyzing the eavesdropped information with special equipment/software

  Inappropriate deletion of data or destruction of data carriers, with subsequent unauthorized reading

  Fabrication:

  Fabrication (võltsing), sometimes also called faking, is the entering of faked items into the system. Harms mainly integrity

  Playback of earlier recorded messages

  Masquerade attack:

  (teesklusrünne) - equipping messages with false requisites (name, user name, password, money amount, etc.)

  Social Engineering (suhtlemisosavus)

  Denial (salgamine)

  System manipulation:

  Manipulation (manipuleerimine) is the unauthorized changing of the IT system. Harms mainly integrity, but also the other goals

  Manipulation of data or software (false data, unauthorized changing of access rights or functionality)

  Manipulation of lines

  Manipulation of data during transfer (via vulnerabilities)

  Attack via service ports (when they're insufficiently secured)

  Attacks to security mechanism:

  Can harm all three goals of security. The level of harm depends on the concrete mechanism and/or architecture

  Main attacking objects are often authentication systems and cryptosystems, for example:

  Systematic guessing of passwords

  Theft of passwords via a keylogger

  Interception of a PIN-code

  Practical cryptanalysis of a cryptoalgorithm or protocol

  Attacking software or malware:

  Legal products with documented features

  Malware (pahavara, kurivara):

  Classic types of malware:

  Logical bomb (loogikapomm)

  Trojan (trooja hobune)

  Worm (uss)

  Virus (viirus)

  Dropper (pipett):

  A program which installs a virus or a Trojan

  Special programs for attacking different security mechanisms

  Attackable Objects

Lecture 3 (Vulnerabilities, Properties of Digital Data) - 20/9

       We cannot change the threats (external factors), only the IT assets (internal factors).

       Vulnerabilities:

o Four Main Classes:

  Infrastructure Vulnerabilities:

  Unfavorable (physical) location of a protectable object

  Primitive or outdated infrastructure

  Does not allow implementing several safeguards (mainly physical and IT-related safeguards)

  IT Vulnerabilities:

  Limited resources

  Incorrect installation of equipment or connection lines

  Errors, defects, or undocumented features of software or hardware

  Shortcomings of protocols (including communication protocols)

  Shortcomings of data management

  Inconvenience of safeguards

  Safeguards must not heavily impair normal work (normal availability)

  Personnel-related Vulnerabilities:

  Addressed with personnel protocols and training

  Incorrect procedures:

  Often due to ignorance or convenience, often systematic

  Ignorance and lack of motivation:

  As a rule, this extends to all employees of the organization

  Ignoring the safeguards:

  Both intentionally and negligently

  Organizational Vulnerabilities:

  Deficiencies of work organization:

  Rules, adapting to new circumstances, etc.

  Shortcomings of resource management:

  Computers, maintenance, testing, storage media, etc.

  Incomplete documentation:

  IT equipment, imperfections of safeguard selection, communication lines, etc.

  Incomplete Implementation:

  Safeguards are implemented incorrectly, in the wrong place, etc.

  Shortcomings of safeguards management: Monitoring, audit, etc.

       Paper-Based Data Security:

o    Most paper documents don't feature any type of cryptography, unlike digital data.

o    Availability is ensured by an appropriate preservation of data (conditions!) and by using suitable handling procedures (from people to people, record management rules)

o    Integrity is ensured by the physical shape of a document - data must be transferred to the paper sheet by the permanent method, document is equipped with handwritten signature of the creator

o    Confidentiality is ensured by the storing and transporting of document in a secure way

       The Role of Cryptography:

o    Encryption or enciphering (krüpteerimine, šifreerimine) is a technique where data are converted to a certain non-readable form. The conversion process usually uses a special piece of data which is usually kept secret - a key (võti)

o    Basic Technique can be Used for Two Purposes:

  For Ensuring Confidentiality:

  Without the key, it is practically impossible to get the information carried by the data, as long as it was properly encrypted.

  For Ensuring Integrity:

  Without the special private key it is impossible to change the data without it being noticed.

       Availability of Digital Data:

o    Regular backups

o    Appropriately working IT systems

o    An appropriate digital record management system

o    Transmitting of data via data networks (the Internet)

       Integrity of Digital Data:

o To tie the data carrier and the data stored on it permanently together (mostly theoretical):

  It excludes all network-based applications (and a good e-world) and is used very seldom

o To use a client-server technique and an IT system able to log who has created/changed different data:

  Widely used, but its security is easily harmed

o To use an e-signature or digital signature (digisignatuur, digiallkiri) in order to associate the digital data with their creator cryptographically (mathematically):

  It is the most secure way and the only way to use in enhanced-security (enhanced-integrity) systems

       Confidentiality of Digital Data:

o Two different approaches (used mixedly in practice):

  To store/transport the (unencrypted) data securely

  To encrypt the data and to handle the encrypted data as usual (public) data. Encrypting always adds an additional problem - a key management (võtmehaldus) problem

o If confidential information is transferred via a common network (a network whose wires are not physically secured), then encryption should always be mandatory

Lecture 4 (Safeguards) - 27/9

       Classifications of Safeguards:

o    By the purpose (prevents a threat, scares off an attack, repairs a defect, etc.)

o    By the influenced security component/goal (availability, integrity, confidentiality)

o    By the means of implementation or realisation (procedure, technical equipment, program, building construction, etc.)

o    By the strength of security

       Purpose of Safeguards:

o By their purpose the safeguards can be divided into:

  preventive safeguards (profülaktilised meetmed)

  identifying safeguards (tuvastusmeetmed)

  reconstructive safeguards (taastemeetmed)

o Several safeguards are polyfunctional (for example, an error-correcting code)

       Preventive Safeguards:

o Preventive safeguards (profülaktilised turvameetmed) make it possible to prevent security incidents:

  to minimize vulnerabilities

  to prevent attacks

  to minimize security risk probabilities

  to decrease the influence of security incidents on IT assets

  to facilitate site (object) restoration

o Preventive safeguards can be divided into three sub-categories:

  reinforcing safeguards (tugevdusmeetmed):

  Reinforcing safeguards (tugevdusmeetmed) mainly help to minimize the security risk caused by spontaneous threats

  Consist of four main parts:

  order or systematicness (kord, süsteemsus)

  working conditions (töötingimused)

  preventive check (ennetav kontroll)

  security awareness (turvateadlikkus)

  Working Conditions:

  Micro-climate (temperature, humidity, air cleanliness)

  Ergonomic design and layout of workplace

  Corporate social climate, positive human relations

  Corporate promotion and career principles

  Preventive Checks:

  verification and testing of IT products and security mechanisms

  regular monitoring of IT security-related information

  test-attacks on safeguards and security mechanisms

  auditing of IT systems (by standard methodologies)

  Security Awareness:

  suitable choosing of employees

  regular training of employees

  regular (and irregular) awareness events

  test alarms

  Examples:

  internal rules

  accurate job descriptions

  standards

  regular maintenance of infrastructure and facilities

  established procurement procedures

  documentation of equipment

  labeling of data carriers and cables

  version management

  resource management

  security policies, security plans, security guidelines etc

  scaring safeguards (peletusmeetmed):

  Scaring safeguards (peletusmeetmed) minimize the probability of attack attempts. The scaring influence is often a useful additional feature of a safeguard - the mere knowledge about safeguards often reduces the risk (especially in cases where the expected yield for an attacker cannot compensate for the risk)

  Examples:

  Different sanctions (legal, etc.)

  Warning signs on documents, data carriers, gates, walls, doors, etc.

  Visible safeguards (guard, camera, illumination)

  separative safeguards (eraldusmeetmed):

  Separative safeguards (eraldusmeetmed) mainly fend off attacks. They usually defend all aspects of security (availability, integrity, confidentiality)

  Three subcategories:

  Spatial isolation:

  Using different computers/data carriers/lines/rooms for data of different security levels

  Temporal Isolation:

  Using a computer at different times for data of different security levels

  Using different software at different times on the same computer

  Confidential information is only stored when in use

  Logical Isolation:

  Logical isolation (loogiline isoleerimine) is the dividing of IT assets (for example, data) into small elements that can be treated separately and/or grouped together

  Realization:

  access control (password protection, card lock etc)

  service broker (e.g. firewall, database query processor)

  securing (encrypting, hiding, destroying, erasing, etc.)

       Identifying Safeguards:

o In minimizing security losses we pursue the following goals:

    avoiding the security incident

    operative identifying the incident

    registering the incident (and identifying it later)

    proving the incident later

o Identifying safeguards are divided into:

    Operative Identification (operatiivtuvastus):

    Operative identification (operatiivtuvastus) involves methods which are able to identify security incidents as soon as they occur, and to respond to them immediately

    Examples:

    guard, fire and security alarm, environmental monitoring etc

    warning messages caused by blocked (prohibited) operations, false authentication attempts, etc.

    debugging of software

  Post-identification (järeltuvastus):

    Post-identification (järeltuvastus) is based on registration of the events and on proving the security incident later by means of them

    Examples:

    logfiles of computers and lock systems

    several testing and diagnostic tools

    different methods of verification, auditing, testing etc

    Evidence-based Identification (tõendtuvastus):

    Evidence-based identification (tõendtuvastus) is based on various security elements (which are added to IT assets and data) enabling checks of integrity and/or confidentiality (a small parity/digest sketch follows this list of examples)

    Examples:

    parity bit, checksum, cryptographic message digest

    digital signature and timestamp

    steganographic watermark

    physical security elements (visible or barely noticeable threads, seals, labels, etc.)
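    A small illustration of such evidence-based elements using Python's standard library (the data string is arbitrary): a parity bit and a cryptographic message digest are computed for a piece of data, and any later modification changes them.

```python
import hashlib

data = b"Register entry: owner=Smith, plot=42"

# Parity bit: a very weak integrity element (detects only single-bit errors).
parity = bin(int.from_bytes(data, "big")).count("1") % 2

# Cryptographic message digest: fixed-length value; any change to the data
# yields a completely different digest with overwhelming probability.
digest = hashlib.sha256(data).hexdigest()

print("parity bit:", parity)
print("SHA-256   :", digest)

tampered = data.replace(b"42", b"43")
print("digest after tampering:", hashlib.sha256(tampered).hexdigest())
```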

       Reconstructive Safeguards:

o    After a security incident it is always necessary to restore the normal operability of the harmed object. The more important the object (IT asset) is for us, the faster and the more fully this must be done

o    Main branches of reconstructive safeguards (taastavad turvameetmed) are:

  Backuping (varundamine):

  Backuping (varundamine) is the main and most important premise for any restoring

  regular backup of data (once a day, week, etc.)

  parallel (backup) computer system

  RAID hard disk system

  renovation/restoration (ennistamine):

  Renovation (ennistamine) involves the removing of faults, errors and defects

  repairing of IT equipment

  repairing and modifying of software using version management methods

  repairing of infrastructures (cables, power supplies, etc.)

  removing of malware (viruses) with anti-virus software

  replacing (asendamine):

  Replacing (asendamine) must be prepared for the cases of non-repairable damages

  keeping some PCs and/or laptops in company’s stock

  rapid delivery agreements of IT equipment

  substitution plans for employees (for cases of illness, vacation, death, etc.)

  backup office rooms (or readiness for rooms)

       Classification of safeguards by realization:

o Three classical branches:

o Organisational safeguards:

  The most essential branch is organisational safeguards - without them any physical or IT-related safeguards have no real influence

  Official Definition:

  Organisational safeguards (organisatsioonilised turvameetmed) include security administration, security system design, management and security incident handling activities and operations

  Four main components:

  activities that a certain person must do

  activities which are prohibited for a certain person

  things that happen when someone does something forbidden

  things that happen when someone does not do the necessary things

o Physical safeguards:

  Physical safeguards (füüsilised turvameetmed) involve:

  1. Infrastructure of a protectable object (kaitstava objekti taristu):

  structural barriers

  communications

  heating and air conditioning

  security doors and windows, gates etc

                                                             

2. Mechanical components:

  locks

  signs

  packaging labels

  Usually physical safeguards also involve guards, reception/entrance employees, etc.

o IT-related safeguards:

  IT-related safeguards (infotehnilised turvameetmed) are mainly used for performing logical separation and for identification of security incidents

  Two main branches of practical tools:

  Software-based access control to data and IT systems (incl. authentication techniques)

  Cryptographic means – for achieving both confidentiality and integrity

Lecture 5 (Risk Management) - 4/10

       Main goal of risk management: to implement exactly such a set of safeguards that leads the security risk (the significance of the threats and their realization probability through vulnerabilities) to the level of the accepted residual risk

       Necessity for Risk Management Techniques:

o    To standardise different security levels, i.e. different availability, integrity and confidentiality levels

o    To create a system which is able to determine standardised actions (safeguards) for different security levels, such that the result ensures that we approximately achieve the optimum point (the acceptable residual risk situation)

       Main Alternatives of Risk Management:

1.       Detailed Risk Analysis:

  An ideal case

  Detailed risk analysis is usually reasonable to implement only for a few critical information systems, when we have sufficient resources to perform it (less than 1% of real systems)

  Advantages:

  Realistic overview of the situation

  A calculated residual risk is very likely the actual residual risk

  Systematic methodology takes into account all possible vulnerabilities and threats

  Disadvantage:

  Detailed risk analysis is an extremely resource-consuming process (work, time, money, specialists)

  Two Main Ways to Perform Detailed Risk Analysis:

  Quantitative Risk Analysis:

  Based on the calculation of quantitative values (often measured in money; other units are often reduced to money)

  Advantage:

  If we have the actual data for the vulnerabilities and the threat frequencies, it is always possible to calculate the actual residual risks correctly in terms of money and probability of realization

  Disadvantage:

  Very large amount of work

  The actual information about threats and vulnerabilities is usually missing or incorrect

  Typically involves five classical phases (a toy numeric illustration of such a calculation is given after the "Five Stages for Detailed Risk Analysis" list below):

1. Detailed specification of all IT assets

2. Specification of all threats and their realization frequency

3. Evaluation of all vulnerabilities of all IT assets by the amount of money necessary for performing a successful attack

4. Calculation of the co-influences of all vulnerabilities and threats, in probability units, for all IT assets

5. Actual risk calculations, reduced to the security risk (availability, integrity and confidentiality risk) of all types of data and other essential assets

  Qualitative Risk Analysis:

  Based on the calculation of some pre-agreed relative values (scales)

  Instead of precise probabilities and money values we use several notional values and the coarse gradients and scales

  Two main properties:

  Even the known exact monetary values are transferred to such a relative gradient/scale form

  Hardly measurable values are usually replaced by empirical and subjective (expert-based) evaluations of them

  Evaluating the Threat's Influence:

  Enticement of an asset

  Ease of transforming the asset into benefit (money)

  Technical possibilities of a typical attacker

  Rate of utilization of (different) vulnerabilities

  Frequency of actual threat realization

  Five Stages for Detailed Risk Analysis:

  Evaluating the residual risk using either the qualitative or the quantitative risk analysis methodology

  Finding areas where it’s necessary to reduce the residual risk

  Implementing appropriate safeguards in these areas

  Finding the new residual risks and comparing them to the accepted residual risk

  Repeating the above-mentioned procedure until we fit within the accepted residual risk limits
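  As referenced above, a toy quantitative risk calculation under invented figures (assets, frequencies and impacts below are hypothetical): the expected yearly loss of an asset/threat pair is its realization frequency multiplied by its impact, and the total is compared with the accepted residual risk.

```python
# Toy quantitative risk calculation (all numbers are invented for illustration).
# expected yearly loss = yearly realization frequency * impact of one incident
risks = [
    # (asset, threat, yearly frequency, impact in kEUR)
    ("server room", "fire",              0.02, 200.0),
    ("database",    "unauthorized edit", 0.50,  20.0),
    ("laptops",     "theft",             1.00,   3.0),
]

accepted_residual_risk = 10.0   # kEUR per year, pre-agreed by the business process

total = 0.0
for asset, threat, freq, impact in risks:
    expected_loss = freq * impact
    total += expected_loss
    print(f"{asset:12} / {threat:18}: expected loss {expected_loss:6.2f} kEUR/year")

verdict = "above" if total > accepted_residual_risk else "within"
print(f"Total expected loss: {total:.2f} kEUR/year ({verdict} the accepted residual risk)")
```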

2.                 Baseline Approach:

  A convenient way in a lot of practical cases

  In the case of baseline methodology (etalonturbe metoodika) we have a given (fixed) set of mandatory safeguards for a certain (early determined) security level

  The baseline approach is the main and commonly used alternative to detailed risk analysis in cases of limited resources (used for 99% of practical systems)

  Advantages:

  Less time and resources required than detailed risk analysis

  Same set of baseline safeguards is applicable to the different information systems

  Disadvantages:

  For a high reference security level we are forced to implement more safeguards than necessary

  For a low reference security level it yields too high (unacceptable for us) residual risks

  Information system components with a unique architecture may cause enormous security risks that we cannot take into account

  Elements of the Baseline Approach:

  All typical components of a typical information system (buildings, offices, servers, hardware, software, communications, users, organization, access control, etc.) were taken into account as a hypothetical system

  A certain level of security was pre-defined

  The detailed risk analysis was implemented (once!) for the above-mentioned circumstances. The result is a certain set of safeguards

  It is assumed that, for any other information system, reaching the same security level requires implementing the same set of safeguards

3.                 Mixed Approach:

  Takes the advantages of both detailed risk analysis and the baseline approach, combining them in order to find a reasonable compromise

  Advantages:

  Less resource-consuming than detailed risk analysis

  In comparison with baseline approach it enables to take into account the specific security goals (levels) determined by the protectable IT assets (protectable data)

  Disadvantages:

  In comparison with detailed risk analysis it gives a less realistic result

  In comparison with baseline approach it is a little more expensive

  Two Main Branches:

  Sets of baseline safeguards are prepared not only for a certain (single) security level but for different security levels (for different pre-defined availability, integrity and confidentiality levels)

  For mission-critical and/or unique-architecture components the detailed risk analysis is implemented (for the other components we use the widely spread baseline approach)

4.                 Informal Approach:

  A real practical alternative to systematic (formal) approaches

  The informal approach (mitteformaalne riskihaldusmetoodika) is based on risk assessment by non-abstract methods, using the existing experience of specialists (own employees, external consultants)

  Advantages:

  No need to learn new skills or techniques (we just need good experts)

  Risk management process can be implemented with a smaller amount of resources (cheaper) than in the case of detailed risk analysis

  Disadvantages:

  If we disregard systematic coverage, there is always a great risk of leaving some serious vulnerabilities or risks unnoticed

  The experience of internal/external experts is always subjective and often absent (unsatisfactory) for some areas

  The expenses of the safeguards cannot be sufficiently justified to the company's management

  Major problems arise when one of the experts terminates his/her duties

  It is a useful method when four following conditions are satisfied:

  The risk analysis must be performed very fast

  We do not have any suitable abstract risk assessment approaches, or we cannot use them for some reason

  Existing risk management methods are too resource-consuming for us

  We have suitably experienced (IT) professionals

Lecture 6 (Cryptography)

           Two Stages of Cryptography:

o Two completely different stages (eras) of cryptography can be distinguished:

1.     Pre-computer cryptography or traditional cryptography (arvutieelne ehk traditsiooniline krüptograafia). Used paper and pencil or simple mechanical devices (until the 1940s). Was a tool only for the military, diplomacy and intelligence areas (until the 1970s-80s). Used empirical techniques (until 1949)

2.     Contemporary cryptology or computer-age cryptography, usually called simply cryptography ((kaasaja) krüptograafia). Uses computers as encrypting/breaking tools (since the 1940s). Is an essential tool for every e-system (since the 1980s). Uses science-based algorithms (since 1949)

       Traditional Cryptography:

o Traditional or pre-computer cryptography (traditsiooniline ehk arvutieelne krüptograafia) was a discipline whose aim was hiding information (hiding the meaning of data) from foreign or alien people by way of "strange writing"

       Main Methods of Pre-Computer Cryptography:

o    Substitution (substitutsioon) – replacing of original characters (letters) by another characters (letters)

o    Transposition or permutation (transpositsioon, permutatsioon) – changing the order of characters (letters)

o    The simplest pre-computer (ancient) ciphers were different variants of substitution or transposition ciphers.

o    More complex ancient ciphers were certain combinations of substitution and transposition

       ENIGMA Machine:

o Created by the Germans in the 1930s.

o In 1939-40 the British mathematician Alan Turing constructed a special electromechanical machine named BOMBE, whose only aim was the breaking of ENIGMA ciphers

       The Essence and Role of Contemporary Cryptology:

o The aim of contemporary cryptology is not only confidentiality. An additional aim, avoiding unauthorized changes (integrity), was added. Ensuring integrity should be considered the main function of contemporary cryptology (ca 80-85% of its total usage)

       Contemporary Cryptography — an Official Definition:

o (Contemporary) cryptology ((kaasaja) krüptograafia) is a discipline that embodies the principles, means, and methods for the transformation of data in order to

  hide their semantic content

  prevent their unauthorized use

  or

  prevent their undetected modification

       Basic Concepts of (Contemporary) Cryptology:

o    Encryptable (convertable from readable to unreadable form) text is called plaintext (avatekst)

o    Encrypted text (the text which is already converted to unreadable form) is called ciphertext (krüptogramm)

o    The converting process from plaintext to ciphertext (from readable to unreadable form) is called encryption or encipherment (krüpteerimine, šifreerimine)


o    The converting process from ciphertext back to plaintext (back to readable form) under normal circumstances is called decryption or deciphering (dešifreerimine)

o    In pre-computer (traditional) cryptoalgorithms the key was often indistinguishable from the algorithm itself

       Main Properties of Contemporary Cryptology:

o    Technical descriptions of all widespread cryptoalgorithms are usually public. All security usually rests on the secret key which is used in actual (practical) cases

o    This allows a wide range of independent experts to evaluate the algorithm's security (without having access to the real confidential data, which needs a key)

o    In practice, security is usually evaluated by cryptologists (krüptoloogid), who are usually deep mathematicians by education and specialization

       Three Main Practical Types of Cryptoalgorithms:

o Symmetric cryptoalgorithms or secret-key cryptoalgorithms:

  Are traditional (historical) cryptoalgorithms and are used for confidentiality purposes

o Asymmetric cryptoalgorithms or public-key cryptoalgorithms:

  Are used for both integrity and key exchange purposes

o Hashes or cryptographic message digests:

  Help asymmetric algorithms ensure integrity

       Secret-Key Cryptoalgorithm:

o    A secret-key cryptoalgorithm (salajase võtmega krüptoalgoritm) or symmetric cryptoalgorithm (sümmeetriline krüptoalgoritm) is a cryptoalgorithm where the same secret key is used both for enciphering and for deciphering purposes

o    The minimal keylength of a secure secret-key cryptoalgorithm should be 128 bits

o    The most widespread contemporary secret-key cryptoalgorithm is AES (three variants with keylengths of 128, 192 or 256 bits)

       Role of Key in Enciphering and Deciphering Process:

o    Encrypting or encipherment (krüpteerimine, šifreerimine) requires the use of a certain key, a pre-defined sequence of bits

o    The opposite process is decrypting or deciphering (dešifreerimine), which needs the same key in order to restore the initial data (plaintext) from the encrypted text (ciphertext)

       Public-Key Cryptoalgorithm:

o    A public-key cryptoalgorithm (avaliku võtmega krüptoalgoritm) or asymmetric cryptoalgorithm (asümmeetriline krüptoalgoritm) uses two keys - if we encrypt using one key, we can decrypt using the other key

o    These keys are mathematically related to each other, but in practice it is impossible to find one key from the other

       Secret-Key Cryptoalgorithm: Fields of Use:

Four main areas:

  Transmitting of confidential information using some (interceptable) networks

  Secure storing of confidential data (with an appropriate key management system)

  Secure erasing of confidential data (rewriting material)

  Creating of secure (independent, unpredictable) key material for cryptography usages

       Hash or Cryptographic Message Digest:

o A cryptographic message digest (krüptograafiline sõnumilühend) or cryptographic hash (krüptoräsi) is a digest with a fixed, small length which is calculated from a message by some deterministic mathematical one-way function (a small illustration follows)
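  A small illustration using Python's standard hashlib (the messages are arbitrary): the digest always has the same fixed length, and even a one-character change in the message produces a completely different digest.

```python
import hashlib

m1 = b"Pay 100 EUR to Alice"
m2 = b"Pay 900 EUR to Alice"   # one character changed

# SHA-256 always produces a 256-bit (32-byte) digest, whatever the input length.
print(hashlib.sha256(m1).hexdigest())
print(hashlib.sha256(m2).hexdigest())
print("digest length in bytes:", hashlib.sha256(m1).digest_size)
```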

       Theoretical and Practical Security:

o    As a rule, almost all practical crypto-algorithms have only practical security

o    Theoretically all of them are breakable within millions or billions of years

o    Theoretical security (teoreetiline turvalisus) is a situation where it is impossible to break the cryptoalgorithm even with the help of a huge amount of computational resources (time, processors, etc.)

o    Practical security (praktiline turvalisus) is a situation where it is impossible to break the cryptoalgorithm with a reasonable amount of resources (usually meaning mainframe hosts and less than some years)

       Essence of cryptanalysis:

o    Cryptanalysis (krüptoanalüüs) is the breaking of some of the mentioned five properties (demands) of an algorithm

o    The most trivial way of cryptanalysis is testing all key combinations. This technique is called exhaustive search (ammendav otsing) or the brute-force method

o    For an N-bit key there are 2^N different key variants. For a big N this is a huge number. Therefore, an exhaustive search becomes infeasible to perform beyond a certain value of N. The typical (lower) limit is 128 - it is infeasible to perform 2^128 or more operations in practice

o    The simplest way - exhaustive search - is usually not considered to be a cryptanalytic technique

o    A cryptoalgorithm is considered to be practically secure if we cannot perform an exhaustive search and there are no effective cryptanalytic techniques available for any of the above-mentioned five types of attacks

       Security achieving ways for practical systems and solutions:

o A basic rule: if we increase the keylength by one bit, the security of the algorithm (the amount of computational resources necessary for breaking it) doubles

o This allows us, by a linear growth of the expenses of a cryptoalgorithm (computing time, CPU cost, etc.), to achieve an exponential increase in security (an exponential growth of the resources necessary to break the actual algorithm); a tiny arithmetic sketch follows
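A tiny arithmetic sketch of this rule (the attacker's testing speed below is an arbitrary assumption): each extra key bit doubles the key space, i.e. the exhaustive-search effort.

```python
# Each additional key bit doubles the key space and thus the brute-force effort.
tests_per_second = 10**12               # assumed attacker speed (hypothetical)
seconds_per_year = 60 * 60 * 24 * 365

for bits in (56, 64, 128, 129, 256):
    keys = 2 ** bits
    years = keys / tests_per_second / seconds_per_year
    print(f"{bits:3}-bit key: {keys:.3e} keys, ~{years:.3e} years of exhaustive search")
```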

Lecture 7 (Asymmetric Cryptoalgorithms)

       Secret-Key Cryptoalgorithm: Fields of Use:

o Four main areas:

  Transmitting of confidential information using some (interceptable) networks

  Secure storing of confidential data (with an appropriate key management system)

  Secure erasing of confidential data (rewriting material)

  Creating of secure (independent, unpredictable) key material for cryptography usages

       Most Widespread Algorithms, General Principles:

o    AES (Rijndael) won the NIST competition for a commercial symmetric algorithm in 2001 and is still an excellent (unbreakable and effective) algorithm

o    A secret-key (symmetric) cryptoalgorithm is considered practically secure enough when the keylength is at least 128 bits

o    AES (keylength 128, 192 or 256 bits). Has been the international de facto commercial standard since 2001 and accounts for an estimated 70-75% of all symmetric cryptoalgorithm usage

o    IDEA (keylength 128 bits). Switzerland, late 1980s. Is miraculously still secure for such an old algorithm

o    FOX or IDEA NXT (keylength from 0 to 256 bits). Published in 2003, created by Junod and Vaudenay

o    Blowfish (variable keylength up to 448 bits). Bruce Schneier, 1990s

o    Speck. Keylength between 128 and 256 bits. Developed by the NSA as a low-resource cipher optimized for software

o    Simon. Keylength between 128 and 256 bits. Developed by the NSA as a low-resource cipher optimized for hardware

o    RC4. Stream cipher, keylength between 40 and 256 bits, from 1987. Nowadays it is considered to be too weak

o    DES (keylength 56 bits). Was the U.S. commercial standard from 1977 and was widely used all around the world. For about 20 years already it has been considered weak and breakable

       Block and Stream Ciphers:

o    Symmetric cryptoalgorithms can be divided into block ciphers and stream ciphers. Block ciphers are much more widespread than stream ciphers

o    A block cipher (plokkšiffer) is an enciphering method where the plaintext is divided into blocks of a certain length and these blocks are encrypted separately. How (and whether) the encryption result of one block is related to the previous blocks is determined by the block cipher mode currently used

o    A stream cipher (jadašiffer) is a method where a key sequence (võtmejada) is generated from a given secret key. The encryption process is an ordinary XOR operation between the plaintext and the key sequence (a small sketch follows)
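  A minimal sketch of this XOR step (the key-sequence generator below is a toy built from a hash function, purely to show that XORing twice with the same sequence restores the plaintext; it is not a real, secure stream cipher):

```python
import hashlib

def toy_keystream(key: bytes, length: int) -> bytes:
    """Toy key-sequence generator built from SHA-256 (illustration only, not secure)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

plaintext = b"stream cipher = plaintext XOR key sequence"
key = b"shared secret key"

ks = toy_keystream(key, len(plaintext))
ciphertext = bytes(p ^ k for p, k in zip(plaintext, ks))
recovered  = bytes(c ^ k for c, k in zip(ciphertext, ks))

print(ciphertext.hex())
print(recovered)        # identical to the original plaintext
```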

o    The most convenient, but not sufficiently secure for long plaintexts, is the electronic codebook mode - each bit of the ciphertext depends on only one plaintext block

o    The most used and sufficiently secure mode is the cipher block chaining mode - each bit of the ciphertext depends on all of the previous plaintext

o    Feedback modes are less frequently used, but they allow using a block cipher as a stream cipher in order to produce a key sequence. Their main usage area is secure erasing of data from rewritable media (disks, flash memory, etc.).

o    Block Cipher Modes:

  Electronic Codebook Mode, ECB (koodiraamatu režiim):

  Plaintext blocks are encrypted independently from each other using the same secret key

  Serious disadvantage: each ciphertext block depends on only one plaintext block - this often causes repeats in the ciphertext

  Cipher Block Chaining Mode, CBC (ahelrežiim):

  Before encrypting the next block, the ciphertext result of the previous block is XORed into the plaintext (a library-based sketch is given after the feedback-mode notes below)

  Advantage: one block of ciphertext depends on all of the previous plaintext - no repeats in the ciphertext

  K-bit Cipher Feedback Mode, CFB (šifri tagasiside režiim)

  K-bit Output Feedback Mode, OFB (väljundi tagasiside režiim)

o Cipher and Output Feedback Modes:

  The Cipher Feedback Mode and the Output Feedback Mode are situations where some kind of feedback is organized:

  for the cipher feedback mode the feedback loop involves both the block cipher block and the XORing

  for the output feedback mode the feedback loop involves only the cipher block, which is recurrently started from a certain value (using the initial key)
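  As referenced in the CBC bullet above, a short sketch of AES in CBC mode. It assumes the third-party Python package "cryptography" is installed (pip install cryptography); the key, IV and message are arbitrary example values.

```python
import os
# Assumes the third-party "cryptography" package (pip install cryptography).
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)    # 256-bit AES key
iv  = os.urandom(16)    # initialization vector, one 128-bit AES block

# Two identical plaintext blocks: in CBC mode the two ciphertext blocks still differ,
# because each block is XORed with the previous ciphertext block before encryption.
plaintext = b"SIXTEEN BYTE BLK" * 2

encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
ciphertext = encryptor.update(plaintext) + encryptor.finalize()

decryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
print(decryptor.update(ciphertext) + decryptor.finalize())  # the original plaintext

print(ciphertext[:16].hex())
print(ciphertext[16:].hex())   # differs from the first block despite equal plaintext blocks
```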

o    Inner Structure of a Block Cipher:

  A block cipher usually involves numerous subsequent, similar standard transformations of the plaintext, called rounds (raund). The output of the previous round is the input to the next round

  How the different rounds use (generally different) keys is determined by a key sequence algorithm (võtmejaotusalgoritm). The key sequence algorithm may also be missing; in these cases all rounds use the original key directly

  If such a key sequence algorithm exists, it computes from the initial key special round keys (raundivõtmed) for the different rounds

o    Parameters of a Typical Block Cipher:

  Six important parameters:

  Length of the key

  Length of the block (sometimes equal to the keylength, sometimes not)

  Number of rounds (and sometimes also the number of different round types)

  Presence (or absence) of a key sequence algorithm

  Number of round keys (if a key sequence algorithm exists; sometimes equal to the number of rounds, sometimes not)

  Length of the round keys (sometimes equal to the initial key length, sometimes not)

o    Main Basic Operations Inside the Rounds:

  Are the same as for pre-computer encrypting techniques:

  Substitution (substitutsioon) – replacing of original characters (letters) by another characters (letters)

  Transposition or permutation (transpositsioon, permutatsioon) - changing the order of characters (letters)

o    AES:

  AES has three different versions with different strengths (different key lengths)

  Is a block cipher with a block length of 128 bits

  Uses a key whose length is 128, 192 or 256 bits

  AES: Technical Description:

  For a 128-bit key AES uses 10 rounds, for a 192-bit key 12 rounds, and for a 256-bit key 14 rounds

  A key expansion derives a round key for each round from the initial key

  Each round consists of four subsequent transforms of different types:

  byte sub (asendusbaidi faas)

  shift row (ridade nihutuse faas)

  mix column (tulpade segamise faas)

  add round key (raundivõtme lisamise faas)

  AES: Cryptanalysis:

  An exhaustive search needs to perform 2^128 to 2^256 operations - this is clearly infeasible in practice

  Effective cryptanalytic means are not known up to this time (the algorithm is practically secure)

  The authors of AES (Rijndael) themselves showed this for most of the cryptanalytic methods known at the time (in 1999). 24 years later the situation has not changed in any essential way (despite new cryptanalytic methods having appeared)

  AES: a “Breaking Machine”:

  A "breaking machine" is a parallel computer (a hardware realization of an algorithm) which performs the exhaustive search, with different key intervals searched simultaneously by different chips

  A breaking machine which is able to break DES within one second would spend thousands of millions of years breaking AES (the 128-bit key version)

  The cost of such a machine is (AD 2022) about 20 thousand euros or more

  AES: Realizations:

  It is possible to realize fast AES both in hardware and in software

  Hardware realizations are hundreds of times faster (this depends on the chip-making techniques and the programming language used)

  Both hardware and software realizations of AES can be used as "background" activities, for example during data reading/writing

  Conclusion: all three versions of AES will probably remain practically secure for the next dozen years

Lecture 8 (Main Practical Types of Cryptoalgorithms) - 25/10

       Symmetric cryptoalgorithms or secret-key cryptoalgorithms:

o Traditional (historical) cryptoalgorithms, used for confidentiality purposes

       Asymmetric cryptoalgorithms or public-key cryptoalgorithms:

o Are used for both integrity and key exchange purposes

       Hashes or cryptographic message digests:

o Help asymmetric algorithms ensure integrity

       RSA:

o    Was invented by Rivest, Shamir and Adleman in 1978

o    RSA is considered to be practically secure with no less than 2048-bit keylength

o    For RSA it is easy to calculate the public key from private key, but it’s practically impossible (infeasible) to calculate the private key from public key

o    Public and private key are mathematically related to each other, but finding the private key from public key needs millions of years or even more

o    At present, elliptic curve cryptoalgorithms (ECCs) have overtaken RSA in popularity, but there are many different ECCs

o    Keys of RSA:

  RSA supports an arbitrary keylength

  The most widespread keys are full powers of 2 - 2048, 4096, etc. bit keys (256, 512 and 1024-bit keys are already considered to be weak)

  Unlike with symmetric cryptoalgorithms, an arbitrary bitstream cannot be used as a key. Keys are generated by a special key generation algorithm

o    RSA: Main Concepts:

  e is public exponent (avalik eksponent)

  d is secret exponent or private exponent (salajane eksponent, privaatne eksponent)

  A function whose inverse is infeasible to compute is called a one-way function (ühesuunaline funktsioon). Examples: multiplying two primes versus factorization; discrete exponentiation versus discrete logarithm

  A one-way function whose inverse becomes feasible when some additional information is known is called a trapdoor one-way function (salauksega ühesuunaline funktsioon).

  RSA is just such a trapdoor one-way function

o Mathematical background of RSA:

  An algorithm is called polynomial (of polynomial complexity) if, for a task of length N, the solution time is proportional to N^k for some fixed integer k

  A polynomial algorithm is usually considered a good algorithm: as N grows, the solution time does not grow very fast

  Exponential (exponential-complexity) algorithms are much worse: for a task of length N the solution time is proportional to 2^N

o    RSA Keypair Generation:

  Two big primes p and q (for a 2048-bit key, each 1024 bits long) are generated

  Their product (called the RSA modulus) is calculated: n = p · q

  A number e is chosen such that it is relatively prime to (p-1)(q-1)

  A number d is chosen such that d · e = 1 mod (p-1)(q-1)

  The pair (n, e) is the public key

  The triple (p, q, d) is the private key

o    RSA Enciphering/Deciphering:

  It is possible to encipher numbers (texts) which are less than pq (for 1024-bit p and q, about 2047 bits or 618 decimal digits)

  The enciphering process is a discrete exponentiation: Y = Cip(X) = X^d (mod n)

  Deciphering is also a discrete exponentiation:

  X = Decip(Y) = Y^e (mod n), because (X^d)^e = X (mod n), given that d and e have the property d · e = 1 mod (p-1)(q-1) (a toy numeric sketch follows)
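  As mentioned above, a toy numeric example of these formulas with deliberately tiny primes (completely insecure; real keys use primes of about 1024 bits):

```python
# Toy RSA with tiny primes, following the formulas above (not secure!).
p, q = 61, 53
n = p * q                   # RSA modulus: n = p * q = 3233
phi = (p - 1) * (q - 1)     # (p-1)(q-1) = 3120

e = 17                      # public exponent, relatively prime to phi
d = pow(e, -1, phi)         # private exponent: d * e = 1 (mod phi) -> d = 2753

X = 65                      # "plaintext" number, must be smaller than n

# The two exponents invert each other, whichever is applied first:
Y = pow(X, e, n)            # transforming with the public exponent ...
assert pow(Y, d, n) == X    # ... is undone by the private exponent
S = pow(X, d, n)            # "signing" with the private exponent ...
assert pow(S, e, n) == X    # ... is verified with the public exponent

print(f"n={n}, e={e}, d={d}, X={X}, Y={Y}, S={S}")
```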

o    Why RSA Is Practically Secure?:

  Statement 1: whoever knows the public key (n, e) and the plaintext X, but does not know d, p and q, cannot calculate

  Y = Cip(X) = X^d (mod n)

  without p, q and d, i.e. cannot encipher

  In order to know d, he/she must know both p and q (by the definition)

  It is infeasible to calculate p and q from n: no polynomial algorithm is known for factorization

  Statement 2: whoever knows the public key (n, e) and the ciphertext Y = Cip(X) = X^e (mod n),

  but does not know d, p, q and X, cannot find the plaintext X

  Because X = Y^d (mod n), it is necessary to find d

  Finding d assumes that p and q are known or that the discrete logarithm can be calculated in practice

o    Practical Security of RSA:

  The security of RSA is practical – theoretically all is computable (by an exponential amount of calculations) but in practice it’s infeasible

  From the private key it is very simple to find the public key

  It is infeasible to find the private key from the public key

  Without the private key it is infeasible to encrypt in such a way that the result is decryptable with the public key

  If a message is encrypted with the public key, it is infeasible to decrypt it using only the public key

o    RSA: Finding Primes:

  There exist practically usable prime number generators
  Usually a random number is generated and its primality is tested
  Most of these tests are based on the famous Euler-Fermat theorem: if a and n are relatively prime, then

  a^Φ(n) = 1 (mod n)

  where Φ(n) is the number of integers which are less than n and relatively prime to n. If n is prime, then

  Φ(n) = n-1

  Based on this fact, a series of primality tests can be generated (see the sketch below)
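A minimal sketch of such a test in Python (a simplified Fermat test built directly on the theorem above; practical key generators use stronger variants such as Miller-Rabin, which also handle the rare composites that fool this test):

# Simplified Fermat primality test: if n is prime and gcd(a, n) = 1,
# then a^(n-1) = 1 (mod n); a failing witness proves n composite.
import random

def fermat_test(n, rounds=20):
    if n < 4:
        return n in (2, 3)
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:   # witness found: n is certainly composite
            return False
    return True                     # n is probably prime

def random_prime(bits):
    while True:
        candidate = random.getrandbits(bits) | 1 | (1 << (bits - 1))  # odd, full bit length
        if fermat_test(candidate):
            return candidate

print(random_prime(256))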

o    RSA: Practical Details of Algorithm:

  For finding an appropriate e there are also tests which ensure that it is relatively prime to (p-1)(q-1)

  The greatest common divisor can be checked with the Euclidean algorithm

  Other calculations (enciphering and deciphering) are a matter of implementing modular arithmetic (can be done fast both in hardware and software)

o    RSA: Practical Properties:

  Enciphering and deciphering, which use modular arithmetic, are quite fast

  Despite this fact, RSA is some thousand times slower than symmetric algorithms (AES, IDEA, Speck, Simon etc)

  Keypair generation is much slower than enciphering/deciphering. However, it can be realized even in software within a couple of seconds

o    Secure Usage of RSA:

  RSA supports any key length (length of n = pq)

  RSA is considered to be practically secure from a 2048-bit key length upwards

  The most-used key lengths are (512, 1024), 2048 and 4096 bits (the first two are already practically insecure)

  A 2048-bit key: this is a composite number of 620 decimal digits which has two 310-digit prime factors

o    Cryptanalysis of RSA:

  Factorization of a 70-digit number takes a typical personal computer some minutes

  Factorization of a 100-digit number – less than a day

  A 140-digit number was factorized in 1996 within 5 years by the common effort of many computers

  The biggest factorized number (AD 2009) is a 232-digit number (a 768-bit number)

  Factorization of a 300-digit number (1024-bit RSA) needs some millions of years (even if we involve cloud computing possibilities)

  It is suspected that in 5-10 years 1024-bit RSA might become practically insecure, but 2048/4096-bit RSA will probably still remain secure

  A powerful quantum computer can also factorize RSA with a small key length, but not yet RSA with a 1024-bit key length

       Collaboration of RSA with Symmetric Cryptoalgorithms:

o    RSA is unsuitable for the encryption of long plaintexts (approx. a thousand times slower than a symmetric algorithm)

o    If we use RSA for key exchange purposes, we should only encrypt the symmetric algorithm key with it

o    If we use RSA for digital signature (integrity) purposes, it is always used together with cryptographic hash algorithms. Therefore, only the hash value is actually encrypted (signed) by RSA

       RSA versus Elliptic Curve Cryptoalgorithms (ECCs):

o    Over the past decade, public-key cryptography based on elliptic curves has been more popular in use than RSA

o    Algorithms based on elliptic curves are 5-10 times more effective than RSA (e.g. P-384 corresponds to RSA with a key of approx. 3000 bits in terms of its security)

o    However, there are many different algorithms based on elliptic curves (specific curves) and the security of the algorithm depends a lot on the curve

o    There is a risk of using an elliptic curve that is quite rare and therefore not sufficiently investigated cryptanalytically

       Edmonds’ postulate (1965):

o    An algorithm is considered to be good if its time complexity can be represented by a polynomial O(n^k) in the input (task length), where k is some integer

o    Such algorithms are called polynomial-complexity algorithms (polünomiaalse keerukusega algoritmid)

o    Non-polynomial functions (factorial, exponential) grow drastically faster than polynomials

       Other Public-Key Cryptoalgorithms:

o    ElGamal

o    DSS

o    Paillier's system

o    Compared with RSA and elliptic curve algorithms, they are much less widely used

Lecture 9 (Cryptographic hash functions. Cryptoprotocols, TLS) - 1/11

       Cryptographic Hash or Cryptographic Message Digest:

o    A cryptographic hash (krüptoräsi) or cryptographic message digest (krüptograafiline sõnumilühend) or fingerprint or thumbprint is a digest with a fixed length which is computed from an arbitrary-length message using a one-way function

o    A one-way function (ühesuunaline funktsioon) is a function which is easy to compute but whose inverse is infeasible (practically uncomputable)

       Cryptographic Message Digest: Usage:

o    If we have a given message-hash pair and the hash corresponds to the message, then we can always be sure that the hash was calculated from the given message

o    The use of cryptographic hashes is to protect integrity

o    The main reason for using hashes is that a public-key cryptoalgorithm is unable to process large amounts of data (it is about 1000 times slower than a symmetric algorithm)

       Inner Structure of Old-Type Cryptographic Hashes:


An essential part of old-type cryptographic hashes is a compression function (tihendusfunktsioon) F, which is a one-way function and produces a fixed-length output from a longer fixed-length input. The compression function F is used in hash functions iteratively:

       Mandatory Properties of Message Digest (Hash Function):

o    Any (minor) change of the message must cause a complete change of the hash

o    The hash must be easily computable (like a typical symmetric cryptoalgorithm)

o    The hash function must be a one-way function: for a given hash it must be infeasible to find any corresponding message which gives this hash

o    For a message-hash pair, computing a second preimage must be infeasible (the hash function must be weakly collision-free or second-preimage-attack resistant)

o    It must be infeasible to find any message pairs which give the same hash (the hash function must be collision-free)

o    Compression function F must be collision-free (hash function must be pseudo-collision-free)

       Birthday Paradox:

o    Birthday Paradox: the probability that among N people the birthdays of two different people coincide grows proportionally to N², i.e. quite fast

o    Reason: adding a new person adds new pairs with all the previous people

o    For N people there are N² – N different pairs

o    For N=23 the probability is already greater than ½

o    Influence of Birthday Paradox on Hash Functions:

  Conclusion from the Birthday Paradox: if the output of the hash function is N bits long, then the number of trials K that gives two identical hashes (with probability ½) is K ≈ 1.17 · 2^(N/2)

  The simplest cryptanalytic attack against an N-bit hash function (so-called exhaustive search for hash functions) needs to consider 2^(N/2) different variants
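A small worked check of these numbers in Python (the 365-day model is the classical illustration; the 2^(N/2) figure is the corresponding work factor for an N-bit hash):

# Probability that at least two of `people` persons share a birthday.
def birthday_collision_probability(people, days=365):
    p_no_collision = 1.0
    for i in range(people):
        p_no_collision *= (days - i) / days
    return 1 - p_no_collision

print(birthday_collision_probability(23))   # about 0.507, already greater than 1/2

# For an N-bit hash, roughly 1.17 * 2**(N/2) random messages give a collision
# with probability 1/2, so a 256-bit hash needs about 2**128 trials.
N = 256
print(1.17 * 2 ** (N / 2))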

       Good Hash Functions for Practical Use:

o    SHA-2, more precisely SHA-256 (sometimes also called SHA2-256), the main representative of the SHA-2 family. Was constructed on the basis of MD4 in 2001, strengthening the security. Hash length is 256 bits (32 bytes)

o    SHA-3 family of 256, 384 and 512 bit (32, 48 and 64 bytes) long hashes – constructed in 2010 in order to resist the new types of attacks that appeared against SHA-1 and SHA-2

o    Practically used hash functions should compute at least a 256-bit hash (twice as long as the minimal key length of a practically secure symmetric cryptoalgorithm, i.e. 2 x 128 bits)
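For illustration, both SHA-256 and SHA3-256 are available in Python's standard hashlib module (a minimal sketch):

import hashlib

data = b"contents of some document"
print(hashlib.sha256(data).hexdigest())    # SHA-2 family: 64 hex characters = 256 bits
print(hashlib.sha3_256(data).hexdigest())  # SHA-3 family: also 256 bits, sponge-based

# Any minor change of the message causes a completely different hash.
print(hashlib.sha256(b"contents of some document.").hexdigest())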

       Insecure (Strongly Non-Recommended) Hash Functions:

o    MD2, MD4, MD5 – hash length is 128 bits (16 bytes). Made by Ron Rivest. Both collisions and practical breaking exploits were found already 20 years ago

o    Completely broken a long time ago

o    SHA-1 – hash length is 160 bits (20 bytes). Not yet completely broken, but during the last 10 years its security has been considered too weak to use in practice (a collision was found in 2017)

       Backsight to Classics - Detailed Overview of MD5:

o    Hash length is 128 bits (16 bytes)

o    Was constructed by Ron Rivest in 1991

o    Consists of four different rounds (raund), which process the message in 512-bit portions

o    During each round the result of the previous round is taken and "mixed" into the next 512 bits of the message

       MD5: Security and Analysis:

o    A 128-bit hash is very short with regard to the Birthday Attack (it should be at least 256 bits)

o    In 1993 collisions were found for the compression function (Boer, Bosselaers)

o    In 2004 collisions were found for the full algorithm (Wang, Feng, Lai, Yu, one hour on a host computer)

o    In 2005 a practical breaking of signatures based on MD5 succeeded (Lenstra, Wang, Weger)

o    In 2017 collisions could be found with an exhaustive-search amount of 2^24.1, within one minute (Stevens)

       SHA-1: General Description:

o    Is structurally similar to MD5

o    Was constructed in 1996 by modifying MD4, making its procedures longer and more secure

o    Length of hash (digest) is 160 bits or 20 bytes

o    Has four rounds. In each round the result of the previous round is taken and "mixed" into the next part of the message using special functions

       SHA-1: Cryptanalysis:

o    Wang, Lai, Yu (2005): In the case of SHA-1, collisions can be found through the review of 2^63 variants, which is about 100,000 times faster than with exhaustive search

o    Donald, Hawkes, Pieprzyk, Manuel (2008): Collisions can be found by reviewing 2^52 to 2^57 variants

o    Finding collisions easily does not actually make SHA-1 reversible, so it is not yet easily breakable in practice. But such methods may appear within a couple of years

o    Marc Stevens (2012) estimated the cost of breaking one hash at ca 1 million EUR, with an exhaustive-search volume of 2^58

o    The Shattered attack (2017) needs roughly only 2^63.1 SHA-1 evaluations; in February 2017 the first actual collision was found

       SHA-1: Use in Emergency Situations:

o In emergency situations, temporary usage of SHA-1 is allowed only in the following cases:

  In key strengthening mode (võtmetugevdus) – the hash function is used twice in a row. It makes the attacking time much longer

  Salting (soolamine) of passwords and keys – before using the hash function some random bitstream (a so-called salt) is added. It makes dictionary attacks (sõnastikründed) much more difficult to realize

o However, old hash functions enhanced by these two techniques should only be used where the most advanced hash functions cannot be used

       SHA-256 (SHA-2): Description and General Facts:

o    It is very similar to classical MD5 in structure

o    There is a family whose best-known representative is SHA-256

o    Created in 2001 on the basis of the SHA-1 ideology, but making both the intermediate values and the hash longer

o    Its architecture is similar to both classic MD5 and SHA-1 – there are rounds; at the beginning of each round, the final result of the previous round is taken and "mixed" into the next part using special conversion functions

o    Today, SHA-256 is de facto standard of commercial cryptography

       SHA-3 Competition:

o    In 2006 a new NIST hash competition (the SHA-3 contest) was launched, since SHA-2, a development of SHA-1, was not considered to be very sustainable in the long term (against new types of attack)

o    In 2009, 14 finalists were found

o    On October 2, 2012, the contest ended. The winner was Keccak. It became SHA-3, the official NIST standard, on August 5, 2015, as a result of a slight conversion.

o    BLAKE was also a finalist - BLAKE2 was developed, which is considerably faster than all previous hash functions

       Sponge Structure as a Basis of SHA-3:

o Sponge structure (käsnastruktuur) is a two-step action:

  Absorbing (absorbeerimine) - there is one large array, the initial part of which is first changed step by step with bits of material to be hashed with F.

  Squeezing (pigistamine) – within the array the conversions are made with the function F and the hash is “read out” between the conversions

  F is an analog of compression function for a sponge structure

       SHA-3 Variants and Inner Structure:

o For SHA-3 the state array S consists of a 5x5 matrix of 64-bit words, with a total of 1600 bits

o Depending on hash length, there are subversions SHA3-224, SHA3-256, SHA3-384 and SHA3-512

o The early replacement of SHA-2 with SHA-3 was done because cryptographers believed that the new sponge structure (käsnastruktuur) is more secure against new types of attacks than the old structure

       Practical Usage of Hash Functions:

o    Are used for ensuring integrity, both with and without public-key cryptoalgorithms, where they are very important mechanisms

o    Are important components of digital signatures and time stamps

       Message Authentication Code:

o    A message authentication code (MAC, sõnumi autentimiskood) is a so-called keyed hash function, where both computing and verifying the hash require, besides the message, knowledge of a certain secret key

o    It is a necessary replacement for a hash function when it's needed to limit the subjects who can authenticate/verify the message to the owners of a key

o    Differs from a public-key cryptoalgorithm by the fact that both the computing and the verifying process for a MAC are performed with the same key

o    Sometimes message authentication codes have their own specific algorithms, but they can easily be constructed by combining hash algorithms and symmetric cryptoalgorithms, as sketched below:
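A minimal sketch of such a combined construction using Python's standard hmac module (HMAC-SHA256, one common way to build a MAC from a hash function; the key and message below are placeholders):

import hmac, hashlib

key = b"shared-secret-key"                 # known only to the communicating parties
message = b"message to be authenticated"

# Computing the MAC requires both the message and the secret key.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# Verification recomputes the MAC with the same key and compares in constant time.
expected = hmac.new(key, message, hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, expected))  # True only for an unmodified message and the right key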

       Essence of a Cryptoprotocol:

o    A protocol (protokoll) determines which information moves between different subjects and who/how/when transforms it

o    A cryptoprotocol is a protocol where the transformations include different cryptoalgorithms (symmetric, asymmetric, hash algorithms) and/or key generation

o    There are a lot of different cryptoprotocols. The most widespread cryptoprotocol (on the Internet) is TLS (Transport Layer Security)

       TLS: Main Properties and Facts:

o    Is constructed to work on the Internet, i.e. in a network based on TCP/IP

o    Enables authentication of the different (both) parties

o    Enables exchanging a symmetric algorithm key for the secure transfer of information, and transferring information securely

o    Is included in higher-level protocols, adding security to the basic functionality:

  ssh instead of telnet

  https instead of http
  secure ftp/sftp instead of ftp

       TLS Channel:

o TLS makes a secure channel (turvaline sidekanal) over a network which has the following three properties:

  The channel is private. After the parties have exchanged the encryption keys, all transferred data are encrypted

  The channel is authenticated. Both two-side and single-side authentication are possible

  TLS enables checking the successful receipt of all packets (a necessary property for packet-based information transfer – the TCP/IP protocol)

       TLS: Main Principles:

o Under TLS connection there can be distinguished two phases:

  handshaking phase (autentimisfaas)

  message transferring phase

o The connection is considered to take place between two unequal parties, a client and a server

o It's always mandatory to authenticate the server
o Authentication of the client is voluntary (as needed)

       TLS Handshaking:

o In a slightly simplified view, it includes the following activities (client A starts to communicate with server B):

  A says “Hello” to B and mentions which cryptoalgorithms he/she can use

  A demands that B prove that he is B, and sends a generated nonce to B

  B writes a text “I am B” and makes from it a hash or message digest

  hash(“I am B” + nonce)

  B signs the hash with his/her private key

  sigb (hash(“I am B” + nonce))

  B sends to A his public key (certificate), a text “I am B” and a signature

  sigb (hash(“I am B” + nonce))

  A, receiving these data, verifies the signature, ensuring that his/her communication partner is really B. A puts the public key of B into his directory

  Therefore, client A has authenticated server B

  If necessary, B can authenticate A in a similar way (if two-side authentication is needed)

o    A generates a symmetric cryptoalgorithm key (the primary key) K and puts it in his directory. A encrypts K with the public key of B and sends it in encrypted form to B

o    B deciphers the primary key K with his private key and stores it into his directory

o    Therefore, the handshaking phase has ended and the corresponding symmetric algorithm key is stored by both parties
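A minimal client-side sketch of the above using Python's standard ssl module (the host name is only an example; the handshake authenticates the server via its certificate, after which all traffic on the socket is encrypted with the negotiated session keys):

import socket, ssl

hostname = "example.com"                    # example server for illustration
ctx = ssl.create_default_context()          # loads trusted CA certificates, verifies the server

with socket.create_connection((hostname, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=hostname) as tls:   # handshaking phase
        print(tls.version())                # e.g. 'TLSv1.3'
        print(tls.getpeercert()["subject"]) # identity proved by the server's certificate
        # message transferring phase: everything sent from here on is encrypted
        tls.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
        print(tls.recv(200))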

       TLS: Communication Phase:

Presumption: A and B start to communicate; the handshaking phase has already been performed earlier and the corresponding primary key K is already stored in their directories.

  A generates a session key S, encrypts it with the primary key K and sends the encrypted key to B

  B deciphers the session key S with the stored primary key K

  All communication between A and B is performed in encrypted form using the session key S

       TLS: Opportunities and Applicability:

o    SSL/TLS is able, without certificates and their infrastructure, to ensure that the other party in the transferring phase is the same as the other party during handshaking

o    To prove some information about the other party during handshaking, some additional information is necessary – usually in the form of a certificate (sertifikaat) of the other party. A certificate is signed by a trusted third party (usaldatav kolmas osapool)

       SSL versus TLS:

o    TLS (Transport Layer Security) is a successor of SSL, where numerous disadvantages are eliminated

o    The latest version, TLS 1.3, is specified in detail in RFC 8446 (August 2018)

o    In comparison with SSL3 some weaknesses are repaired

o    For SSL1 and SSL2 some serious disadvantages have been discovered – in practice their usage is prohibited, as is the usage of SSL3

       TLS Security and Problems:

o    If B has his private key and the signed message has already been sent to A, it's impossible to masquerade as B to A – this is protected by the cryptographic algorithms

o    It's impossible to eavesdrop on the communication between A and B without knowing the secret keys

o    But there remains a problem: if instead of real B the communication with A was started by a “false B” it can’t be discovered by A

o    This problem cannot be solved only by TLS – it needs some additional resources (usually certificates)

       Other Cryptoprotocols:

o    DNSSEC (Domain Name System Security Extensions) – replaces ordinary (unsecure) DNS

o    IEEE 802.11 – wireless local area network protocol

o    IPSec (IP Security Protocol)

o    S/MIME (Secure MIME) – replaces the ordinary (unsecure) mail service

o    SSH (Secure Shell) – secure remote access

Lecture 10 (Electronic Signature) - 8/11

       Evidentiary Value of a Digital Document: a Big Problem:

o    A fact: digital information is usually kept and stored in a way which can't permanently bind the data to the data carrier. Multiple-writable media (hard disks, flash memory etc) are in mass use

o    Conclusion: for digital documents we can't use the same methods as for paper documents – a handwritten signature doesn't help us because there isn't any medium to which the signature is related

o    Therefore, we must achieve evidentiary value in a different way

       Achieving the Integrity:

o Data integrity (andmete terviklus) means ensuring that the data originate (the information was stored into the data) from a certain subject and haven't been altered (either accidentally or by a deliberate act)

       Digital Signature:

o    Since it's impossible to achieve integrity (evidentiary value) at the level of the data carrier, the only possibility is to use a technique which binds the data itself to its creator using mathematical relationships

o    This technique is called electronic signature or e-signature or legal digital signature (e-allkiri, digitaalallkiri, digiallkiri), which is up to the present the only way we can ensure provable integrity under the multiple-rewritable-media paradigm and the client-server paradigm

       Technical and Legal Views to Digital Signature:

o A legal digital signature (digitaalallkiri, digiallkiri) or e-signature (e-allkiri) or electronic signature is a legal concept which gives the document an evidentiary value, just as a handwritten signature gives such value to a paper-based document

       Essence of (Legal) Digital Signature:

o (A legal) digital signature (digitaalallkiri, digiallkiri) or e-signature (e-allkiri) is an additional data set which is added to the signable document (signable data set) and which is created by the signer (allkirjastaja) from both the signable document and the private key of the signer using mathematical operations

       Giving of a e-Signature:

o In order to give an e-signature (legal digital signature), the signer must have an earlier-generated public-key cryptoalgorithm keypair, which consists of:

  private key

  public key

       The Role of Hash in e-Signatures:

o An e-signature is usually given to the hash of a document, not to the original (long) document, as sketched below
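A minimal sketch of the cryptographic core of this, using the third-party Python package cryptography as an assumed example library (the legal side – certificates, time-stamps, validity confirmations – is not shown):

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# The signer's earlier-generated keypair.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

document = b"contents of the signable data set"

# The signature is computed over the SHA-256 hash of the document with the private key.
signature = private_key.sign(document, padding.PKCS1v15(), hashes.SHA256())

# Anyone holding the public key can verify; an altered document or signature
# raises cryptography.exceptions.InvalidSignature.
public_key.verify(signature, document, padding.PKCS1v15(), hashes.SHA256())
print("signature verifies")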

       Private Key and its Usage:

o    Anyone who has the private key of a signer can give an unauthorized e-signature in the signer's name

o    Therefore we must keep the private key extremely carefully, preventing its unauthorized usage and spread

       Split Key in the Estonian Smart-ID Project:

o Estonian Smart-ID is an exception – it doesn't use non-reverse-engineerable technology for creating and storing the private key. Estonian Smart-ID uses a split private key principle:

  One part of private key is held in physically and organizationally secured server environment

  Another part of private key is held (in secured and encrypted form) in Smart ID app in smartphone

  There is rigid and secure session management from the server side – it avoids unnoticed and unauthorized duplication of the Smart-ID app (client side)

       Principles of Certification:

o    Binding of personal identification data (name, personal identification number) to a public key is called certification (sertifitseerimine)

o    A result of certification (by the means of legal digital signature) is a certificate (sertifikaat) which is always a digital document

o    Certificates are usually issued by special certification authorities (CAs, sertifitseerimiskeskus, sertifitseerimisteenuse osutaja) which act as a trusted third party (usaldatav kolmas osapool)

o    A certificate (sertifikaat) is a digital document created and signed digitally by a certification authority. A certificate typically consists of the personal data of the signature owner, the public key of the signature owner and specific certificate-related data (issuer, expiration date, issuing time etc)

o    Potential problems with CAs:

  We can't exclude cases where a private key still ends up in the hands of unauthorized persons. In these cases the unauthorized owner of the key can produce an unlimited amount of e-signatures

  Solution: we must allow the revocation of certificates (sertifikaatide tühistamine)

       Timestamping:

o    We need to achieve long-term evidentiary value of signed documents (dokumentide pikaajaline tõestusväärtus): we must be able to prove the evidence even a couple of years or even decades after the hypothetical revocation of the corresponding certificates

o    We must add a provable time-stamp to all activities/results (signatures, revocation acts etc)

o    We must make it possible to compare different time-stamps before signature verification (is there any revocation act in force whose time is earlier than the time of the signature?)

o    A time-stamp (ajatempel) is an additional data set which is added to the original data set. It is possible to provably compare the creation times of different time-stamps (data sets)

o    The time-stamp authority calculates the next time-stamp from the following sources using a one-way function (hash), as sketched after this list:

  from the data (hash), which is sent to time-stamp authority

  from the previously issued time-stamp
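A minimal sketch of this linking in Python (a simplification of real time-stamping protocols; SHA-256 stands in for the one-way function):

import hashlib

def next_timestamp(document_hash: bytes, previous_timestamp: bytes) -> bytes:
    # The new time-stamp depends on both the submitted hash and the previous time-stamp.
    return hashlib.sha256(previous_timestamp + document_hash).digest()

ts0 = b"\x00" * 32                                                 # initial value of the authority
ts1 = next_timestamp(hashlib.sha256(b"document A").digest(), ts0)
ts2 = next_timestamp(hashlib.sha256(b"document B").digest(), ts1)  # provably issued after ts1
print(ts1.hex(), ts2.hex())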


       Shortcomings of Revocation List Technique:

o Each signature verification needs an online query

o The revocation list will become a real bottleneck

o What will happen to signed documents when the certification authority (revocation list) disappears (has been destroyed)?

       A Suitable Technique: Validity of Approval:

o Validity of approval (kehtivuskinnitus) is a query to the certification authority which is made immediately after giving a digital signature

       Certification Infrastructure:

o Certification Infrastructure (sertifitseerimise taristu) or Public-Key Infrastructure (PKI, avaliku võtme taristu) consists of the five following mandatory components necessary for the secure giving and verifying of digital signatures:

  private-key container (typically hardware-based and realized in a non-reverse-engineerable way)

  certification authority (CA)

  validity of approval service (at the CA)

  time-stamping authority

  organization and coordination of services (usually in national or EU level)

       Data Format:

o    Data format (andmevorming, vorming) – a description of how different types of information – text, picture, voice, video etc – are coded into a sequence of 0's and 1's

o    A pre-agreed (standardised) data format gives the data (the data file) a concrete and unique meaning. If we have the data but do not have the data format description, then we do not have the information carried by the data

       Electronic Signature in Estonia and in EU:

o    The Estonian Digital Signature Act was developed in 1996-99 and came into force in 2000. Estonia was one of the very first countries in the world to adopt such an act and also developed the corresponding PKI and mass-spread ID cards as signing devices

o    Since October 2016 the Estonian Digital Signature Act is no longer in force; it has been replaced by the corresponding EU Regulation 910/2014 "On electronic identification and trust services for electronic transactions in the internal market" (eIDAS)

o    The Estonian digital signature infrastructure (PKI) has been built on the basis of the Estonian national act during the last 21 years (now it also complies with the EU regulation)

Lecture 11 (Organisational and legislative side of electronic signatures in EU and Estonia) - 16/11

       A legal digital signature (digitaalallkiri, digiallkiri) or e-signature (e-allkiri) or electronic signature is a legal concept which gives the document an evidentiary value, just as a handwritten signature gives such value to a paper-based document.

       E-signature in Estonia:

o    Estonian Digital Signature Act was developed in 1996-99 and came into force in 2000

o    The Estonian digital signature infrastructure (PKI) has been built on the basis of the Estonian national act during the last 16 years (now it also complies with the EU regulation)

o    In 2001 the first certification authority – the predecessor of SK ID Solutions – was registered. It also received a time-stamping license and became a major subject in Estonia in the field of digital signatures. According to eIDAS, SK is a trust service provider

o    The validity of approvals (OCSP confirmations) service is charged for (for a physical person up to 10 signatures per month are free of charge)

o    In 2002 the first Estonian ID cards were issued. Today almost 100% of Estonian residents are equipped with such cards (it's mandatory)

o    In late 2002 the practical digital document format DigiDOC and appropriate software for creating such documents were introduced. At that time it produced .ddoc files (later there was a migration to .bdoc and .asice)

o    In 2011 DigiDOC3 was introduced, which uses a 2048-bit RSA keypair

o    In October 2017 there was an extraordinary migration of Estonian ID cards to elliptic curve algorithms (P-384) in connection with a vulnerability found in the Infineon-made ID card chip architecture

o    At the end of 2018, the .asice format became the default e-signature (digital signature) format in Estonia. BDOC, which had been in use since 2014-15, disappeared from active use

o    Qualified electronic time stamp (kvalifitseeritud e-ajatempel):

  an electronic time stamp which meets some specific technical requirements

o    Validation data (valideerimisandmed):

  data that is used to validate an electronic signature or an electronic seal

o    Validation (valideerimine):

  the process of verifying and confirming that an electronic signature or a seal is valid

       EU eIDAS regulation 910/2014, different types of e-signatures:

o Electronic signature (e-allkiri):

  Data in electronic form which is attached to or logically associated with other data in electronic form and which is used by the signatory to sign

o Advanced electronic signature (täiustatud e-allkiri):

  an e-signature which meets a number of technical conditions

o Qualified electronic signature, QES (kvalifitseeritud e-allkiri):

  an advanced electronic signature that is created by a qualified electronic signature creation device, and which is based on a qualified certificate for electronic signatures

o All Estonian solutions (ID-card, Mobile-ID and Smart-ID) belong to QES

o Demands to Qualified E-Signature:

  Mandatory QES properties according to eIDAS:

  Qualified e-signature is uniquely linked to the signatory

  Qualified e-signature is capable of identifying the signatory

  Qualified e-signature is created using electronic signature creation data that the signatory can, with a high level of confidence, use under his sole control

  Qualified e-signature is linked to the data signed therewith in such a way that any subsequent change in the data is detectable

  The Estonian Digital Signature Act (in force between 2000 and 2016) also involved quite similar (and even slightly higher) demands

Lecture 12 (Security of Digital Record Management) - 22/11

       Digital Record Management – Security Aspects:

o    Digital record keeping is a record keeping where documents are in digital form

o    Essential aspects from the security view:

  Usage of digital signature

  Managing of multiple notes (props) in digital form

  Digital document archiving

  Ensuring the integrity of digital registers

       Advantages of E-Signature in Digital Record Management:

o    If we get a digitally signed document and the signature verifies, then we can always be sure that the author of the document has signed it using his/her real name

o    A digitally signed document is certainly signed by the person whose name is included in the signature (certificate)

o    When the e-signature verifies successfully, we can always be sure that the document itself hasn't been changed after the signing process

o    We can always prove the creating (signing) time of e-signed document.

       First Disadvantage of E-Signature Regarding Digital Records Management:

o    There's a possibility of stealing the ability to give a signature – we must always carefully monitor that the private key remains only in the hands of the keypair owner

o    The biggest risk is 0-day (zero-day) malware which takes control over the console – an appropriate countermeasure is an ID-card reader with a PIN pad

       Second Disadvantage of E-Signature regarding Digital Records Management:

o We should exclude the e-signing of documents which may have multiple meanings. We must reduce the file formats we e-sign and must be aware of them.

       Third Disadvantage (?) of E-Signature regarding Digital Records Management:

o    A digitally signed document must remain in digital form forever, through all phases of its lifecycle. Any converted or printed version will lose the evidentiary value of the document

o    Actually it's not a disadvantage but a special property – why should we return to paper?

       Activities Associated with E-Signatures:

o Four main practical activities (all of which have to take into account different security aspects):

  Choosing of software for an e-signature (and a trust service related to the actual product)

  Giving of a signature

  Verifying of a signature
  Cancellation of a certificate

       Signing Process - Recommendations:

o    Keep your computer malware-free and make sure to obey appropriate e-hygiene rules

o    Keep and enter PIN codes in a secure way (place)

o    Keep the ID-card connected to the computer for as short a time as possible (only for the signing period)

o    Check the trust service policy and signature creation device requirements and make sure it gives a qualified digital signature

o    Take a time-stamp and validity of approval immediately after signing

o    Do not spread or store digital signatures without a time-stamp and validity of approval (and do not use those standards which need online connections during the verification phase)

o    If the signature verification fails, report it to the signer and ask him/her to send the signed document once more – probably it has been changed accidentally (it's quite unlikely that it is really faked)

o    If the software or signature format is unfamiliar, familiarize yourself with the software, the trust service policy and the signing policy, and make sure that the e-signature is a qualified e-signature

o    Always verify the signature before you open and use the signed document

o    Do not accept any digital signatures without a time-stamp and/or validity of approval, nor those which need an online connection during the verification phase

       Certificate Cancellation - Recommendations:

o    Keep your ID-card and PIN code securely and carefully

o    Always cancel your certificate immediately when you have serious doubts that anyone has stolen both your card and PIN code

o    Familiarize yourself with the actual cancellation policy of your trust service (for example Estonian SK ID Solutions has the short telephone number 1777)

o    Take into account that the necessity of certificate cancellation may arise in very unexpected situations

       Problems of Original and Copy of a Document:

o    For paper documents we always distinguish the original and copies. There is always a certain (fixed) number of originals

o    For digital documents there are no copies – there are as many originals as we keep different copies of a file

o    A copied instance does not differ from the original instance

       Problems of Different Props (Notes):

o    If we add an additional prop (e-signature) to a paper document, then the previous version ceases to exist. When we add an additional prop (with an e-signature) to a digital document, then the previous version might be stored and remain

o    Conclusion: in digital record management we should always carefully distinguish all different versions (versions with a different number of signatures). They may all exist

       Digital Archiving: Differences from Paper Documents:

o    A paper document is usually archived after the end of its active use. A digital document is usually archived immediately after the last change (last prop/signature) is added to the document, and usually before its active use

o    Archiving of digital document is always performed in original, digital form

o    The evidentiary value of archived digital document is always also ensured by a digital signature

       Main Theoretical Problems of Digital Archiving:

o Problems which arise, differently from paper documents:

  data carrier preservation problem

  data format problem

  evidentiary value (integrity) problem

       Data Carrier Preservation Problem:

o    We should choose such a data carrier type which preserves its physical properties and enables the data to be read for a long period of time

o    Additionally, we must ensure that after a couple of years (decades, centuries) there will be available such a device, being able to read this type of data carrier

o    Additional problem: we don't know the long-term behaviour of new types of data carriers

o    Solution:

  There are no problems with technical devices (prototypes exist of all machines that mankind has constructed up to the present)

  The long-term wearing ("aging") of data carriers is really a big problem (especially for new materials which have not yet been tested over a long time)

  But for digital data it is actually a pseudoproblem we can successfully overcome: we can always copy the data to a new data carrier

       Data Format Problem:

o Problem: we must ensure that contemporary file formats (RTF, DOC, HTML, MP3, GIF) can be read even after decades and centuries

       Evidentiary Value Problem:

o Main difference between paper documents and digital documents: the evidentiary value of a paper document is based on physical properties which remain intact in the long-term perspective

o The evidentiary value of a digital document is based on mathematical properties of cryptoalgorithms which become breakable in the long-term perspective

       E-Signature Security is Constantly Diminishing Over Time:

o E-signature security is based on three principles:

  Security of the public key encryption algorithm – without the private key it is not possible to encrypt so that the result can be deciphered with the corresponding public key

  Hash algorithm security – message (document) cannot be changed so that hash remains the same

  Security of the chip and assistive technology used

o All three of these – especially the first two – are aging. The "best before" of encryption algorithms usually lasts 10-20 years

       Dilemma - to Preserve Data or Data Carrier?:

o One of the main properties of digital documents: unlike a paper document, a digital document isn't permanently related to its data carrier and can be copied an infinite number of times

  preserving of a paper document = preserving of the paper sheet

  preserving of digital document = preserving of a file

       When E-Signature Becomes Insecure:

o    E-signature (with a digital document equipped with it) becomes insecure when breaking techniques are found either in the hash algorithm, public key encryption algorithm or other technology (ID card, timestamp, etc.)

o    When any of these factors becomes practically breakable, the whole digital document will be completely compromised

       "Hissing" – Slowly Becoming Insecure:

o There are two reasons:


  Appearance of faster computers (the computer performance/price ratio doubles every year and a half)

  Appearance of new cryptanalytic means

o In Estonia the renewal of digitally signed documents (over-stamping) has already taken place once – in 2017, with TeRa. Sometime it will definitely happen again

       “Bang” – Fast Becoming Insecure:

o    Occasional discovery of an undocumented feature of hardware/software which can be considered a serious vulnerability

o    It is believed that an appropriate solution will be found. For example, in Estonia in 2017, when an error was detected inside the Infineon-made ID card, the problem was solved very quickly

       Solution to Evidentiary Value Problem:

o        Solution: we should oversign (ülesigneerimine) long-term preserved documents before the previous signature becomes practically breakable

o        Re-signing must be performed with new, stronger algorithms, which last again 10-20 years (before a new oversigning)

o        Oversigning helps in the "hissing" case, not in the "bang" case

o        Preferably some more complex measures should be used for the "bang" case

       Essence of Oversigning:

o         Oversigning of a document can be considered as a statement: "I saw the document in a verifiable form and the mathematical algorithms of the previous signature are not yet broken. I confirm it with a new digital signature which is based on stronger mathematics"

o         It can be done automatically, without the direct presence of physical persons, as "over-sealing"

o         It creates comparison and verification possibilities for the future

o         The moment of oversigning can be proved by a corresponding time-stamp

       From a Paper Document to Digital and Vice Versa?:

o Main principle: this needs some instances (authorities) which must have certain responsibilities and abilities regarding digital signatures. These activities will always produce a copy of the original document, never an original

  Paper -> Digital. Must be scanned and equipped with the digital signature of the instance

  Digital -> Paper. Is printed out and equipped with the handwritten signature of the instance

Lecture 13 (Basics of Database and Network Security)

It's assumed that data is represented by a relational database (relatsiooniline andmebaas) – tables, their relationships, records, fields etc

       It's necessary to achieve confidentiality separately for different fields. We must ensure that access by different subjects to different fields can be realized

       It's determined outside the database who (which user groups) can read and create/change different data

       It's necessary to ensure integrity for both the (sometimes multiply-changeable) data and the whole database. Sometimes it's necessary to determine the whole history of a data entity (previous forms and all editors)

       Usually it's assumed that different database users have write access to the same data

       The Simplest Approach: an Application-Software Based:

o    The storing of different events (data adding, changing etc) is performed by the application software

o    Users authenticate themselves using their user names and passwords

o    The application software together with the database works on a server, which is directly accessible only by system administrators

o    Shortcoming: the database is stored (in unencrypted form) on the server and administrators can access (and also change) the data – the concentration of risks is quite high

       Errors in Application Software:

o    Actually every piece of application software has some errors (vulnerabilities). Sometimes these errors are critical, allowing an unauthorized subject to access or change something

o    Usually patches are issued in order to repair these vulnerabilities

o    Cruel reality: between the publishing of a vulnerability and the making of a patch, the software often remains unprotected against the corresponding attacks

       Integrity of Full Database:

o    A reality: if we equip each record (field) of a database with a (legal) digital signature, it ensures the integrity of the record, but doesn't ensure the integrity of the full database

o    There remains the possibility of erasing whole records (together with their digital signatures) in an unauthorized and undetected way

       Integrity versus Accountability:

o    Integrity (terviklus) means that we should determine the source (creator, creation time) of the data

o    Accountability (jälitatavus) means that we should know the whole history (all previous states, creators, changes, changing times etc) of a certain entity

o    If changing of previously stored data is allowed, then instead of integrity, accountability is often used and needed

       Ensuring the Integrity of Full Database:

o    Solution: in addition to digital signatures, we should equip the database with (cryptographic) mechanisms which tie different records to each other and therefore prevent their unnoticeable erasure

o    This can be done with a chain of cryptographic hashes (the next record must include the hash of the previous record) – a so-called "local time-stamp" (see the sketch below)

o    In this case we can never erase anything from the database
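A minimal sketch of such a hash chain over records in Python (a simplification: a real system would store the chain fields in the database itself and anchor them with signatures or external time-stamps):

import hashlib, json

def chain_records(records):
    # Each entry stores the hash of the previous entry, forming a "local time-stamp" chain.
    prev_hash, chained = "0" * 64, []
    for rec in records:
        entry = {"data": rec, "prev_hash": prev_hash}
        prev_hash = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = prev_hash
        chained.append(entry)
    return chained

def verify_chain(chained):
    # Any erased, altered or reordered record breaks the recomputed chain.
    prev_hash = "0" * 64
    for entry in chained:
        expected = hashlib.sha256(json.dumps(
            {"data": entry["data"], "prev_hash": prev_hash}, sort_keys=True).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

rows = chain_records(["record 1", "record 2", "record 3"])
print(verify_chain(rows))                  # True
print(verify_chain(rows[:1] + rows[2:]))   # False: a record was erased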

       Properties of Hash Queue Ensuring the Integrity:

o Advantages:

  Any erasure of a full record will always be noticeable (the chain of hashes will not verify)

  We can also give evidentiary value to negative query results

  The integrity of the records themselves can be protected by a digital signature

o Disadvantages:

  Needs the implementation of the hashes (hash chain) and their verification at the level of the database application software

       Ensuring the Confidentiality of a Database:

o    We cannot encrypt those attributes of a database which must be considered secondary keys (used as a basis for search)

o    These data must be available to the database engine (database environment) as plaintext

o    Ensuring the confidentiality of these data against database administrators needs a special access architecture, or is impossible

o    Other data (attributes that we don't consider secondary keys) can be replaced by ciphertext (and made unavailable to the database environment) with an appropriate key distribution system

       Database Operations Fail for Encrypted Data:

o From the confidentiality point of view there are two main operations in a database:

  Attribute A has the value X. Can be performed for both non-encrypted and encrypted data (ciphertexts are equal only in those cases when the plaintexts are equal)

  Attribute A is greater/smaller than X. Can be performed only when the database engine has access to the non-encrypted data

o Therefore a problem arises – we can't keep data on the server (cloud) side in such a way that the server (cloud) manager cannot access the non-encrypted data

       Solving of Encrypting Problem in Cloud:

o A promising solution is the shared security technique – data is divided and the pieces are stored in two or more locations so that they can be restored only on the client side, not on the server (cloud) side. It is done by methods which retain the basic database operations

       Shared Security – Possible Shortcoming?:

o When two clouds have common nodes, it can cause a weak place which is easy to hack


       Most-of-Used Practical Solution to a Database Confidentiality Problem – using HSM:

o Principle: data are stored on disk in encrypted form – there is a hardware security module (HSM, riistvaraline turvamoodul) attached to the database which enables enciphering/deciphering and generates/holds the corresponding key

       Principle of HSM for Securing Databases:

o    The database stores data to disk in encrypted form – encryption is done by a hardware security module (HSM) which is located between the disk and the database and keeps the encryption key using a non-reverse-engineerable technique (it is impossible to read the key out)

o    Shortcoming – an HSM is an expensive device

       Ensuring the Availability of a Database:

o    Is usually ensured by archiving or backing up (arhiveerimine, varundamine): we store the same data in many physical places

o    This allows to reduce the confidentiality risk

       Basics of “Network“ (Internet):

o    A contemporary WAN (Wide Area Network) is usually the Internet

o    The Internet is a network based on the TCP/IP protocol, where all transferred information is divided into (and managed as) IP packets which are considered and transferred separately

       Threats from the Internet:

o A symmetry principle: just as we can access the Internet (Internet services), a user from the Internet can access our computer or local network (the services available there)

       Shortcomings of an Open Internet Access:

o    Paradox: a hacker can easily access your system or network

o    On a typical computer/LAN a couple of services/protocols operate, and some of them are certainly attackable and have some vulnerabilities

       A Typical Solution: Firewall:

o    A multifunctional firewall (tulemüür) is a special gateway between the Internet and your computer or local network

o    May be either a hardware device or a software product

o    As a rule, it controls all the traffic between the Internet and the physically secured computer or local network, allowing only some services/protocols in a pre-defined manner

o    As a hardware device (local network separation), it uses proxies for services and allows an independent address space to be used behind the firewall

       Main Shortcoming of a Firewall:

o    For authorized users it hinders access to the local resources (local network) from other parts of the Internet

o    Conclusion: it restricts the Internet-related remote access possibilities (virtual office, telecommuting, etc)

       Solution for a Remote Access: Encryption and Signing:

o    Reality: typical Internet services (protocols) – http, telnet, ftp, smtp – are not secure, i.e. they do not allow secure and authenticated communication. They can easily be both eavesdropped on and changed by a classical man-in-the-middle attack

o    Hint for secure remote access: we should use both encryption (protects confidentiality) and signing (protects integrity)

       Firewall + Secure Remote Access Client:

o Secure Remote Access Client (turvaline kaugpöördusklient) uses encryption and signing of the transferred data, ensuring both the confidentiality and integrity of communication

       Virtual Private Network:

o    A typical Secure Remote Access Client is a suitable solution when we have one physical (physically protected) local area network and a lot of remote clients in different places (an example – a company and its telecommuters)

o    But another problem arises – a company with several (physically protected) local networks in different places which we wish to use as a single system with its services, resources etc.

o    For a typical end user all the different physical networks together seem to be one big local network

       Main (Classical) Tools for Network Security:

o    Firewall (tulemüür) for a secure connection of a local network (single computer) to Internet

o    Secure Remote Access Client (turvaline kaugtööklient), which allows a secure connection that may go even through firewalls etc and enables authentication of the related parties

o    Virtual Private Networks (virtuaalsed privaatvõrgud) which can connect different physically secured networks into one unique virtual network

       Necessary Additional Components for Network Security:

o    Password management (paroolihaldus): who generates them, how they are stored, transferred and used, etc

o    Key management (võtmehaldus): who generates them, how they are stored and kept, their relationship with passwords and devices, etc

o    Authentication means (autentimisvahendid): non-reverse-engineerable chipcards, HSMs, biometrics, passwords, etc

o    Reminder: TLS (SSL) needs additional information (a certificate) during handshaking

Lecture 14 (Typical Best Practice of Security Management) - 6/12

       Security Management: What and Why?:

o Now we try to answer the following question: how (as a result of which activities) can a sufficient level of security be achieved?

o Information security management should always be a continuous process involving all phases of the information system; it's never a one-time action

       Aspects of IT Security Management:

o Change Management
o Configuration Management
o Risk Management:

  Monitoring

  Security Awareness

  Risk Analysis

       Typical Phases of Security Management:

1. Developing of (IT) Security Policy

2. Determining of roles and responsibilities inside the organization

3. Risk management, including the defining of protectable assets, threats, vulnerabilities and risks, and choosing the principles of applicable safeguards

4. Determining of principles of contingency planning and disaster recovery

5. Choosing and implementing the safeguards (performing of a security plan)

6. Implementing of a security awareness program

7. Follow-up activities (maintenance, monitoring, incident handling etc)

       (IT) Security Policy:

o    (IT) security policy is a set of general rules, guidelines and procedures which are directed towards IT assets administration, protection, and allocation inside the organization

o    Usually the role of IT in contemporary organization is so high, that IT security policy and general security policy (without the special stressing of IT) are undistinguishable from each other and form a common document

       Essential Role of Business Management:

o    Only business management (usually at CEO level) knows how different aspects of different assets affect the corporate goals of the organisation and to what extent

       Four Important Properties of (IT) Security Policy:

o    Should provide general objectives for all assets, achieving consistency

o    Should clearly define the relationship between the security policy, IT policy and marketing policy

o    Should clearly determine the ways in which security problems will be solved in different areas/data (detailed risk analysis, baseline approach etc)

o    Should clearly determine responsibilities and duties inside the organization(s)

       Typical Content of Security Policy:

o Introduction
o Security Objectives and Principles
o Security Organization and Infrastructure
o IT Security/Risk Analysis and Management Software
o Information Sensitivity and Risks
o Hardware and Software Security
o Communication Security
o Physical Security:

  location of facilities

  building security and protection

  protection of building services

  unauthorised occupation

  PC/workstation/smartphone and media accessibility

  protection of staff

  protection against the spread of fire

  water/liquid and lightning protection

  hazard detection and reporting

  protection of equipment against theft

  protection of the environment

  service and maintenance control

o Personnel security

o Document/Media Security:

  storage

  handling

  disposal

o Contingency planning

o Teleworking

o Outsourcing Policy

o Change control

o Appendices:

  A. List of Security Guides

  B. Legislation and Regulation

  C. Corporate IT Security Officer Terms of Reference

  D. Terms of Reference for IT Security Forum or Committee

  E. Contents of an IT Subsystem Security Policy

       Roles of (IT) Security Forum:

o    Typically, there are six main roles:

1. To advise the IT steering committee regarding strategic security planning

2. To formulate a corporate IT security policy in support of the IT strategy and obtain approval from the IT steering committee

3. To monitor the implementation of the IT security program

4. To review the effectiveness of the corporate IT security policy

5. To promote awareness of IT security issues

6. To advise on resources (people, money, knowledge, etc.) needed to support the planning process and the IT security program implementation

            Typical Roles of IT Security Officer:

o    Assembling and maintaining the IT security policy and directives

o    Implementation of risk analysis

o    Formulating and implementing the security program

o    Liaison with and reporting to the IT security forum and the corporate security officer

o    Coordinating incident investigations

o    Managing corporate-wide security awareness program

o    Determining the terms of reference for IT project and system security officers (if these systems exist)

       Risk Management Involves Four Main Activities:

o    Determining a suitable risk management strategy with regard to the IT security policy (four main alternatives)

o    Choosing appropriate safeguards using one of the above-mentioned risk management strategies

o    Formulating the subsystems' IT security policies (if these exist) and possibly changing the general IT security policy

o    Compilation of IT security plans for implementing chosen safeguards

       Goal of Risk Management:

o    Goal of risk management: to implement exactly such a set of safeguards which brings the security risk (the significance of the threats and their realization probability through vulnerabilities) down to the level of the accepted residual risk

o    Typically these acceptable risks are determined by the business process and given to IT specialists (IT security specialists) as existing values

       Main Alternatives of Risk Management:

o    Detailed risk analysis (detailne riskianalüüs):

  An ideal case

o    Baseline approach (etalonturbe metoodika):

  A convenient way in a lot of practical cases

o    Mixed approach (segametoodika):

  Takes the best elements from both the baseline and the detailed risk analysis, combining them

o    Informal approach (mitteformaalne metoodika):

  A real practical alternative to systematic (formal) approaches

       Security Plan:

o    (IT) security plan is a document which determines the concrete activities and responsibilities for realizing all necessary safeguards

o    Should typically involve four components:

  Estimation of the installation and running costs for these safeguards

  List of activities necessary for implementing the determined safeguards

  Detailed working plan for implementing safeguards, with responsibilities, schedule, budget and priorities

  List of necessary follow-up activities
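
o    One possible way to capture the four components above as a structured record; a sketch only, with hypothetical field names and values.

# Hypothetical structure for a security plan; all names and values are illustrative.
security_plan = {
    "cost_estimate": {"installation_eur": 12000, "annual_running_eur": 3000},
    "activities": ["procure firewall", "configure backup server", "write user guideline"],
    "working_plan": [
        {"activity": "procure firewall", "responsible": "IT security officer",
         "deadline": "2025-03-31", "budget_eur": 8000, "priority": "high"},
    ],
    "follow_up": ["maintenance", "compliance checking", "monitoring",
                  "incident handling", "change management"],
}
print(len(security_plan["activities"]), "planned activities")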

       Implementing of Safeguards:

o    The IT security officer is responsible for implementing the safeguards. Typically, the following three aspects should be taken into account:

  Cost of safeguards should always remain within pre-determined (agreed) limits

  Safeguards should always be implemented and installed correctly, according to the information security plan (and policy)

  Safeguards should be used (maintained) correctly, according to the information security plan (and policy)

       Security Awareness Program:

o Security awareness program must involve all employees, including corporate management and all non-IT employees. It must cover both the main topics of the IT security policy and each employee's concrete scope of work and responsibilities

       Confirmation of Safeguards:

o When all safeguards are implemented, it is necessary to confirm the set of safeguards officially (by an act signed preferably by the CEO)

       Follow-Up Activities (After Development):

o Typically five important types of activities:

1. maintenance

2. security compliance checking

3. monitoring

4. incident handling

5. change management

            Maintenance:

o Security maintenance activities include five components:

1. Periodic inspection of all resources

2. Checking of log files (see the sketch below)

3. Modifying parameters to reflect changes and additions

4. Re-initiation of seed values or counters

5. Updating with new versions

o NB! Security is always a continuous process, never a single project carried out as a one-off campaign!
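
o    A small Python sketch of one maintenance task, checking of log files: counting failed SSH logins per source address. The log path and line format are assumptions, not part of the lecture material.

from collections import Counter
import re

pattern = re.compile(r"Failed password .* from (\d+\.\d+\.\d+\.\d+)")
failed_by_ip = Counter()

with open("/var/log/auth.log") as log:           # hypothetical log location and format
    for line in log:
        match = pattern.search(line)
        if match:
            failed_by_ip[match.group(1)] += 1    # count failures per source IP

for ip, count in failed_by_ip.most_common(5):    # report the five noisiest sources
    print(f"{ip}: {count} failed logins")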

       Security Compliance Checking (Audit):

o    Security compliance checking is checking a system's compliance against the security policy, the security plan and all chosen safeguards

       Security Monitoring:

o    Security monitoring gives information (usually to corporate management, i.e. at CEO level) typically about four circumstances:

  What has been achieved (compared to defined targets and deadlines)

  Are the performed tasks/activities satisfactory or not? (What is missing? What can be done better?)

  What should be done and with which priorities?

  Is there a need to review something already done?

       Incident Handling:


o    Obligations and procedures for handling security incidents must involve the relevant personnel and should be included in their job descriptions

o    Determined and tested (operable) communication channel for reporting security incidents for all employees

o    A functioning channel for developers and administrators to get operative information about new vulnerabilities

o    Contingency plan together with typical incident recovery plans

o    Procedures for documenting and assessing incidents (job of security officer)

o    Incident analysis must always be documented (and later discussed), including the following aspects (a documentation sketch follows below):

  What happened and when?

  Did the staff follow the plan?

  Did the staff have the necessary information at the right time?

  What should have been done differently?

o    Following an incident investigation, usually one of two conclusions is reached:

  A previously accepted residual risk has been realized. There is no need to do anything else in the future

  A previously unknown and undocumented risk was realised. Substantial changes must be made to future security management (new measures, modification of the recovery plan, modification of the security policy, etc.)
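
o    A hypothetical structure for documenting one incident along the aspects listed above; field names and values are invented for the example.

incident_report = {
    "what_and_when": "ransomware detected on a file server, 2025-02-10 08:15 EET",
    "plan_followed": True,
    "information_available_in_time": False,
    "done_differently": "isolate the infected host earlier",
    "conclusion": "previously unknown risk realised -> update recovery plan and security policy",
}
print(incident_report["conclusion"])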

       Change Management:

o    Involves all activities, features, objects etc.:

  new procedures

  new properties

  software updates

  hardware revisions

  new users to include external groups or anonymous groups

  additional networking and interconnection

       Four Key Aspects of Security Management:

o    Successful realization of security management is a necessary prerequisite for successful organisational (information) security

o    Security management must involve the whole organisation, including non-IT workers and units

o    The initiative for security management must come from corporate management, which must be involved at the conceptual level (security policy)

o    Certain institutions, documentation, responsibilities, etc. are necessary

Lecture 15 (Protection of Personal Data; GDPR) - 13/12

       Why to Set Restrictions to Personal Data Processing?:

o    In order to protect the privacy of persons: the contemporary digital and networked world allows very fast and complex searching across different databases, including details reflecting the privacy of persons.

       Strasbourg Convention as a Basis of Personal Data Protection:

o    January 28th, 1981, ETS 108

o    The purpose of this convention was to secure, in the territory of each Party, for every individual, whatever his nationality or residence, respect for his rights and fundamental freedoms, and in particular his right to privacy, with regard to automatic processing of personal data relating to him ("data protection").

       EU Directive 95/46/EC and Regulation (EU) 2016/679:

o    Directive 95/46/EC was adopted by the European Parliament and the Council on October 24th, 1995

o    It provided a good practice of personal data protection in Europe, including Estonia, which took it over in the Estonian Personal Data Protection Act

o    In spring 2016 it was replaced by the new EU Regulation 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data (GDPR). GDPR has applied since May 2018

       What is EU-based General Data Protection Regulation (GDPR)?:

o    The European Parliament adopted a new common EU-wide General Data Protection Regulation (GDPR) 2016/679 in April 2016

o    GDPR became applicable throughout the whole European Union in May 2018, two years after its adoption

       What are Personal Data?:

o    Personal data is any information relating to an identified or identifiable natural person (‘data subject’)

o    An identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person

       When GDPR does not apply:

o    GDPR applies to the processing of personal data wholly or partly by automated means and to the processing other than by automated means of personal data which form part of a filing system or are intended to form part of a filing system

o    GDPR does not apply in the following four cases:

  In the course of an activity which falls outside the scope of EU law;

  By the Member States when carrying out activities which fall within the scope of the EU Common Foreign and Security Policy (Chapter 2 of Title V of the TEU)

  Personal data are processed by a natural person in the course of a purely personal or household activity;

  Personal data are processed by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security

       Pseudonymisation of personal data:

o    Pseudonymisation of personal data is processing of personal data in such a way that the personal data can no longer be linked to a specific person without the use of additional information, which is kept separately
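
o    A minimal Python sketch of one possible pseudonymisation technique (keyed hashing with HMAC-SHA-256); GDPR does not prescribe any particular mechanism, and the key and record below are hypothetical.

import hashlib
import hmac

SECRET_KEY = b"keep-this-key-separately-and-secret"      # the 'additional information'

def pseudonymise(identifier: str) -> str:
    """Return a stable pseudonym for a direct identifier."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"name": "Mari Maasikas", "diagnosis": "J06.9"}  # hypothetical record
pseudonymised = {"person_id": pseudonymise(record["name"]),
                 "diagnosis": record["diagnosis"]}
print(pseudonymised)

o    Without the key the pseudonym cannot be linked back to the person; whoever holds the key can re-identify, which is why the key must be stored separately from the pseudonymised data set.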

       What is processing of personal data?:

o    Processing of personal data is any operation or set of operations which is performed on personal data or on sets of personal data, whether or not by automated means

       Controller of personal data:

o    Controller of personal data (vastutav töötleja) is the natural or legal person, public authority, agency or other body which, alone or jointly with others, determines the purposes and means of the processing of personal data

       (Authorized) processor of personal data:

o    (Authorized) processor (volitatud töötleja) is a natural or legal person, public authority, agency or other body which processes personal data on behalf of the controller

       Third party:

o    Third party is a natural or legal person, public authority, agency or body other than the data subject, controller, processor and persons who, under the direct authority of the controller or processor, are authorised to process personal data

       Personal data processing principles:

o    Transparency:

  Personal data shall be processed lawfully, fairly and in a transparent manner in relation to the data subject

o    Legality and purposefulness:

  Personal data shall be collected for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes. Further processing for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes shall not be considered to be incompatible with the initial purposes

o    Minimality:

  Personal data shall be adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed

o    Accuracy:


  Personal data shall be accurate and, where necessary, kept up to date; every reasonable step must be taken to ensure that personal data that are inaccurate, having regard to the purposes for which they are processed, are erased or rectified without delay

o    Storage limitation:

  Personal data shall be kept in a form which permits identification of data subjects for no longer than is necessary for the purposes for which the personal data are processed.

o    Security (integrity and confidentiality):

  Personal data shall be processed in a manner that ensures appropriate security of the personal data, including protection against unauthorised or unlawful processing and against accidental loss, destruction or damage, using appropriate technical or organisational measures

       Transparency:

o    Transparency (in the context of GDPR) means that any information addressed to the public or to the data subject must be concise, easily accessible and easy to understand

       Lawfulness of personal data processing:

o    Data subject has given consent to the processing of his or her personal data for one or more specific purposes

o    Processing is necessary for the performance of a contract to which the data subject is party or in order to take steps at the request of the data subject prior to entering into a contract

o    Processing is necessary for compliance with a legal obligation to which the controller is subject

o    Processing is necessary in order to protect the vital interests of the data subject or of another natural person

o    Processing is necessary for the performance of a task carried out in the public interest or in the exercise of official authority vested in the controller

o    Processing is necessary for the purposes of the legitimate interests pursued by the controller or by a third party

       Consent of data subject:

o    Consent of the data subject (andmesubjekti nõusolek) is any freely given, specific, informed and unambiguous indication of the data subject's wishes by which he or she, by a statement or by a clear affirmative action, signifies agreement to the processing of personal data relating to him/her

o    Where processing is based on consent, the controller shall be able to demonstrate that the data subject has consented to processing of his or her personal data

o    The data subject shall have the right to withdraw his or her consent at any time. The withdrawal of consent shall not affect the lawfulness of processing based on consent before its withdrawal. Prior to giving consent, the data subject shall be informed thereof. It shall be as easy to withdraw as to give consent

o    When assessing whether consent is freely given, utmost account shall be taken of whether, inter alia, the performance of a contract, including the provision of a service, is conditional on consent to the processing of personal data that is not necessary for the performance of that contract

       Special categories of personal data:

o    Special categories of personal data involve the following eight types of personal data:

o    data revealing racial or ethnic origin

o    data revealing political opinions

o    data revealing religious or philosophical beliefs

o    data concerning trade union membership

o    genetic data

o    biometric data for the purpose of uniquely identifying a natural person

o    data concerning health

o    data concerning a natural person's sex life or sexual orientation

       How should special categories of personal data be processed?:

o    Processing of special categories of personal data is covered by rules different from those applying to the rest of personal data. Only ten specially designated processing cases are allowed

o    The six basic rules for so-called "ordinary" personal data do not apply to special categories of personal data;

o    EU Member States may impose additional restrictions on special categories of personal data

       Ten cases when the processing of special categories of personal data is allowed:

o    Data subject has given explicit consent to the processing of those personal data for one or more specified purposes

o    Processing is necessary for the purposes of carrying out the obligations and exercising specific rights of the controller or of the data subject in the field of employment and social security and social protection law

o    Processing is necessary to protect the vital interests of the data subject or of another natural person where the data subject is physically or legally incapable of giving consent

o    Processing is carried out in the course of its legitimate activities with appropriate safeguards by a foundation, association or any other not-for-profit body with a political, philosophical, religious or trade union aim

o    Processing relates to personal data which are manifestly made public by the data subject

o    Processing is necessary for the establishment, exercise or defence of legal claims or whenever courts are acting in their judicial capacity

o    Processing is necessary for reasons of substantial public interest

o    Processing is necessary for the purposes of preventive or occupational medicine

o    Processing is necessary for reasons of public interest in the area of public health

o    Processing is necessary for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes

       Notification obligation regarding rectification or erasure of personal data or restriction of processing:

o    Controller shall communicate any rectification or erasure of personal data or restriction of processing to each recipient to whom the personal data have been disclosed

       Data subject right to erasure personal data (right to be forgotten):

o    Data subject shall have the right to obtain from the controller the erasure of personal data where one of the following grounds applies:

  Personal data are no longer necessary in relation to the purposes for which they were collected or otherwise processed

  Data subject withdraws consent on which the processing is based and there is no other legal ground for the processing;

  There are no overriding legitimate grounds for the processing

  Personal data have been unlawfully processed

  Personal data have to be erased for compliance with a legal obligation in Union or Member State law to which the controller is subject;

  Personal data have been collected in relation to the offer of information society services

o    When the right to erasure shall not apply:

  For exercising the right of freedom of expression and information

  For compliance with a legal obligation which requires processing to which the controller is subject

  For the performance of a task carried out in the public interest or in the exercise of official authority vested in the controller

  For reasons of public interest in the area of public health

  For archiving purposes in the public interest, scientific or historical research purposes or statistical purposes in so far as the right is likely to render impossible or seriously impair the achievement of the objectives of that processing

  For the establishment, exercise or defence of legal claims

       Eight mandatory obligations of processor:

o    Personal data may be processed by processor only on the basis of documented instructions from the controller (exception may be made for processing on the basis of legislation)

o    It is the responsibility of the processor to ensure that persons processing personal data have undertaken to comply with the confidentiality requirement or are subject to other relevant statutory confidentiality obligations.

o    The processor is obliged to monitor all technical security measures provided for in the GDPR

o    Processor is allowed to involve another processor, but this involves a number of administrative security requirements

o    Processor must, as far as possible (taking into account the nature of the processing of personal data), assist the controller by taking appropriate technical and organizational measures to fulfill the obligation of the controller to respond to requests by the data subject

o    Processor must assist the controller in complying with the obligations relating to documenting and implementing personal data security measures

o    At the choice of the controller, deletes or returns all the personal data to the controller after the end of the provision of services relating to processing, and deletes existing copies unless Union or Member State law requires storage of the personal data

o    Makes available to the controller all information necessary to demonstrate compliance with the obligations and allow for and contribute to audits, including inspections, conducted by the controller or another auditor mandated by the controller.

       Registration of personal data protection:

o    Controller must keep records documenting the following seven areas (a structured sketch follows the list below):

  Name and contact details of the controller and, where applicable, the joint controller, the controller's representative and the data protection officer

  The purposes of the processing

  Description of the categories of data subjects and of the categories of personal data

  Categories of recipients to whom the personal data have been or will be disclosed including recipients in third countries or international organisations

  Where applicable, transfers of personal data to a third country or an international organisation, including the identification of that third country or international organisation and the documentation of suitable safeguards

  Where possible, the envisaged time limits for erasure of the different categories of data

  Where possible, a general description of the technical and organisational security measures
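
o    A sketch of such a record covering the seven areas above; controller name, purposes and retention periods are hypothetical.

processing_record = {
    "controller": {"name": "Example OÜ", "contact": "dpo@example.ee"},
    "purposes": ["payroll administration"],
    "data_subjects_and_categories": {"employees": ["name", "personal code", "salary"]},
    "recipients": ["tax authority", "payroll service provider"],
    "third_country_transfers": None,
    "erasure_time_limits": {"salary data": "7 years after end of employment"},
    "security_measures": ["access control", "encryption at rest", "backups"],
}
print(list(processing_record.keys()))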

       Who should designate a data protection officer?:

o    All public sector authorities and/or bodies, as well as those who perform public tasks

o    All data processors whose main activity is the regular and systematic monitoring of data subjects on a large scale


o    All processors whose duties involve:

  processing on a large scale of special categories of data

  processing of personal data relating to criminal convictions and offences

       Who can be data protection officer?:

o    Data protection officer may be:

  data processing officer (position)

  data processing division (department)

  external legal entity (under an outsourcing contract)

       Tasks of data protection officer:

o    To inform and advise the controller or the processor and the employees who carry out processing of their obligations pursuant to GDPR and other protection provisions

o    To monitor compliance with GDPR and with the policies of the controller or processor in relation to the protection of personal data, including the assignment of responsibilities

o    Awareness-raising and training of staff involved in processing operations, and the related audits

o    To provide advice where requested as regards the data protection impact assessment and monitor its performance

o    To cooperate with the data protection supervisory authority

o    To act as the contact point for the supervisory authority on issues relating to processing, including the prior consultation, and to consult, where appropriate, with regard to any other matter.

       (Technical) security of personal data processing:

o    Controller and processor shall implement appropriate technical and organisational measures to ensure a level of security appropriate to the risk, in the following four areas (an encryption sketch follows the list below):

  Pseudonymisation and encryption of personal data

  Ability to ensure the ongoing confidentiality, integrity, availability and resilience of processing systems and services

  Ability to restore the availability and access to personal data in a timely manner in the event of a physical or technical incident

  Process for regularly testing, assessing and evaluating the effectiveness of technical and organisational measures for ensuring the security of the processing
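
o    A minimal sketch of encrypting personal data at rest with the third-party Python package cryptography (Fernet, i.e. AES-CBC with HMAC); GDPR itself does not prescribe a specific algorithm, and the record below is hypothetical. Key management matters most: the key must be stored separately from the data and backed up, so that availability can be restored.

from cryptography.fernet import Fernet

key = Fernet.generate_key()                      # in practice: load from a key management service
cipher = Fernet(key)

plaintext = b"Mari Maasikas;mari@example.com"    # hypothetical personal data record
token = cipher.encrypt(plaintext)                # store only the ciphertext
restored = cipher.decrypt(token)                 # with the key, availability can be restored
assert restored == plaintext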

       Notification of personal data breach:

o    In the case of a personal data breach, the controller shall without undue delay and, where feasible, not later than 72 hours after having become aware of it, notify the personal data breach to the supervisory authority
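
o    A trivial sketch that makes the 72-hour rule concrete; the timestamp is hypothetical.

from datetime import datetime, timedelta, timezone

became_aware = datetime(2025, 3, 1, 14, 30, tzinfo=timezone.utc)   # moment of becoming aware
deadline = became_aware + timedelta(hours=72)                      # latest time to notify
print("Notify supervisory authority by:", deadline.isoformat())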

       Data protection impact assessment:

o    The data protection impact assessment has to be carried out on a mandatory basis by the controller in the following three cases:

  For systematic and extensive evaluation of personal aspects relating to natural persons which is based on automated processing

  For processing on a large scale of special categories of data or of personal data relating to criminal convictions and offences

  For systematic monitoring of a publicly accessible area on a large scale

o    Assessment shall contain at least the following four items:

  Systematic description of the envisaged processing operations and the purposes of the processing, including, where applicable, the legitimate interest pursued by the controller

  Assessment of the necessity and proportionality of the processing operations in relation to the purposes

  Assessment of the risks to the rights and freedoms of data subjects

  Measures envisaged to address the risks, including safeguards, security measures and mechanisms to ensure the protection of personal data and to demonstrate compliance with GDPR

       Approved codes of conduct for typical situations:

o    Drafts of codes for the processing of personal data may be made by anyone, but the codes must be approved by the data protection supervisory authority. They then become official codes of conduct that are public and exemplary for all processors

       Cross-border movement of personal data - general rule:

o    Transmission of personal data to third countries is generally allowed to those countries or sectors that the European Commission has designated as having an adequate level of data protection. No separate permission is required for this

       Seven important aspects of GDPR from the data processor point of view:

o    Determining personal data and special categories of personal data

o    Obligations of the processor

o    (Personal) data protection impact assessment

o    Management approval and ongoing information

o    Reviewing of usable technologies

o    GDPR implementation plan based on threats and risks

o    Data protection officer and his/her duties

       Personal data protection certification:

o    Certification is optional and will only give you a specific label, seal, or certificate

o    There is no direct legal effect attached to the certificate. It is likely that good practices will eventually evolve as to what one or another certificate will mean in practice