Tuesday, 3 April 2012

Flash memory


Flash memory is a non-volatile computer storage chip that can be electrically erased and reprogrammed. It was developed from EEPROM (electrically erasable programmable read-only memory) and must be erased in fairly large blocks before these can be rewritten with new data. The high density NAND type must also be programmed and read in (smaller) blocks, or pages, while the NOR type allows a single machine word (byte) to be written or read independently.
The NAND type is primarily used in memory cards, USB flash drives, solid-state drives, and similar products, for general storage and transfer of data. The NOR type, which allows true random access and therefore direct code execution, is used as a replacement for the older EPROM and as an alternative to certain kinds of ROM applications. However, NOR flash memory may emulate ROM primarily at the machine code level; many digital designs need ROM (or PLA) structures for other uses, often at significantly higher speeds than (economical) flash memory may achieve. NAND or NOR flash memory is also often used to store configuration data in numerous digital products, a task previously made possible by EEPROMs or battery-powered static RAM.
Example applications of both types of flash memory include personal computers, PDAs, digital audio players, digital cameras, mobile phones, synthesizers, video games, scientific instrumentation, industrial robotics, medical electronics, and so on. In addition to being non-volatile, flash memory offers fast read access times, as fast as dynamic RAM, although not as fast as static RAM or ROM. Its mechanical shock resistance helps explain its popularity over hard disks in portable devices, as does its high durability: it can withstand high pressure, extreme temperatures, and immersion in water.
Although flash memory is technically a type of EEPROM, the term "EEPROM" is generally used to refer specifically to non-flash EEPROM which is erasable in small blocks, typically bytes. Because erase cycles are slow, the large block sizes used in flash memory erasing give it a significant speed advantage over old-style EEPROM when writing large amounts of data. Flash memory now costs far less than byte-programmable EEPROM and has become the dominant memory type wherever a significant amount of non-volatile, solid state storage is needed.

Information repository


An information repository is a straightforward way to deploy a secondary tier of data storage that can comprise multiple, networked data storage technologies running on diverse operating systems. Data that no longer needs to be in primary storage is protected, classified according to captured metadata, processed, de-duplicated, and then purged automatically, based on data service-level objectives and requirements. In information repositories, data storage resources are virtualized as composite storage sets and operate as a federated environment.
Information repositories were developed to mitigate problems arising from data proliferation and to eliminate the need to deploy separate storage solutions for each of the diverse storage technologies and operating systems in use. They feature centralized management for all deployed data storage resources. They are self-contained, support heterogeneous storage resources, support resource management to add, maintain, recycle, and terminate media, keep track of off-line media, and operate autonomously.


Since one of the main reasons for the implementation of an information repository is to reduce the maintenance workload placed on IT staff by traditional data storage systems, information repositories are automated. Automation is accomplished via policies that can process data based on time, events, data age, and data content. Policies manage the following:
File system space management
Irrelevant data elimination (mp3, games, etc.)
Secondary storage resource management
Data is processed according to media type, storage pool, and storage technology.
Because information repositories are intended to reduce IT staff workload, they are designed to be easy to deploy and offer configuration flexibility, virtually limitless extensibility, redundancy, and reliable failover.

Removable media


In computer storage, removable media refers to storage media which are designed to be removed from the computer without powering the computer off.
Some types of removable media are designed to be read by removable readers and drives. Examples include:
Optical discs (Blu-ray discs, DVDs, CDs)
Memory cards (CompactFlash card, Secure Digital card, Memory Stick)
Floppy disks / Zip disks
Magnetic tapes
Paper data storage (punched cards, punched tapes)
Some removable media readers and drives are integrated into computers, while others are themselves removable.
Removable media may also refer to some removable storage devices, when they are used to transport or store data. Examples include:
USB flash drives
External hard disk drives

Dynamic memory allocation


The task of fulfilling an allocation request consists of finding a block of unused memory of sufficient size. Even though this task seems simple, several issues make the implementation complex. One such problem is internal and external fragmentation, which arises when many small gaps between allocated memory blocks leave none of them large enough to fulfill a request. Another is that the allocator's metadata can inflate the size of (individually) small allocations; this effect can be reduced by chunking.
Usually, memory is allocated from a large pool of unused memory area called the heap (also called the free store). Since the precise location of the allocation is not known in advance, the memory is accessed indirectly, usually via a pointer reference. The precise algorithm used to organize the memory area and allocate and deallocate chunks is hidden behind an abstract interface and may use any of the methods described below.
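As a minimal sketch of this in C, using the standard library's malloc and free (the array size here is purely illustrative):

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* Request a block of unused memory from the heap; the allocator
       chooses its location and returns a pointer to it. */
    int *numbers = malloc(100 * sizeof *numbers);
    if (numbers == NULL) {          /* the allocator may fail to find a block */
        fprintf(stderr, "allocation failed\n");
        return 1;
    }
    for (int i = 0; i < 100; i++)   /* the memory is accessed indirectly */
        numbers[i] = i * i;
    printf("numbers[10] = %d\n", numbers[10]);
    free(numbers);                  /* return the block to the heap for reuse */
    return 0;
}

The program never learns where in the heap the block lives; it only holds the pointer the allocator returned.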


The dynamic memory allocation algorithm actually used can impact performance significantly. A study conducted in 1994 by Digital Equipment Corporation illustrates the overheads involved for a variety of allocators; the lowest average instruction path length required to allocate a single memory slot was 52 instructions (as measured with an instruction-level profiler on a variety of software).


Fixed-size-blocks allocation, also called memory pool allocation, uses a free list of fixed-size blocks of memory (often all of the same size). This works well for simple embedded systems where no large objects need to be allocated, but it suffers from fragmentation.
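A minimal sketch of such a memory pool in C, assuming a statically reserved arena and a singly linked free list threaded through the unused blocks (block size and count are arbitrary illustration values):

#include <stddef.h>

#define BLOCK_SIZE  64          /* every block has the same size */
#define BLOCK_COUNT 128

static _Alignas(max_align_t) unsigned char arena[BLOCK_SIZE * BLOCK_COUNT];
static void *free_list = NULL;  /* head of the list of unused blocks */

/* Link every block of the arena into the free list. */
void pool_init(void) {
    for (size_t i = 0; i < BLOCK_COUNT; i++) {
        void **block = (void **)&arena[i * BLOCK_SIZE];
        *block = free_list;     /* each free block stores a pointer to the next */
        free_list = block;
    }
}

/* Pop one fixed-size block, or return NULL if the pool is exhausted. */
void *pool_alloc(void) {
    void **block = free_list;
    if (block == NULL)
        return NULL;
    free_list = *block;
    return block;
}

/* Push a block back onto the free list. */
void pool_free(void *p) {
    *(void **)p = free_list;
    free_list = p;
}

Because every block is the same size, allocation and deallocation are constant-time pointer operations, which is why the scheme suits small embedded systems.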


For more details on this topic, see Buddy memory allocation.
In this system, memory is allocated from several pools of memory instead of just one, where each pool represents blocks of memory of a certain power of two in size. If a smaller size is requested than is available, the smallest available size is selected and it is then broken in two. One of the resulting halves is selected, and the process repeats (checking the size again and splitting if needed) until the block is just large enough. All new blocks that are formed during these splits are added to their respective memory pools for later use.
All the blocks of a particular size are kept in a sorted linked list or tree. When a block is freed, it is compared to its buddy. If they are both free, they are combined and placed in the next-largest-size buddy-block list (when a block is allocated, the allocator starts with the smallest sufficiently large block, to avoid needlessly breaking blocks).
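The splitting and coalescing described above can be sketched as follows in C. This is a simplified illustration that tracks free blocks as arena offsets in per-order arrays; a real buddy allocator would also record each allocation's order rather than asking the caller to supply it.

#define MIN_ORDER 4                 /* smallest block: 2^4 = 16 bytes    */
#define MAX_ORDER 10                /* whole arena:    2^10 = 1024 bytes */
#define MAX_FREE  (1 << (MAX_ORDER - MIN_ORDER))

/* free_blocks[o] holds the arena offsets of free blocks of size 2^o. */
static int free_blocks[MAX_ORDER + 1][MAX_FREE];
static int free_count[MAX_ORDER + 1];

void buddy_init(void) {
    free_blocks[MAX_ORDER][free_count[MAX_ORDER]++] = 0;   /* one big block */
}

/* Return the arena offset of a block of at least `size` bytes, or -1. */
int buddy_alloc(int size) {
    int order = MIN_ORDER;
    while ((1 << order) < size)
        order++;                                /* round up to a power of two */

    int o = order;
    while (o <= MAX_ORDER && free_count[o] == 0)
        o++;                                    /* smallest available free block */
    if (o > MAX_ORDER)
        return -1;

    int offset = free_blocks[o][--free_count[o]];
    while (o > order) {                         /* split until it just fits */
        o--;
        int buddy = offset + (1 << o);          /* upper half goes back on its list */
        free_blocks[o][free_count[o]++] = buddy;
    }
    return offset;
}

/* Free a block of the given order, coalescing with its buddy when possible. */
void buddy_free(int offset, int order) {
    while (order < MAX_ORDER) {
        int buddy = offset ^ (1 << order);      /* buddy differs in exactly one bit */
        int i, n = free_count[order];
        for (i = 0; i < n; i++)
            if (free_blocks[order][i] == buddy)
                break;
        if (i == n)
            break;                              /* buddy not free: stop coalescing */
        free_blocks[order][i] = free_blocks[order][--free_count[order]];
        offset &= ~(1 << order);                /* merged block starts at the lower buddy */
        order++;
    }
    free_blocks[order][free_count[order]++] = offset;
}

The buddy of a block is found by flipping a single address bit, which is what makes the coalescing check cheap.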

Memory leak


A memory leak can diminish the performance of the computer by reducing the amount of available memory. Eventually, in the worst case, too much of the available memory may become allocated and all or part of the system or device stops working correctly, the application fails, or the system slows down unacceptably due to thrashing.
Memory leaks may not be serious or even detectable by normal means. In modern operating systems, normal memory used by an application is released when the application terminates. This means that a memory leak in a program that only runs for a short time may not be noticed and is rarely serious.
Leaks that are much more serious include:
Where the program runs for an extended time and consumes additional memory over time, such as background tasks on servers, but especially in embedded devices which may be left running for many years.
Where new memory is allocated frequently for one-time tasks, such as when rendering the frames of a computer game or animated video.
Where the program is able to request memory — such as shared memory — that is not released, even when the program terminates.
Where memory is very limited, such as in an embedded system or portable device.
Where the leak occurs within the operating system or memory manager.
Where the leak is the responsibility of a system device driver.
Where the program runs on an operating system that does not automatically release memory on program termination. On such systems (AmigaOS, for example), lost memory can often only be reclaimed by a reboot.


The following example, written in pseudocode, is intended to show how a memory leak can come about, and its effects, without needing any programming knowledge. The program in this case is part of some very simple software designed to control an elevator. This part of the program is run whenever anyone inside the elevator presses the button for a floor.
When a button is pressed:
  Get some memory, which will be used to remember the floor number
  Put the floor number into the memory
  Are we already on the target floor?
    If so, we have nothing to do: finished
    Otherwise:
      Wait until the lift is idle
      Go to the required floor
      Release the memory we used to remember the floor number
The memory leak would occur if the floor number requested is the same floor that the lift is on; the condition for releasing the memory would be skipped. Each time this case occurs, more memory is leaked.
Cases like this wouldn't usually have any immediate effects. People do not often press the button for the floor they are already on, and in any case, the lift might have enough spare memory that this could happen hundreds or thousands of times. However, the lift will eventually run out of memory. This could take months or years, so it might not be discovered despite thorough testing.
The consequences would be unpleasant; at the very least, the lift would stop responding to requests to move to another floor. If other parts of the program need memory (a part assigned to open and close the door, for example), then someone may be trapped inside, since the software cannot open the door.
The memory leak lasts until the system is reset. For example: if the lift's power were turned off the program would stop running. When power was turned on again, the program would restart and all the memory would be available again, but the slow process of memory leak would restart together with the program, eventually prejudicing the correct running of the system.
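For readers who do know a programming language, the same mistake might look like this in C (the elevator-control functions here are hypothetical placeholders, not part of any real API):

#include <stdlib.h>

/* Hypothetical placeholders for the rest of the elevator software. */
extern int  current_floor(void);
extern void wait_until_idle(void);
extern void go_to_floor(int floor);

void on_button_press(int requested_floor) {
    int *floor = malloc(sizeof *floor);   /* get some memory for the floor number */
    if (floor == NULL)
        return;
    *floor = requested_floor;

    if (*floor == current_floor())
        return;                           /* BUG: returns without free(); the block leaks */

    wait_until_idle();
    go_to_floor(*floor);
    free(floor);                          /* memory is only released on this path */
}

Releasing the block on every path out of the function (or freeing it before the early return) removes the leak.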

Virtual memory


In computing, virtual memory is a memory management technique developed for multitasking kernels. This technique virtualizes a computer architecture's various forms of computer data storage (such as random-access memory and disk storage), allowing a program to be designed as though there is only one kind of memory, "virtual" memory, which behaves like directly addressable read/write memory (RAM).
Most modern operating systems that support virtual memory also run each process in its own dedicated address space. Each program thus appears to have sole access to the virtual memory. However, some older operating systems (such as OS/VS1 and OS/VS2 SVS) and even modern ones (such as IBM i) are single address space operating systems that run all processes in a single address space composed of virtualized memory.
Virtual memory makes application programming easier by hiding fragmentation of physical memory; by delegating to the kernel the burden of managing the memory hierarchy (eliminating the need for the program to handle overlays explicitly); and, when each process is run in its own dedicated address space, by obviating the need to relocate program code or to access memory with relative addressing.
Memory virtualization is a generalization of the concept of virtual memory.
Virtual memory is an integral part of a computer architecture; all implementations (excluding emulators and virtual machines) require hardware support, typically in the form of a memory management unit built into the CPU. Consequently, older operating systems, such as those for the mainframes of the 1960s, and those for personal computers of the early to mid 1980s (e.g. DOS), generally have no virtual memory functionality, though notable exceptions for mainframes of the 1960s include:
the Atlas Supervisor for the Atlas
MCP for the Burroughs B5000
TSS/360 and CP/CMS for the IBM System/360 Model 67
Multics for the GE 645
the Time Sharing Operating System for the RCA Spectra 70/46
The Apple Lisa is an example of a personal computer of the 1980s that features virtual memory.
Embedded systems and other special-purpose computer systems that require very fast and/or very consistent response times may opt not to use virtual memory due to decreased determinism; virtual memory systems trigger unpredictable interrupts that may produce unwanted "jitter" during I/O operations. This is because embedded hardware costs are often kept low by implementing all such operations with software (a technique called bit-banging) rather than with dedicated hardware.

Memory protection


Memory protection is a way to control memory access rights on a computer, and is a part of most modern operating systems. The main purpose of memory protection is to prevent a process from accessing memory that has not been allocated to it. This prevents a bug within a process from affecting other processes, or the operating system itself. Memory protection for computer security includes additional techniques such as address space layout randomization and executable space protection.


Segmentation refers to dividing a computer's memory into segments.
The x86 architecture has multiple segmentation features, which are helpful for using protected memory on this architecture. On the x86 processor architecture, the Global Descriptor Table and Local Descriptor Tables can be used to reference segments in the computer's memory. Pointers to memory segments on x86 processors can also be stored in the processor's segment registers. Initially x86 processors had 4 segment registers, CS (code segment), SS (stack segment), DS (data segment) and ES (extra segment); later another two segment registers were added – FS and GS.


In paging, the memory address space is divided into equal, small pieces, called pages. Using a virtual memory mechanism, each page can be made to reside in any location of the physical memory, or be flagged as being protected. Virtual memory makes it possible to have a linear virtual memory address space and to use it to access blocks fragmented over physical memory address space.
Most computer architectures based on pages, most notably x86 architecture, also use pages for memory protection.
A page table is used for mapping virtual memory to physical memory. The page table is usually invisible to the process. Page tables make it easier to allocate new memory, as each new page can be allocated from anywhere in physical memory.
With this design, it is impossible for an application to access a page that has not been explicitly allocated to it: any memory address the application may use, even a completely random one, either points to an allocated page or generates a page fault (PF). Unallocated pages simply have no addresses from the application's point of view.
As a side note, a page fault is not necessarily fatal. Page faults are used not only for memory protection but also in another interesting way: the OS may intercept the PF, load a page that had previously been swapped out to disk, and resume execution of the application that caused the fault. This way, the application receives the memory page as needed. This scheme, known as swapped virtual memory, allows in-memory data not currently in use to be moved to disk storage and back in a way that is transparent to applications, increasing overall memory capacity.
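A toy sketch in C of the translation a paging MMU performs, with a single-level table and invented sizes (real processors use multi-level tables and perform this lookup in hardware, assisted by a TLB):

#include <stdint.h>
#include <stdbool.h>

#define PAGE_SHIFT 12                     /* 4 KiB pages */
#define PAGE_SIZE  (1u << PAGE_SHIFT)
#define NUM_PAGES  1024                   /* size of this toy address space */

typedef struct {
    bool     present;                     /* is the page mapped/allocated?  */
    bool     writable;                    /* simple protection bit          */
    uint32_t frame;                       /* physical frame number          */
} pte_t;

static pte_t page_table[NUM_PAGES];

/* Translate a virtual address, or return -1 to stand in for a page fault. */
int64_t translate(uint32_t vaddr, bool write) {
    uint32_t page   = vaddr >> PAGE_SHIFT;        /* which page            */
    uint32_t offset = vaddr & (PAGE_SIZE - 1);    /* where within the page */

    if (page >= NUM_PAGES || !page_table[page].present)
        return -1;                        /* unallocated page: page fault  */
    if (write && !page_table[page].writable)
        return -1;                        /* protection violation          */

    return ((int64_t)page_table[page].frame << PAGE_SHIFT) | offset;
}

Any address whose page entry is not present falls through to the page-fault case, which is exactly the protection behaviour described above.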

Page address register

A page address register (PAR) contains the physical addresses of pages currently held in the main memory of a computer system. PARs are used in order to avoid excessive use of an address table in some operating systems. A PAR may check a page's number against all entries in the PAR simultaneously, allowing it to retrieve the page's physical address quickly. A PAR is used by a single process and is only used for pages which are frequently referenced (though these pages may change as the process's behaviour changes in accordance with the principle of locality). An example computer which made use of PARs is the Atlas.
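In hardware the page number is compared against every PAR entry simultaneously; the sequential loop below is only a software stand-in with invented structures, meant to show what is being looked up:

#include <stdint.h>

#define PAR_ENTRIES 32                    /* illustrative size */

typedef struct {
    int      valid;
    uint32_t page_number;                 /* page currently held in main memory */
    uint32_t frame_number;                /* its physical location              */
} par_entry_t;

static par_entry_t par[PAR_ENTRIES];

/* Return the physical frame for a page, or -1 if it is not in the PAR
   (the system would then fall back to the full address table). */
int64_t par_lookup(uint32_t page_number) {
    for (int i = 0; i < PAR_ENTRIES; i++)        /* done in parallel in hardware */
        if (par[i].valid && par[i].page_number == page_number)
            return par[i].frame_number;
    return -1;
}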

Static random-access memory


Static random-access memory (SRAM) is a type of semiconductor memory where the word static indicates that, unlike dynamic RAM (DRAM), it does not need to be periodically refreshed, as SRAM uses bistable latching circuitry to store each bit. SRAM exhibits data remanence, but is still volatile in the conventional sense that data is eventually lost when the memory is not powered.


Each bit in an SRAM is stored on four transistors that form two cross-coupled inverters. This storage cell has two stable states which are used to denote 0 and 1. Two additional access transistors serve to control the access to a storage cell during read and write operations. A typical SRAM uses six MOSFETs to store each memory bit. In addition to such 6T SRAM, other kinds of SRAM chips use 8T, 10T, or more transistors per bit. This is sometimes used to implement more than one (read and/or write) port, which may be useful in certain types of video memory and register files implemented with multi-ported SRAM circuitry.
Generally, the fewer transistors needed per cell, the smaller each cell can be. Since the cost of processing a silicon wafer is relatively fixed, using smaller cells and so packing more bits on one wafer reduces the cost per bit of memory.
Memory cells that use fewer than 6 transistors are possible — but such 3T or 1T cells are DRAM, not SRAM (even the so-called 1T-SRAM).
Access to the cell is enabled by the word line (WL in figure) which controls the two access transistors M5 and M6 which, in turn, control whether the cell should be connected to the bit lines, BL and its complement BL. They are used to transfer data for both read and write operations. Although it is not strictly necessary to have two bit lines, both the signal and its inverse are typically provided in order to improve noise margins.
During read accesses, the bit lines are actively driven high and low by the inverters in the SRAM cell. This improves SRAM bandwidth compared to DRAMs—in a DRAM, the bit line is connected to storage capacitors and charge sharing causes the bitline to swing upwards or downwards. The symmetric structure of SRAMs also allows for differential signaling, which makes small voltage swings more easily detectable. Another difference from DRAM that contributes to making SRAM faster is that commercial SRAM chips accept all address bits at once. By comparison, commodity DRAMs have the address multiplexed in two halves, i.e. higher bits followed by lower bits, over the same package pins in order to keep their size and cost down.

Stable storage


Stable storage is a classification of computer data storage technology that guarantees atomicity for any given write operation and allows software to be written that is robust against some hardware and power failures. To be considered atomic, upon reading back a just written-to portion of the disk, the storage subsystem must return either the write data or the data that was on that portion of the disk before the write operation.
Most computer disk drives are not considered stable storage because they do not guarantee atomic writes: a subsequent read of the just-written portion of the disk could return an error rather than either the new or the prior data.


Multiple techniques have been developed to achieve the atomic property from weakly atomic devices such as disks. Writing data to a disk in two places in a specific way is one technique and can be done by application software.
Most often though, stable storage functionality is achieved by mirroring data on separate disks via RAID technology (level 1 or greater). The RAID controller implements the disk writing algorithms that enable separate disks to act as stable storage. The RAID technique is robust against the failure of a single disk in an array, whereas the software technique of writing to separate areas of the same disk only protects against some kinds of internal media failures, such as bad sectors, in single-disk arrangements.
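A rough sketch of the software two-place-write technique mentioned above, using an ordinary file to stand in for two disk areas and a simple checksum to decide which copy is intact. The layout, record size, and checksum are all invented for illustration; a production implementation would also need to force data to the platters (e.g. with fsync) before writing the second copy.

#include <stdio.h>
#include <stdint.h>

#define REC_SIZE 512                       /* illustrative record size */

struct record {
    uint8_t  data[REC_SIZE];
    uint32_t checksum;
};

static uint32_t checksum(const uint8_t *p, size_t n) {
    uint32_t sum = 0;
    while (n--)
        sum = sum * 31 + *p++;             /* simple rolling checksum */
    return sum;
}

/* Write the record to two fixed locations, flushing after each copy. */
int stable_write(FILE *f, const struct record *r) {
    struct record copy = *r;
    copy.checksum = checksum(copy.data, REC_SIZE);
    for (int loc = 0; loc < 2; loc++) {
        if (fseek(f, loc * (long)sizeof copy, SEEK_SET) != 0) return -1;
        if (fwrite(&copy, sizeof copy, 1, f) != 1) return -1;
        if (fflush(f) != 0) return -1;     /* copy 1 must be durable before copy 2 */
    }
    return 0;
}

/* Read back whichever copy still has a valid checksum. */
int stable_read(FILE *f, struct record *out) {
    for (int loc = 0; loc < 2; loc++) {
        if (fseek(f, loc * (long)sizeof *out, SEEK_SET) != 0) continue;
        if (fread(out, sizeof *out, 1, f) != 1) continue;
        if (checksum(out->data, REC_SIZE) == out->checksum)
            return 0;                      /* this copy survived intact */
    }
    return -1;                             /* neither copy is usable */
}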

Memory management


Memory management is the act of managing computer memory. The essential requirement of memory management is to provide ways to dynamically allocate portions of memory to programs at their request, and to free them for reuse when no longer needed. This is critical to the computer system.
Several methods have been devised that increase the effectiveness of memory management. Virtual memory systems separate the memory addresses used by a process from actual physical addresses, allowing separation of processes and increasing the effectively available amount of RAM using paging or swapping to secondary storage. The quality of the virtual memory manager can have a big impact on overall system performance.


Mass storage


In computing, mass storage refers to the storage of large amounts of data in a persisting and machine-readable fashion. Devices and/or systems that have been described as mass storage include tape libraries, RAID systems, hard disk drives, magnetic tape drives, optical disc drives, magneto-optical disc drives, drum memory (historic), floppy disk drives (historic), punched tape (historic) and holographic memory (experimental). Mass storage includes devices with removable and non-removable media. It does not include random access memory (RAM), which is volatile in that it loses its contents after power loss.
The notion of "large" amounts of data is of course highly dependent on the time frame and the market segment, as mass storage device capacity has increased by many orders of magnitude since the beginnings of computer technology in the late 1940s and continues to grow; however, in any time frame, common mass storage devices have tended to be much larger and at the same time much slower than common realizations of the contemporaneous primary storage technology. The term mass storage was used in the PC marketplace for devices far smaller than devices that did not even count as mass storage in the mainframe marketplace.
Mass storage devices are characterized by:
Sustainable transfer speed
Seek time
Cost
Capacity
Today, magnetic disks are the predominant storage media in personal computers. Optical discs, however, are almost exclusively used in the large-scale distribution of retail software, music and movies because of the cost and manufacturing efficiency of the molding process used to produce DVD and compact discs and the nearly-universal presence of reader drives in personal computers and consumer appliances. Flash memory (in particular, NAND flash) has an established and growing niche as a replacement for magnetic hard disks in high performance enterprise computing installations because it has no moving parts (making it more robust) and has a much lower latency; as removable storage such as USB sticks, because in lower capacity ranges it can be made smaller and cheaper than hard disks; and on portable devices such as notebook computers and cell phones because of its smaller size and weight, better tolerance of physical stress caused by e.g. shaking or dropping, and low power consumption.
The design of computer architectures and operating systems is often dictated by the mass storage and bus technology of their time. Desktop operating systems such as Windows are now so closely tied to the performance characteristics of magnetic disks that it is difficult to deploy them on other media like flash memory without running into space constraints, suffering serious performance problems or breaking applications.

CAS latency


Column Address Strobe (CAS) latency, or CL, is the delay between the moment a memory controller tells the memory module to access a particular memory column and the moment the data from the given array location is available on the module's output pins. In general, the lower the CAS latency, the better.
In asynchronous DRAM, the interval is specified in nanoseconds. In synchronous DRAM, the interval is specified in clock cycles. Because the latency is dependent upon a number of clock ticks instead of an arbitrary time, the actual time for an SDRAM module to respond to a CAS event might vary between uses of the same module if the clock rate differs.
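For example, converting a latency given in clock cycles into time (the module parameters below are illustrative only):

#include <stdio.h>

int main(void) {
    double clock_mhz  = 200.0;  /* illustrative SDRAM clock frequency */
    int    cas_cycles = 3;      /* CL3 */

    double cycle_ns   = 1000.0 / clock_mhz;        /* one clock period in ns */
    double latency_ns = cas_cycles * cycle_ns;

    printf("CL%d at %.0f MHz = %.1f ns\n", cas_cycles, clock_mhz, latency_ns);
    /* The same CL3 module clocked at 166 MHz would instead take about 18 ns,
       which is why the actual response time varies with clock rate. */
    return 0;
}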


Dynamic RAM is arranged in a rectangular array. Each row is selected by a horizontal word line. Sending a logical high signal along a given row enables the MOSFETs present in that row, connecting each storage capacitor to its corresponding vertical bit line. Each bit line is connected to a sense amplifier which amplifies the small voltage change produced by the storage capacitor. This amplified signal is then output from the DRAM chip as well as driven back up the bit line to refresh the row.
When no word line is active, the array is idle and the bit lines are held in a precharged state, with a voltage halfway between high and low. This indeterminate signal is deflected towards high or low by the storage capacitor when a row is made active.
To access memory, a row must first be selected and loaded into the sense amplifiers. This row is then active and columns may be accessed for read or write.
The CAS latency is the delay between the time at which the column address and the column address strobe signal are presented to the memory module and the time at which the corresponding data is made available by the memory module. The desired row must already be active; if it is not, additional time is required.
As an example, a typical 1 GiB SDRAM memory module might contain eight separate one-gibibit DRAM chips, each offering 128 MiB of storage space. Each chip is divided internally into eight banks of 2^27 = 128 Mibits, each of which comprises a separate DRAM array. Each array contains 2^14 = 16384 rows of 2^13 = 8192 bits each. One byte of memory (from each chip; 64 bits total from the whole DIMM) is accessed by supplying a 3-bit bank number, a 14-bit row address, and a 10-bit column address.
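The addressing in this example can be sketched as slicing a flat 27-bit per-chip byte address into bank, row, and column fields. The field order chosen below is an assumption, since real memory controllers map address bits in various ways:

#include <stdio.h>
#include <stdint.h>

/* Field widths from the example: 3 bank bits, 14 row bits, 10 column bits. */
#define BANK_BITS 3
#define ROW_BITS  14
#define COL_BITS  10

int main(void) {
    uint32_t addr = 0x02ABCDEF;           /* a 27-bit byte address within one chip */

    uint32_t col  = addr & ((1u << COL_BITS) - 1);
    uint32_t row  = (addr >> COL_BITS) & ((1u << ROW_BITS) - 1);
    uint32_t bank = (addr >> (COL_BITS + ROW_BITS)) & ((1u << BANK_BITS) - 1);

    printf("bank %u, row %u, column %u\n",
           (unsigned)bank, (unsigned)row, (unsigned)col);
    return 0;
}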

Dynamic random-access memory


Dynamic random-access memory (DRAM) is a type of random-access memory that stores each bit of data in a separate capacitor within an integrated circuit. The capacitor can be either charged or discharged; these two states are taken to represent the two values of a bit, conventionally called 0 and 1. Since capacitors leak charge, the information eventually fades unless the capacitor charge is refreshed periodically. Because of this refresh requirement, it is a dynamic memory as opposed to SRAM and other static memory.
The main memory (the "RAM") in personal computers is Dynamic RAM (DRAM). It is the RAM in laptop and workstation computers as well as some of the RAM of video game consoles.
The advantage of DRAM is its structural simplicity: only one transistor and a capacitor are required per bit, compared to four or six transistors in SRAM. This allows DRAM to reach very high densities. Unlike flash memory, DRAM is volatile memory (cf. non-volatile memory), since it loses its data quickly when power is removed. The transistors and capacitors used are extremely small; billions can fit on a single memory chip.
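To make the refresh requirement described above concrete: a common scheme refreshes every row of the array within a fixed window. The 64 ms window and 8192-row count below are typical but assumed figures:

#include <stdio.h>

int main(void) {
    double refresh_window_ms = 64.0;   /* assumed: whole array refreshed every 64 ms */
    int    rows = 8192;                /* assumed row count */

    double per_row_us = refresh_window_ms * 1000.0 / rows;
    printf("one row refresh roughly every %.1f microseconds\n", per_row_us); /* ~7.8 us */
    return 0;
}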

Data Retention Directive


The Data Retention Directive, more formally "Directive 2006/24/EC of the European Parliament and of the Council of 15 March 2006 on the retention of data generated or processed in connection with the provision of publicly available electronic communications services or of public communications networks and amending Directive 2002/58/EC", is a directive issued by the European Union relating to telecommunications data retention. According to the directive, member states must store citizens' telecommunications data for between six and 24 months. Under the directive the police and security agencies will be able to request access to details such as the IP address and time of use of every email, phone call and text message sent or received. Permission to access the information will be granted only by a court.


The Data Retention Directive has sparked serious concerns from physicians, journalists, privacy and human rights groups, unions, IT security firms and legal experts.

Computer data storage


Computer data storage, often called storage or memory, refers to computer components and recording media that retain digital data. Data storage is a core function and fundamental component of computers.
In contemporary usage, 'memory' usually refers to semiconductor read-write random-access memory, typically DRAM (dynamic RAM), though it can also refer to other forms of fast but temporary storage. 'Storage' refers to storage devices and their media that are not directly accessible by the CPU (secondary or tertiary storage), typically hard disk drives, optical disc drives, and other devices slower than RAM but non-volatile (retaining contents when powered down). Historically, memory has been called core, main memory, real storage or internal memory, while storage devices have been referred to as secondary storage, external memory or auxiliary/peripheral storage.
The distinctions are fundamental to the architecture of computers. They also reflect an important technical difference between memory and mass storage devices, which has been blurred by the historical usage of the term storage. Nevertheless, this article uses the traditional nomenclature.
Many different forms of storage, based on various natural phenomena, have been invented. So far, no practical universal storage medium exists, and all forms of storage have some drawbacks. Therefore a computer system usually contains several kinds of storage, each with an individual purpose.
A modern digital computer represents data using the binary numeral system. Text, numbers, pictures, audio, and nearly any other form of information can be converted into a string of bits, or binary digits, each of which has a value of 1 or 0. The most common unit of storage is the byte, equal to 8 bits. A piece of information can be handled by any computer or device whose storage space is large enough to accommodate the binary representation of the piece of information, or simply data. For example, the complete works of Shakespeare, about 1250 pages in print, can be stored in about five megabytes (forty million bits) with one byte per character.
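The Shakespeare figure is simple arithmetic; the characters-per-page count below is an assumption chosen to match the estimate:

#include <stdio.h>

int main(void) {
    long pages          = 1250;        /* approximate length in print */
    long chars_per_page = 4000;        /* assumed characters per printed page */

    long bytes = pages * chars_per_page;       /* one byte per character */
    long bits  = bytes * 8;

    printf("%ld bytes (about %ld million bits)\n", bytes, bits / 1000000);
    return 0;
}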
The defining component of a computer is the central processing unit (CPU, or simply processor), because it operates on data, performs calculations (computes), and controls other components. In the most commonly used computer architecture, the CPU consists of two main parts: Control Unit and Arithmetic Logic Unit (ALU). The former controls the flow of data between the CPU and memory; the latter performs arithmetic and logical operations on data.
Without a significant amount of memory, a computer would merely be able to perform fixed operations and immediately output the result. It would have to be reconfigured to change its behavior. This is acceptable for devices such as desk calculators, digital signal processors, and other specialised devices. Von Neumann machines differ in having a memory in which they store their operating instructions and data. Such computers are more versatile in that they do not need to have their hardware reconfigured for each new program, but can simply be reprogrammed with new in-memory instructions; they also tend to be simpler to design, in that a relatively simple processor may keep state between successive computations to build up complex procedural results. Most modern computers are von Neumann machines.

Early case assessment


Early case assessment refers to estimating the risk (in time and money) of prosecuting or defending a legal case. Global organizations deal with legal discovery and disclosure requests for electronically stored information ("ESI") and paper documents on a regular basis.
Over 90% of all cases settle prior to trial. Oftentimes an organization will spend significant time and money on a case only to decide later that it wants to settle. Legal discovery costs are usually the most financially burdensome to both plaintiff and defendant. Often, particularly in cases in the United States, an opposing party will strategize on how to make it as difficult as possible for the other side to comply with the discovery process, including the time and cost required to respond to discovery requests. Because of this, organizations have a continued need to conduct early case assessment to determine the risks and benefits of taking a case to trial without painful settlement discussions.
Many service organizations, law firms, and corporations refer to early case assessment differently. Consultants hired by the corporation or law firm on a case manage cases on a risk basis. There also exist a number of software tools that assist in and help facilitate the process of early case assessment. Effective early case assessment might require the combination of professional expertise and software. This pairing, depending on the professional and tools used, can provide various degrees of early case assessment review. Early case assessment, as a managed process, often requires customization to each case and the client involved.
The early case assessment lifecycle will typically include all of the following:
Perform a risk-benefit analysis.
Place and manage a legal hold on potentially responsive documents (paper and ESI) in appropriate countries.
Preserve information abroad.
Gather relevant information for attorney and expert document review.
Process potentially relevant information for purposes of filtering, search term, or data analytics.
Information hosting for attorney and expert document review, commenting, redaction.
Produce documents to parties in the case.
Reuse information in future cases.
Early case assessment software is typically used by attorneys, corporate legal departments, risk managers, forensics teams, IT professionals and independent consultants to help them analyze unstructured electronically stored information.
The software approach to early case assessment typically includes the following:
Determine the source files to analyze.
Point the analysis tool to the files to be analyzed.
Set parameters for the assessment.
Allow the program to automatically scan and assess the data, which may be located on local hard drives, removable media, file servers, whole networks, etc.
Review reports generated by the software.

Information governance


Information governance, or IG, is an emerging term used to encompass the set of multi-disciplinary structures, policies, procedures, processes and controls implemented to manage information at an enterprise level, supporting an organization's immediate and future regulatory, legal, risk, environmental and operational requirements.


Because information governance is a relatively new concept, there is no standard definition as of yet. Gartner Inc., an information technology research and advisory firm, defines information governance as the specification of decision rights and an accountability framework to encourage desirable behavior in the valuation, creation, storage, use, archival and deletion of information. It includes the processes, roles, standards and metrics that ensure the effective and efficient use of information in enabling an organization to achieve its goals. 
As defined by information governance solutions provider RSD S.A., IG enforces desirable behavior for the creation, use, archiving, and deletion of corporate information. 
To technology and consulting corporation IBM, information governance is a holistic approach to managing and leveraging information for business benefits and encompasses information quality, information protection and information life cycle management. 
Regardless of the exact wording, definitions of IG tend to go quite a bit further than traditional records management in order to address all phases of the information life cycle. It incorporates privacy attributes, electronic discovery requirements, storage optimization, and metadata management. In essence, information governance is the superset encompassing each of these elements.


Records management deals with the retention and disposition of records. A record can either be a physical, tangible object, or digital information such as a database, application data, and e-mail. The lifecycle was historically viewed as the point of creation to the eventual disposal of a record. As data generation exploded in recent decades, and regulations and compliance issues increased, traditional records management failed to keep pace. A more comprehensive platform for managing records and information became necessary to address all phases of the lifecycle, which led to the advent of information governance. 
In 2003 the Department of Health in England introduced the concept of broad based information governance into the National Health Service, publishing version 1 of an online performance assessment tool with supporting guidance. The NHS IG Toolkit is now used by over 30,000 NHS and partner organisations, supported by an e-learning platform with some 650,000 users.
In 2008, ARMA International introduced the Generally Accepted Recordkeeping Principles®, or GARP®, and the subsequent GARP® Information Governance Maturity Model. The GARP® principles identify the critical hallmarks of information governance. As such, they apply to all sizes of organizations, in all types of industries, and in both the private and public sectors. Multi-national organizations can also use GARP® to establish consistent practices across a variety of business units. ARMA International recognized that a clear statement of "Generally Accepted Recordkeeping Principles®" (GARP®) would guide:
CEOs in determining how to protect their organizations in the use of information assets;
Legislators in crafting legislation meant to hold organizations accountable; and
Records management professionals in designing comprehensive and effective records management programs.
Information governance goes beyond retention and disposition to include privacy, access controls, and other compliance issues. In electronic discovery, or e-discovery, electronically stored information is searched for relevant data by attorneys and placed on legal hold. IG includes consideration of how this data is held and controlled for e-discovery, and also provides a platform for defensible disposition and compliance. Additionally, metadata often accompanies electronically stored data and can be of great value to the enterprise if stored and managed correctly. 
In 2011, the Electronic Discovery Reference Model (EDRM), in collaboration with ARMA International, published a white paper describing how the Information Governance Reference Model (IGRM) complements ARMA International's Generally Accepted Recordkeeping Principles (GARP®).
With all of these additional considerations that go beyond traditional records management, IG emerged as a platform for organizations to define policies at the enterprise level, across multiple jurisdictions. IG then also provides for the enforcement of these policies into the various repositories of information, data, and records.

Electronic Order of Battle


Generating an Electronic order of battle (EOB) requires identifying SIGINT emitters in an area of interest, determining their geographic location or range of mobility, characterizing their signals, and, where possible, determining their role in the broader organizational order of battle. EOB covers both COMINT and ELINT. The Defense Intelligence Agency maintains an EOB by location. The Joint Spectrum Center (JSC) of the Defense Information Systems Agency supplements this location database with five more technical databases:
FRRS: Frequency Resource Record System
BEI: Background Environment Information
SCS: Spectrum Certification System
EC/S: Equipment Characteristics/Space
TACDB: platform lists, sorted by nomenclature, which contain links to the C-E equipment complement of each platform, with links to the parametric data for each piece of equipment, military unit lists and their subordinate units with equipment used by each unit.




[Figure: EOB and related data flow]
For example, several voice transmitters might be identified as the command net (i.e., top commander and direct reports) in a tank battalion or tank-heavy task force. Another set of transmitters might identify the logistic net for that same unit. An inventory of ELINT sources might identify the medium- and long-range counter-artillery radars in a given area.
Signals intelligence units will identify changes in the EOB, which might indicate enemy unit movement, changes in command relationships, and increases or decreases in capability.
Using the COMINT gathering method enables the intelligence officer to produce an electronic order of battle by traffic analysis and content analysis among several enemy units. For example, if the following messages were intercepted:
U1 from U2, requesting permission to proceed to checkpoint X.
U2 from U1, approved. please report at arrival.
(20 minutes later) U1 from U2, all vehicles have arrived to checkpoint X.
This sequence shows that there are two units in the battlefield: unit 1 is mobile, while unit 2 is at a higher hierarchical level, perhaps a command post. One can also infer that unit 1 moved from one point to another that are about 20 minutes apart by vehicle. If these are regular reports over a period of time, they might reveal a patrol pattern. Direction-finding and radiofrequency MASINT could help confirm that the traffic is not deception.
The EOB buildup process is divided as follows:
Signal separation
Measurements optimization
Data Fusion
Networks build-up
Separation of the intercepted spectrum and of the signals intercepted from each sensor must take place in an extremely small period of time, in order to attribute the different signals to different transmitters in the battlefield. The complexity of the separation process depends on the complexity of the transmission methods (e.g., hopping or time division multiple access (TDMA)).
By gathering and clustering data from each sensor, the measurements of the direction of signals can be optimized and become much more accurate than the basic measurements of a standard direction-finding sensor. By calculating larger samples of the sensor's output data in near real-time, together with historical information on signals, better results are achieved.
Data fusion correlates data samples from different frequencies from the same sensor, "same" being confirmed by direction finding or radiofrequency MASINT. If an emitter is mobile, direction finding, other than discovering a repetitive pattern of movement, is of limited value in determining if a sensor is unique. MASINT then becomes more informative, as individual transmitters and antennas may have unique side lobes, unintentional radiation, pulse timing, etc.
Network build-up between each emitter (communications transmitter) and the others enables creation of the communications flows of a battlefield.

Electronic discovery


Electronic discovery (or e-discovery, eDiscovery) refers to discovery in civil litigation which deals with the exchange of information in electronic format (often referred to as electronically stored information or ESI). Usually (but not always) a digital forensics analysis is performed to recover evidence. A wider array of people are involved in eDiscovery (for example, forensic investigators, lawyers and IT managers) leading to problems with confusing terminology.
Data are identified as relevant by attorneys and placed on legal hold. Evidence is then extracted and analysed using digital forensic procedures, and is usually converted into PDF or TIFF form for use in court.
Electronic information is considered different from paper information because of its intangible form, volume, transience and persistence. Electronic information is usually accompanied by metadata that is not found in paper documents and that can play an important part as evidence (for example the date and time a document was written could be useful in a copyright case). The preservation of metadata from electronic documents creates special challenges to prevent spoliation. Electronic discovery was the subject of amendments to the Federal Rules of Civil Procedure (FRCP), effective December 1, 2006, as amended to December 1, 2010.
Individuals working in the field of electronic discovery commonly refer to the field as Litigation Support.


Examples of the types of data included in e-discovery are e-mail, instant messaging chats, documents, accounting databases, CAD/CAM files, Web sites, and any other electronically stored information that could be relevant evidence in a law suit. Also included in e-discovery is "raw data", which forensic investigators can review for hidden evidence. The original file format is known as the "native" format. Litigators may review material from e-discovery in one of several formats: printed paper, "native file", PDF format, or as single- or multi-page TIFF images.
Electronic messages
Quite often, discovery evidence is either delayed or never produced, many times because of the inaccessibility of the data. For example, backup tapes cannot be found, or are erased and reused.
This kind of situation reached its apex during the Zubulake v. UBS Warburg LLC lawsuit. Throughout the case, the plaintiff claimed that the evidence needed to prove the case existed in emails stored on UBS' own computer systems. Because the emails requested were either never found or destroyed, the court found that it was more likely that they existed than not. The court found that while the corporation's counsel directed that all potential discovery evidence, including emails, be preserved, the staff that the directive applied to did not follow through. This resulted in significant sanctions against UBS.
In 2006, the U.S. Supreme Court's amendments to the Federal Rules of Civil Procedure created a category for electronic records that, for the first time, explicitly named emails and instant message chats as likely records to be archived and produced when relevant. The rapid adoption of instant messaging as a business communications medium during the period 2005-2007 has made IM as ubiquitous in the workplace as email and created the need for companies to address archiving and retrieval of IM chats to the same extent they do for email.
With electronic message archiving in place for both email and IM, it becomes a fairly simple task to retrieve any email or IM chat that might be used in e-discovery. Some archiving systems apply a unique code to each archived message or chat to establish authenticity. The systems prevent alterations to original messages; messages cannot be deleted, nor can they be accessed by unauthorized persons.
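As an illustration of such a "unique code", the sketch below fingerprints an archived chat line with the public-domain FNV-1a hash; real archiving products typically use stronger, often cryptographic, schemes, and the message text here is invented:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* 64-bit FNV-1a hash of a message, used here as an illustrative fingerprint. */
uint64_t message_code(const char *msg) {
    uint64_t hash = 0xcbf29ce484222325ULL;         /* FNV offset basis */
    for (size_t i = 0; i < strlen(msg); i++) {
        hash ^= (unsigned char)msg[i];
        hash *= 0x100000001b3ULL;                  /* FNV prime */
    }
    return hash;
}

int main(void) {
    const char *chat = "2006-03-01 10:14 alice: please send the Q1 figures";
    printf("archived with code %016llx\n", (unsigned long long)message_code(chat));
    return 0;
}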
Also important to complying with discovery of electronic records is the requirement that records be produced in a timely manner. The changes to the Federal Rules of Civil Procedure were the culmination of a period of debate and review that started in March 2000 when then Vice President Al Gore’s fundraising activities were being probed by the United States Department of Justice. After White House counsel Beth Norton reported that it would take up to six months to search through 625 storage tapes, efforts began to mandate timelier discovery of electronic records.

Telecommunications data retention


In the field of telecommunications, data retention (or data preservation) generally refers to the storage of call detail records (CDRs) of telephony and internet traffic and transaction data (IPDRs) by governments and commercial organisations. In the case of government data retention, the data that is stored is usually of telephone calls made and received, emails sent and received and web sites visited. Location data is also collected.
The primary objective in government data retention is traffic analysis and mass surveillance. By analysing the retained data, governments can identify the locations of individuals, an individual's associates and the members of a group such as political opponents. These activities may or may not be lawful, depending on the constitutions and laws of each country. In many jurisdictions access to these databases may be made by a government with little or no judicial oversight (e.g. USA, UK, Australia).
In the case of commercial data retention, the data retained will usually be on transactions and web sites visited.
Data retention also covers data collected by other means (e.g. by automatic numberplate recognition systems) and held by government and commercial organisations.

Secrecy of correspondence


The secrecy of correspondence (German: Briefgeheimnis, French: secret de la correspondance, Swedish: brevhemlighet, Finnish: kirjesalaisuus, Hungarian: Levéltitok, Polish: Tajemnica korespondencji) or literally translated as secrecy of letters, is a fundamental legal principle enshrined in the constitutions of several European countries. It guarantees that the content of sealed letters is never revealed and letters in transit are not opened by government officials or any other third party. It is thus the main legal basis for the assumption of privacy of correspondence.
The principle has been naturally extended to other forms of communication, including telephony and electronic communications on the Internet, as the constitutional guarantees are generally thought to also cover these forms of communication. However, national telecommunications privacy laws may allow lawful interception, i.e. wiretapping and monitoring of electronic communications in cases of suspicion of crime. Paper letters have, in most jurisdictions, remained outside the legal scope of law enforcement surveillance, even in cases of "reasonable searches and seizures".
When applied to electronic communication, the principle protects not only the content of the communication, but also the information on when and to whom any messages (if any) have been sent (see: call detail records), and in the case of mobile communication, the location information of the mobile units. As a consequence, in jurisdictions with a safeguard on the secrecy of letters, location data collected from mobile phone networks has a higher level of protection than data collected by vehicle telematics or transport tickets.


In the United States there is no specific constitutional guarantee on the privacy of correspondence. The secrecy of letters and correspondence is derived through litigation from the Fourth Amendment to the United States Constitution. In an 1877 case the U.S. Supreme Court stated:
No law of Congress can place in the hands of officials connected with the Postal Service any authority to invade the secrecy of letters and such sealed packages in the mail; and all regulations adopted as to mail matter of this kind must be in subordination to the great principle embodied in the fourth amendment of the Constitution.
The protection of the Fourth Amendment has been extended beyond the home in other instances. A protection similar to that of correspondence has even been argued to extend to the contents of trash cans outside one's house, although unsuccessfully. Like all rights derived through litigation, the secrecy of correspondence is subject to interpretations. Rights derived from the Fourth Amendment are limited by the legal requirement of a "reasonable expectation of privacy".

Expectation of privacy


In United States constitutional law the expectation of privacy is a legal test which is crucial in defining the scope of the applicability of the privacy protections of the Fourth Amendment to the United States Constitution. It is related to, but is not the same thing as a right of privacy, a much broader concept which is found in many legal systems (see privacy law).
There are two types of expectations of privacy:
A subjective expectation of privacy is an opinion of a person that a certain location or situation is private. These obviously vary greatly from person to person.
An objective, legitimate or reasonable expectation of privacy is an expectation of privacy generally recognized by society.
Examples of places where a person has a reasonable expectation of privacy are a person's residence or hotel room and public places which have been specifically provided by businesses or the public sector to ensure privacy, such as public restrooms, private portions of jailhouses, or a phone booth.
In general, one cannot have a reasonable expectation of privacy in things held out to the public. A well-known example is that there are no privacy rights in garbage left for collection in a public place. Other examples include: account records held by a bank, a person's physical characteristics (including blood, hair, fingerprints, fingernails and the sound of one's voice), what the naked eye can see below in public airspace (without the use of special equipment), anything in open fields (e.g. a barn), odors emanating from a car or luggage, and paint scrapings on the outside of a car.
While a person may have a subjective expectation of privacy in his/her car, it is not always an objective one, unlike a person's home.
The privacy laws of the United States include the notion of a person's "open fields"; that is, places where a person's possessions do not have an objective expectation of privacy.

Copyright infringement

Copyright infringement is the unauthorized use of works under copyright, infringing the copyright holder's "exclusive rights", such as the right to reproduce or perform the copyrighted work, spread the information contained within copyrighted works, or to make derivative works. It often refers to copying "intellectual property" without written permission from the copyright holder, which is typically a publisher or other business representing or assigned by the work's creator.


Internet intermediaries were formerly understood to be internet service providers (ISPs). However, questions of liability have also emerged in relation to other Internet infrastructure intermediaries, including Internet backbone providers, cable companies and mobile communications providers.
In addition, intermediaries are now also generally understood to include Internet portals, software and games providers, those providing virtual information such as interactive forums and comment facilities with or without a moderation system, aggregators, universities, libraries and archives, web search engines, chat rooms, web blogs, mailing lists, and any website which provides access to third party content through, for example, hyperlinks, a crucial element of the World Wide Web.


Early court cases focused on the liability of Internet service providers (ISPs) for hosting, transmitting or publishing user-supplied content that could be actioned under civil or criminal law, such as libel, defamation, or pornography. Because different content was considered in different legal systems, and in the absence of common definitions for "ISPs," "bulletin boards" or "online publishers," early law on online intermediaries' liability varied widely from country to country. The first laws on online intermediaries' liability were passed from the mid-1990s onwards.
The debate has shifted away from questions about liability for specific content, including that which may infringe copyright, towards whether online intermediaries should be generally responsible for content accessible through their services or infrastructure.
The U.S. Digital Millennium Copyright Act (1998) and the European E-Commerce Directive (2000) provide online intermediaries with limited statutory immunity from liability for copyright infringement. Online intermediaries hosting content that infringes copyright are not liable, so long as they do not know about it and take action once the infringing content is brought to their attention. In U.S. law these are characterized as "safe harbor" provisions, and in European law as the "mere conduit" principle.

Search and seizure


Search and seizure is a legal procedure used in many civil law and common law legal systems whereby police or other authorities and their agents, who suspect that a crime has been committed, search a person's property and confiscate any evidence relevant to the crime.
Some countries have provisions in their constitutions that provide the public with the right to be free from "unreasonable" search and seizure. This right is generally based on the premise that everyone is entitled to a reasonable right to privacy.
Though interpretation may vary, this right sometimes requires law enforcement to obtain a search warrant before engaging in any form of search and seizure. In cases where evidence is seized in a search, that evidence might be excluded from court proceedings, for example through a motion to suppress the evidence under the exclusionary rule.


In corporate and administrative law, Supreme Court interpretation has evolved in favor of stronger government investigatory power. In the Supreme Court case Federal Trade Commission v. American Tobacco Co., the Court ruled that the FTC, while having been granted a broad subpoena power, did not have the right to a general "fishing expedition" into private papers, searching both the relevant and the irrelevant in the hope that something would turn up. Justice Holmes ruled that this would go against "the spirit and the letter" of the Fourth Amendment.
Later, in the 1946 case Oklahoma Press Pub. Co. v. Walling, a distinction was made between a "figurative or constructive search" and an actual search and seizure. The court held that constructive searches are limited by the Fourth Amendment, while an actual search and seizure requires a warrant based on "probable cause". In the case of a constructive search where the records and papers sought are of corporate character, the court held that the Fourth Amendment does not apply, since corporations are not entitled to all the constitutional protections created to protect the rights of private individuals.

Fight Against Coercive Tactics Network


Fight Against Coercive Tactics Network, also known as FACTNet, co-founded by Robert Penny and Lawrence Wollersheim, is a Colorado-based organization committed to educating and facilitating communication about destructive mind control. Coercive tactics, or "coercive psychological systems", are defined on their website as "unethical mind control such as brainwashing, thought reform, destructive persuasion and coercive persuasion". While this appears to cover a massive array of issues, in practice FACTNet's primary dedication is to the exposure and disruption of cult activity.


Legal cases involving the organization and the Religious Technology Center are cited in analysis of fair use law. The book Internet and Online Law noted that "reproduction in computer format of plaintiff's entire copyrighted texts for defendants' private use and study falls well within the fair use exception." The work Cyber Rights: Defending Free Speech in the Digital Age characterized FACTNet as part of the "publishers and posters" group, when analyzing Scientology related legal cases in the chapter: "The Battle over Copyright on the Net." The author also placed Dennis Erlich and Arnie Lerma in this classification while analyzing actions taken by the Church of Scientology, which the author calls a "famously litigious organization."


The FACTNet newsletter is described in the book Project Censored Guide to Independent Media and Activism as "the oldest and largest cult and mind control resource on the internet." The organization is also cited as a resource by Flo Conway and Jim Siegelman in the 1995 edition of their book Snapping: America's Epidemic of Sudden Personality Change. The book California by Andrea Schulte-Peevers asks readers to consult FACTNet and draw their own conclusions about whether Scientology is a "Mind-control cult, trendy fad or true religion." The St. Petersburg Times described the site as an "anti-cult site that focuses on Scientology and its legal battles." The Washington Post noted that the site contains "several books and thousands of pages of documents relating to Scientology." Web sites of groups followed by FACTNet are grouped on the site next to those of related watchdog organizations and critical sites. In a piece on the company Landmark Education, The Boston Globe noted that FACTNet listed the group in its database of "cults, groups and individuals that are alleged to be using coercive persuasion mind control techniques," though the organization has a history of suing those that refer to it as a "cult."

Scientology controversies


Since the Church of Scientology's inception in 1954, numerous Scientologists have been involved in scandals, at times serving prison sentences for crimes such as those committed in Operation Snow White. When mainstream media outlets have reported alleged abuses, however, representatives of the church have tended to respond by counterattack, blaming the allegations on critics with an alleged agenda to misrepresent the organization's intentions. Many critics have called into question several of the practices and policies that the Scientology organization has in place in its dealings with critics and detractors.


The church maintains strict control over the use of its symbols, names and religious texts. It holds copyright and trademark ownership over its cross and has taken legal action against individuals and organizations that have quoted short paragraphs of Scientology texts in print or on web sites. Individuals or groups who practice Scientology without affiliation with the church have been sued for violation of copyright and trademark law.
Although U.S. intellectual property law allows for "fair use" of material for commentary, parody, educational purposes, etc., critics of the church such as Gerry Armstrong have argued that the church unfairly and illegally uses the legal system to suppress "fair" uses.
One example cited by critics is a 1995 lawsuit against the Washington Post newspaper et al. The Religious Technology Center (RTC), the corporation that controls L. Ron Hubbard's copyrighted materials, sued to prevent a Post reporter from describing church teachings at the center of another lawsuit, claiming copyright infringement, trade secret misappropriation, and that the circulation of their "advanced technology" teachings would cause "devastating, cataclysmic spiritual harm" to those not prepared. In her judgment in favor of the Post, Judge Leonie Brinkema noted:
When the RTC first approached the Court with its ex parte request for the seizure warrant and Temporary Restraining Order, the dispute was presented as a straight-forward one under copyright and trade secret law. However, the Court is now convinced that the primary motivation of RTC in suing Lerma, DGS and the Post is to stifle criticism of Scientology in general and to harass its critics. As the increasingly vitriolic rhetoric of its briefs and oral argument now demonstrates, the RTC appears far more concerned about criticism of Scientology than vindication of its secrets.
—U.S. District Judge Leonie Brinkema, Religious Technology Center v. Arnaldo Lerma, Washington Post, Mark Fisher

Streisand effect


The Streisand effect is a primarily online phenomenon in which an attempt to hide or remove a piece of information has the unintended consequence of publicizing the information more widely. It is named after American entertainer Barbra Streisand, whose attempt in 2003 to suppress photographs of her residence inadvertently generated further publicity.
Similar attempts have been made, for example through cease-and-desist letters, to suppress numbers, files and websites. Instead of being suppressed, the information receives extensive publicity, often spawning derivative media such as videos and spoof songs, and is widely mirrored across the Internet or distributed on file-sharing networks.
Mike Masnick of Techdirt coined the term after Streisand, citing privacy violations, unsuccessfully sued photographer Kenneth Adelman and Pictopia.com for US$50 million in an attempt to have an aerial photograph of her mansion removed from the publicly available collection of 12,000 California coastline photographs. Adelman said that he was photographing beachfront property to document coastal erosion as part of the government-sanctioned and commissioned California Coastal Records Project. Before Streisand filed her lawsuit, "Image 3850" had been downloaded from Adelman's website only six times; two of those downloads were by Streisand's attorneys. As a result of the case, public knowledge of the picture increased substantially; more than 420,000 people visited the site over the following month.

LulzSec


Lulz Security, commonly abbreviated as LulzSec, was a computer hacker group that claimed responsibility for several high-profile attacks, including the compromise of user accounts from Sony Pictures in 2011. The group also claimed responsibility for taking the CIA website offline. Some security professionals have commented that LulzSec drew attention to insecure systems and the dangers of password reuse. It gained attention due to its high-profile targets and the sarcastic messages it posted in the aftermath of its attacks. One of the founders of LulzSec was a computer security specialist who used the online moniker Sabu. The man accused of being Sabu has helped law enforcement track down other members of the organization as part of a plea deal. At least four associates of LulzSec were arrested in March 2012 as part of this investigation. British authorities had previously announced the arrests of two teenagers they alleged were LulzSec members T-flow and Topiary.
At just after midnight (BST, UT+01) on 26 June 2011, LulzSec released a "50 days of lulz" statement, which it claimed would be its final release, confirming that LulzSec consisted of seven members and that its website was to be taken down. The group's breakup was unexpected. The release included accounts and passwords from many different sources. Despite claims of retirement, the group committed another hack against newspapers owned by News Corporation on 18 July, defacing them with false reports regarding the death of Rupert Murdoch. The group helped launch Operation AntiSec, a joint effort involving LulzSec, Anonymous, and other hackers.


The group's first attacks came in May 2011. Its first recorded target was Fox.com, which it attacked in retaliation after the Fox News Channel called Common, a rapper and entertainer, "vile". The group leaked several passwords, LinkedIn profiles, and the names of 73,000 X Factor contestants. Soon after, on 15 May, it released the transaction logs of 3,100 automated teller machines in the United Kingdom. In May 2011, members of Lulz Security gained international attention for hacking into the website of the American Public Broadcasting Service (PBS). They stole user data and posted a fake story on the site claiming that Tupac Shakur and Biggie Smalls were still alive and living in New Zealand. In the aftermath of the attack, CNN referred to the responsible group as the "Lulz Boat".
Lulz Security claimed that some of its hacks, including its attack on PBS, were motivated by a desire to defend WikiLeaks and Bradley Manning. A Fox News report on the group quoted one commentator, Brandon Pike, who claimed that Lulz Security was affiliated with the hacktivist group Anonymous. Lulz Security claimed that Pike had actually hired it to hack PBS. Pike denied the accusation and claimed it was leveled against him because he had said Lulz Security was a splinter of Anonymous.
In June 2011, members of the group claimed responsibility for an attack against Sony Pictures that took data including "names, passwords, e-mail addresses, home addresses and dates of birth for thousands of people." The group claimed that it used a SQL injection attack, and that it was motivated by Sony's legal action against George Hotz for jailbreaking the PlayStation 3. The group claimed it would launch an attack that would be the "beginning of the end" for Sony. Some of the compromised user information has since been used in scams. The group claimed to have compromised over 1,000,000 accounts, though Sony said the real number was around 37,500.
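The account above states only that a SQL injection attack was claimed; the actual vulnerability and queries involved are not documented here. As a purely hypothetical sketch of the technique named, the following Python example (using an in-memory sqlite3 database and an invented users table) contrasts a query built by string concatenation, which an attacker can subvert, with a parameterized query that treats the same input as plain data.

    import sqlite3

    # Hypothetical illustration only: an in-memory database with an invented schema.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

    # A classic injection payload supplied where a username is expected.
    user_input = "' OR '1'='1"

    # Vulnerable pattern: the input is concatenated into the SQL text, so the
    # payload rewrites the WHERE clause and the query matches every row.
    vulnerable = "SELECT username FROM users WHERE username = '%s'" % user_input
    print(conn.execute(vulnerable).fetchall())   # [('alice',)] despite the bogus name

    # Safer pattern: a parameterized query keeps the input as data, not SQL,
    # so the same payload matches nothing.
    safe = "SELECT username FROM users WHERE username = ?"
    print(conn.execute(safe, (user_input,)).fetchall())   # []

The point of the sketch is that this class of flaw is an input-handling error rather than anything exotic, which is why parameterized queries are the standard mitigation.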

Operation Titstorm


The February 2010 Australian cyberattacks were a series of denial-of-service attacks conducted by the Anonymous online community against the Australian government in response to proposed web censorship regulations. Operation Titstorm was the name given to the attacks by the perpetrators. They resulted in lapses of access to government websites on 10 and 11 February 2010, accompanied by emails, faxes, and phone calls harassing government offices. The actual size of the attack and the number of perpetrators involved are unknown, but the number of systems involved was estimated to range from the hundreds to the thousands. The amount of traffic caused disruption on multiple government websites.
Australian Telecommunications Minister Stephen Conroy proposed the regulations, which would mainly filter sites with pornographic content. Various groups advocating uncensored access to the Internet, along with web-based companies such as Google and Yahoo!, objected to the proposed filter. A spokesperson for Conroy said that the actions were not a legitimate form of protest and called them irresponsible. The attacks also drew criticism from other filter protest groups. The initial stage was followed by small in-person protests on 20 February that were called "Project Freeweb".


A spokeswoman for Conroy said such attacks were not a legitimate political protest. According to her, they were "totally irresponsible and potentially deny services to the Australian public". The Systems Administrators Guild of Australia said that it "condemned DoS attacks as the wrong way to express disagreement with the proposed law." Anti-censorship groups criticised the attacks, saying they hurt their cause. A purported spokesperson for the attackers recommended that the wider Australian public protest the filter by signing the petition of Electronic Frontiers Australia.
Anonymous coordinated a second phase with small protests outside the Parliament House in Canberra and in major cities throughout Australia on 20 February. Additional demonstrations were held at some of the country's embassies overseas. The organizers called the follow-up protests "Project Freeweb" to differentiate them from the criticised cyber attacks.
Several supporters of the attack later said on a messageboard that taking down websites was not enough to convince the government to back down on the web filtering policy and called for violence. Others disagreed with such actions and proposed launching an additional attack on a popular government site. A spokesman for Electronic Frontiers Australia said he believed there was no real intention or capacity to follow through with any of the violent threats.
The attack also resulted in criticism of Australia's terrorism laws from The University of New South Wales Law Journal. One contributor wrote that the provisions leave "no place for legitimate acts of online protest, or at least sets the penalty far too high for relatively minor cybervandalism."
An Australian teenager was charged with four counts of inciting other hackers to impair electronic communications and two of unauthorised access to restricted data for his role in the attack. He was ordered to pay a bond instead of being convicted after pleading guilty and showing good behaviour.

Operation Avenge Assange


In December 2010, WikiLeaks came under intense pressure to stop publishing secret United States diplomatic cables. Corporations such as Amazon, PayPal, Bank of America, PostFinance, MasterCard and Visa either stopped working with WikiLeaks or froze donations to it due to political pressure. In response, those behind Operation Payback directed their activities against these companies for dropping support for WikiLeaks. Operation Payback launched DDoS attacks against PayPal, the Swiss bank PostFinance and the Swedish Prosecution Authority. On 8 December 2010, a coordinated DDoS attack by Operation Payback brought down both the MasterCard and Visa websites. On 9 December 2010, prior to a sustained DDoS attack on the PayPal website that caused a minor slowdown to its service, PayPal announced on its blog that it would release all remaining funds in the account of the Wau Holland Foundation, which was raising funds for WikiLeaks, but would not reactivate the account. Regarding the attacks, WikiLeaks spokesman Kristinn Hrafnsson denied any relation to the group and said: "We neither condemn nor applaud these attacks. We believe they are a reflection of public opinion on the actions of the targets." On the same day, a 16-year-old boy was arrested in The Hague, Netherlands, in connection with the distributed denial-of-service attacks against MasterCard and PayPal. The boy was an IRC operator under the nickname Jeroenz0r.
On 10 December 2010, The Daily Telegraph reported that Anonymous had threatened to disrupt British government websites if Assange were extradited to Sweden. Anonymous issued a press release in an attempt to clarify the issue.

Operation Payback


Operation Payback is a coordinated, decentralized group of attacks on opponents of Internet piracy by Internet activists using the "Anonymous" moniker, a group sometimes affiliated with the website 4chan. Operation Payback started as retaliation for distributed denial-of-service (DDoS) attacks on torrent sites; piracy proponents then decided to launch DDoS attacks on piracy opponents. The initial reaction snowballed into a wave of attacks on major pro-copyright and anti-piracy organizations, law firms, and individuals. Following the United States diplomatic cables leak in December 2010, the organizers commenced DDoS attacks on the websites of banks that had withdrawn banking facilities from WikiLeaks.


In 2010, several Bollywood companies hired Aiplex Software to launch DDoS attacks on websites that did not respond to takedown notices. Piracy activists then created Operation Payback in September 2010 in retaliation. The original plan was to attack Aiplex Software directly, but upon finding, some hours before the planned DDoS, that another individual had taken down the firm's website on their own, Operation Payback moved to launching attacks against the websites of the pro-copyright organisations the Motion Picture Association of America (MPAA) and the International Federation of the Phonographic Industry, giving the two websites a combined total downtime of 30 hours. In the following two days, Operation Payback attacked a multitude of sites affiliated with the MPAA, the Recording Industry Association of America (RIAA), and the British Phonographic Industry. Law firms such as ACS:Law, Davenport Lyons and Dunlap, Grubb & Weaver (of the US Copyright Group) were also attacked.