SCHOOL OF ENGINEERING
NAMES: NELSON J. CHIRWA
COURSE: COMPUTER HACKING FORENSICS INVESTIGATION
STUDENT NUMBER: 1412398872
BACHELOR OF SCIENCE IN INFORMATION SECURITY AND
COMPUTER FORENSICS
ASSIGNMENT NUMBER: 1 – 10
LECTURER: MR INNOCENT NSUNGA
1. Investigating Web Attacks
Web applications often face suspicious activity for various reasons, such as a kid scanning a website with an automated vulnerability scanner or a person trying to fuzz a parameter for SQL Injection. In many such cases, the logs on the web server have to be analyzed to figure out what is going on, and a serious case may require a forensic investigation. Beyond incident response, there are other scenarios as well: an administrator needs to understand how to analyze logs from a security standpoint, and people who are just beginning with hacking/penetration testing must understand why they should not test or scan websites without prior permission. This article covers the basic concepts of log analysis that address these scenarios.
Setup
For demo purposes, I have the following setup.
Apache server (pre-installed in Kali Linux)
This can be started using the following command:
service apache2 start
MySQL (pre-installed in Kali Linux)
This can be started using the following command:
service mysql start
A vulnerable web application built using PHP-MySQL
I have developed a vulnerable web application using PHP and hosted it on the Apache-MySQL stack described above. With this setup in place, I scanned the URL of the vulnerable application using a few automated tools available in Kali Linux (ZAP, w3af). Now let us walk through various cases in analyzing the logs.
Logging in the Apache server
It is always recommended to maintain logs on a web server; they are often the only record available when an incident must be investigated.
The default location of Apache server logs on Debian systems is /var/log/apache2/access.log
Logging is just the process of storing records on the server; the logs must also be analyzed for proper results. In the next section, we will see how to analyze the Apache server's access logs to figure out whether any attacks are being attempted on the website.
Analyzing the logs
Manual inspection
When logs are small, or when we are looking for a specific keyword, we can spend some time inspecting them manually using tools like grep. In the following figure, we are searching for all the requests that have the keyword "union" in the URL.
From the figure above, we can see the query "union select 1,2,3,4,5" in the URL. It is obvious that someone with the IP address 192.168.56.105 has attempted SQL Injection. Similarly, we can search for other specific requests whenever we know the keywords to look for. In the following figure, we are searching for requests that try to read "/etc/passwd", which is obviously a Local File Inclusion (LFI) attempt.
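Such keyword hunts are easy to script as well. Below is a minimal Python sketch along the same lines, assuming the default Debian log path and a hand-picked keyword list (both are assumptions to adapt):
suspicious = ["union", "/etc/passwd", "%27", "appscan"]

with open("/var/log/apache2/access.log", errors="replace") as log:
    for line in log:
        lowered = line.lower()
        # print any request line containing a suspicious keyword
        if any(keyword in lowered for keyword in suspicious):
            print(line.rstrip())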
As that screenshot shows, there are many requests attempting LFI, all sent from the IP address 127.0.0.1; such requests are generated by an automated tool. In many cases it is easy to recognize logs produced by an automated scanner: scanners are noisy and often use vendor-specific payloads when testing an application. For example, IBM AppScan uses the word "appscan" in many payloads, so spotting such requests in the logs tells us what is going on. Microsoft Excel is also a great tool for opening and analyzing a log file; we can import the log into Excel by specifying "space" as the delimiter, which comes in handy when we don't have a log-parsing tool. Aside from keywords, it is highly important to have basic knowledge of HTTP status codes during an analysis. The table below summarizes the HTTP status code classes.
Status code class | Meaning
1xx | Informational
2xx | Successful
3xx | Redirection
4xx | Client Error
5xx | Server Error
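When eyeballing status codes in bulk, a quick tally helps spot anomalies such as a spike in 4xx responses from a scanner. A small sketch, assuming the common/combined log format, where the status code is the ninth space-separated field:
from collections import Counter

counts = Counter()
with open("/var/log/apache2/access.log", errors="replace") as log:
    for line in log:
        fields = line.split()
        # field 9 (index 8) is the status code in the common/combined format
        if len(fields) > 8 and fields[8].isdigit():
            counts[fields[8][0] + "xx"] += 1
print(counts)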
Web shells
Web shells are another problem for websites/servers. A web shell gives an attacker complete control of the server, and in some instances can be used to gain access to all the other sites hosted on the same server. The following screenshot shows the same access.log file opened in Microsoft Excel, with a filter applied on the column specifying the file being accessed by the client.
On close observation, we see a file named "b374k.php" being accessed. "b374k" is a popular web shell, which makes this file highly suspicious. Combined with the response code "200", this line indicates that someone has uploaded a web shell and is accessing it on the web server. The uploaded shell does not always keep its original name; in many cases attackers rename it to avoid suspicion. This is where we have to act smart and check whether the files being accessed are regular application files or something unusual. We can go further and examine file types and timestamps for anything suspicious.
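One way to act smart at scale is to flag successful requests for script files that are not part of the known application. A hedged sketch; the whitelist below is purely illustrative and must be built from the real site:
known_good = {"/index.php", "/login.php", "/search.php"}  # illustrative only

with open("/var/log/apache2/access.log", errors="replace") as log:
    for line in log:
        fields = line.split()
        if len(fields) > 8:
            path = fields[6].split("?")[0]
            # a 200 response for an unknown PHP file deserves a closer look
            if path.endswith(".php") and path not in known_good and fields[8] == "200":
                print(line.rstrip())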
One single quote for the win
It is a known fact that SQL Injection is one of the most common vulnerabilities in web applications, and most people who get started with web application security begin their learning with it. Identifying a traditional SQL Injection is as easy as appending a single quote to a URL parameter and breaking the query. Anything we pass can be logged on the server, and it is possible to trace it back. The following screenshot shows the access log entry where a single quote is passed to check for SQL Injection in the parameter "user"; %27 is the URL-encoded form of a single quote.
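The decoding is trivial to verify, for instance with Python's standard library:
from urllib.parse import unquote
print(unquote("user=admin%27"))  # prints: user=admin'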
For administration purposes, we can also perform query monitoring to see which queries are
executed on the database.
The figure above shows the query executed as a result of the request made in the previous figure, where we passed a single quote through the parameter "user". We will discuss logging in databases in more detail later in this article.
Analysis with automated tools
When there is a huge amount of logs, manual inspection becomes difficult. In such scenarios we can use automated tools along with some manual inspection. Though there are many effective commercial tools, I am introducing a free tool known as Scalp. According to its official page, "Scalp is a log analyzer for the Apache web server that aims to look for security problems. The main idea is to look through huge log files and extract the possible attacks that have been sent through HTTP/GET." It is a Python script, so it requires Python to be installed on our machine. The following figure shows the help output of this tool.
As we can see in the figure, we feed it the log file to be analyzed using the flag "-l". Along with that, we provide a filter file using the flag "-f", with which Scalp identifies the possible attacks in the access.log file. We can use a filter from the PHPIDS project to detect malicious attempts; this file is named default_filter.xml.
<filter>
  <id>12</id>
  <rule><![CDATA[(?:etc\/\W*passwd)]]></rule>
  <description>Detects etc/passwd inclusion attempts</description>
  <tags>
    <tag>dt</tag>
    <tag>id</tag>
    <tag>lfi</tag>
  </tags>
  <impact>5</impact>
</filter>
Scalp uses rule sets defined in XML tags like these to detect various attack attempts; the snippet above is an example rule that detects a file inclusion attempt, and other attack types are detected in the same way. After downloading the filter file, place it in the same folder as Scalp and run the following command to analyze the logs:
python scalp-0.4.py -l /var/log/apache2/access.log -f filter.xml -o output --html
'output' is the directory where the report will be saved; Scalp creates it automatically if it doesn't exist. --html generates the report in HTML format.
As we can see in the above figure, Scalp reports that it analyzed 4001 lines out of 4024 and found 296 attack patterns. Lines that were not analyzed for some reason can even be saved using the "--except" flag. After running the above command, a report is generated in the output directory; we can open it in a browser and review the results. The following screenshot shows a small part of the output, listing directory traversal attack attempts.
Logging in MySQL
This section deals with the analysis of attacks on databases and possible ways to monitor them. The first step is to see which variables are set, using "show variables;" as shown below.
The following figure shows the output of the above command.
As we can see in the figure, logging is turned on (by default this value is OFF). Another important entry is "log_output", which shows that logs are being written to a FILE; alternatively, they can be written to a table. We can also see that "log_slow_queries" is ON; its default is likewise OFF.
Query monitoring in MySQL
The general query log records established client connections and the statements received from clients. As mentioned earlier, it is not enabled by default, since it reduces performance. We can enable it right from the MySQL terminal, or we can edit the MySQL configuration file, as shown below. I am using the VIM editor to open the "my.cnf" file, which is located under the "/etc/mysql/" directory.
If we scroll down, we can see a Logging and Replication section where we can enable logging. These logs are written to a file called mysql.log. We can also see the warning that this log type is a performance killer; administrators usually enable it only for troubleshooting purposes.
We can also see the entry "log_slow_queries", which logs queries that take a long time to execute.
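As a rough sketch (option names vary across MySQL versions, and the paths below are the usual Debian defaults rather than values confirmed by the screenshots), the relevant my.cnf section may look like this:
# General query log: records every connection and statement (a performance killer)
general_log_file = /var/log/mysql/mysql.log
general_log      = 1
# Slow query log; "log_slow_queries" is the older pre-5.6 option name
log_slow_queries = /var/log/mysql/mysql-slow.log
long_query_time  = 2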
Now everything is set. If someone hits the database with a malicious query, we can observe it in these logs, as shown below.
The above figure shows a query hitting the database named "web service" and attempting an authentication bypass using SQL Injection.
More logging
By default, Apache logs only GET requests. To log POST data, we can use an Apache module called "mod_dumpio"; alternatively, "mod_security" can achieve the same result.
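For orientation, here is a hedged sketch of what enabling mod_dumpio can look like in the Apache configuration (the directives are real, but module paths and log levels depend on the Apache version and distribution):
# Load the module (on Debian/Kali: a2enmod dumpio)
LoadModule dumpio_module modules/mod_dumpio.so
# Log request bodies (POST data); extremely verbose, so enable only briefly
DumpIOInput On
# Apache 2.4 syntax; older 2.2 setups use "LogLevel debug" with DumpIOLogLevel
LogLevel dumpio:trace7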
2. Investigating DoS Attacks
Types of DoS attacks
Denial-of-service attacks are characterized by an explicit attempt by attackers to prevent
legitimate users of a service from using that service. In a DDoS attack, the incoming traffic
flooding the victim originates from many different sources – potentially hundreds of thousands
or more. This effectively makes it impossible to stop the attack simply by blocking a single IP
address; plus, it is very difficult to distinguish legitimate user traffic from attack traffic when
spread across so many points of origin. There are two general forms of DoS attacks: those that
crash services and those that flood services. The most serious attacks are distributed. Many
attacks involve forging of IP sender addresses (IP address spoofing) so that the location of the
attacking machines cannot easily be identified and so that the attack cannot be easily defeated
using ingress filtering.
Distributed DoS
A distributed denial-of-service (DDoS) attack is one in which the perpetrator uses more than one, often thousands of, unique IP addresses. The scale of DDoS attacks has continued to rise over recent years, even exceeding 1 Tbit/s.
Advanced persistent DoS
An advanced persistent DoS (APDoS) is more likely to be perpetrated by an advanced
persistent threat (APT): actors who are well resourced, exceptionally skilled and have access to
substantial commercial grade computer resources and capacity. APDoS attacks represent a clear
and emerging threat needing specialised monitoring and incident response services and the
defensive capabilities of specialised DDoS mitigation service providers. This type of attack
involves massive network layer DDoS attacks through to focused application layer (HTTP)
floods, followed by repeated (at varying intervals) SQLi and XSS attacks. Typically, the perpetrators can simultaneously use from two to five attack vectors involving up to several tens of millions of requests per second, often accompanied by large SYN floods that can attack not only the victim but also any service provider implementing any sort of managed DDoS mitigation capability. These attacks can persist for several weeks; the longest continuous period noted so far lasted 38 days. One such APDoS attack involved approximately 50+ petabits (50,000+ terabits) of malicious traffic. Attackers in this scenario may tactically switch between several targets to create a diversion and evade defensive DDoS countermeasures, all the while eventually concentrating the main thrust of the attack onto a single victim. In this scenario, threat actors with continuous access to several very powerful network resources are capable of sustaining a prolonged campaign generating enormous levels of unamplified DDoS traffic.
APDoS attacks are characterised by:
- advanced reconnaissance (pre-attack OSINT and extensive decoyed scanning crafted to evade detection over long periods)
- tactical execution (attack with a primary and a secondary victim, but focus on the primary)
- explicit motivation (a calculated end game/goal target)
- large computing capacity (access to substantial computer power and network bandwidth resources)
- simultaneous multi-threaded OSI layer attacks (sophisticated tools operating at layers 3 through 7)
- persistence over extended periods (utilising all the above in a concerted, well-managed attack across a range of targets).
Denial-of-service as a service
Some vendors provide so-called "booter" or "stresser" services, which have simple web-based
front ends, and accept payment over the web. Marketed and promoted as stress-testing tools, they
can be used to perform unauthorized denial-of-service attacks, and allow technically
unsophisticated attackers access to sophisticated attack tools without the need for the attacker to
understand their use.
Symptoms
The United States Computer Emergency Readiness Team (US-CERT) has identified the following symptoms of a denial-of-service attack:
- unusually slow network performance (opening files or accessing web sites)
- unavailability of a particular web site
- inability to access any web site
- dramatic increase in the number of spam emails received (this type of DoS attack is considered an e-mail bomb).
Additional symptoms may include:
- disconnection of a wireless or wired internet connection
- long-term denial of access to the web or any internet services.
If the attack is conducted on a sufficiently large scale, entire geographical regions of Internet
connectivity can be compromised without the attacker's knowledge or intent by incorrectly
configured or flimsy network infrastructure equipment.
Attack techniques
A wide array of programs is used to launch DoS attacks.
Attack tools
In cases such as MyDoom the tools are embedded in malware, and launch their attacks without
the knowledge of the system owner. Stacheldraht is a classic example of a DDoS tool. It utilizes
a layered structure where the attacker uses a client program to connect to handlers, which are
compromised systems that issue commands to the zombie agents, which in turn facilitate the
DDoS attack. Agents are compromised via the handlers by the attacker, using automated routines
to exploit vulnerabilities in programs that accept remote connections running on the targeted
remote hosts. Each handler can control up to a thousand agents.
In other cases a machine may become part of a DDoS attack with the owner's consent, for
example, in Operation Payback, organized by the group Anonymous. The LOIC has typically
been used in this way. Along with HOIC a wide variety of DDoS tools are available today,
including paid and free versions, with different features available. There is an underground
market for these in hacker related forums and IRC channels.
UK's GCHQ has tools built for DDoS, named PREDATORS FACE and ROLLING THUNDER.
Application-layer floods
Various DoS-causing exploits such as buffer overflow can cause server software to become confused and fill the disk space or consume all available memory or CPU time. Other kinds of DoS rely primarily on brute force, flooding the target with an overwhelming flux of packets, oversaturating its connection bandwidth or depleting the target's system resources. Bandwidth-saturating floods rely on the attacker having more bandwidth available than the victim; a common way of achieving this today is distributed denial-of-service employing a botnet.
Another goal of a DDoS attack may be to produce added costs for the application operator when the latter uses cloud-based resources. In this case the application's resources are normally tied to a required Quality of Service (QoS) level (e.g., responses in under 200 ms), and this rule is usually linked to automated software (e.g., Amazon CloudWatch) that raises more virtual resources from the provider in order to meet the defined QoS level for the increased requests. The main incentive behind such attacks may be to drive the application owner to raise the elasticity level to handle the increased traffic, causing financial losses or forcing the owner to become less competitive. Other floods may use specific packet types or connection requests to saturate finite resources, for example by occupying the maximum number of open connections or filling the victim's disk space with logs. A "banana attack" is another particular type of DoS: it involves redirecting outgoing messages from the client back onto the client, preventing outside access as well as flooding the client with the sent packets. A LAND attack is of this type. An attacker with shell-level access to a victim's computer may slow it until it is unusable, or crash it using a fork bomb. A kind of application-level DoS is XDoS (or XML DoS), which can be controlled by modern web application firewalls (WAFs).
Degradation-of-service attacks
"Pulsing" zombies are compromised computers that are directed to launch intermittent and shortlived floodings of victim websites with the intent of merely slowing it rather than crashing it.
This type of attack, referred to as "degradation-of-service" rather than "denial-of-service", can be
more difficult to detect than regular zombie invasions and can disrupt and hamper connection to
websites for prolonged periods of time, potentially causing more disruption than concentrated
floods. Exposure of degradation-of-service attacks is complicated further by the matter of
discerning whether the server is really being attacked or under normal traffic loads.
Denial-of-service Level II
The goal of a DoS L2 (possibly DDoS) attack is to trigger a defense mechanism that blocks the network segment from which the attack originated. In the case of a distributed attack, or when the IP header is modified (depending on the kind of security behavior), the defense will fully block the attacked network from the Internet, but without a system crash.
Distributed DoS attack
A distributed denial-of-service (DDoS) attack occurs when multiple systems flood the bandwidth or resources of a targeted system, usually one or more web servers. Such an attack is often the result of multiple compromised systems (for example, a botnet) flooding the targeted system with traffic. A botnet is a network of zombie computers programmed to receive commands without the owners' knowledge. When a server is overloaded with connections, new connections can no longer be accepted. The major advantages to an attacker of using a distributed denial-of-service attack are that multiple machines can generate more attack traffic than one machine, multiple attack machines are harder to turn off than one attack machine, and the behavior of each attack machine can be stealthier, making it harder to track and shut down. These attacker advantages create challenges for defense mechanisms. For example, merely purchasing more incoming bandwidth than the current volume of the attack might not help, because the attacker might simply add more attack machines; the net effect can still be to completely crash a website for periods of time.
Malware can carry DDoS attack mechanisms; one of the better-known examples of this
was MyDoom. Its DoS mechanism was triggered on a specific date and time. This type of DDoS
involved hardcoding the target IP address prior to release of the malware and no further
interaction was necessary to launch the attack. A system may also be compromised with a trojan, allowing the attacker to download a zombie agent, or the trojan may contain one. Attackers can also break into systems using automated tools that exploit flaws in programs that listen for connections from remote hosts; this scenario primarily concerns systems acting as servers on the web. Stacheldraht, with the layered handler/agent structure described earlier, is the classic example of this recruitment model. These attacks can use different types of Internet packets, such as TCP, UDP, and ICMP.
These collections of compromised systems are known as botnets. DDoS tools like Stacheldraht still use classic DoS attack methods centered on IP spoofing and amplification, such as smurf attacks and fraggle attacks (also known as bandwidth consumption attacks). SYN floods (also known as resource starvation attacks) may also be used. Newer tools can use DNS servers for DoS purposes. Unlike MyDoom's DDoS mechanism, botnets can be turned against any IP address. Script kiddies use them to deny the availability of well-known websites to legitimate users, while more sophisticated attackers use DDoS tools for the purposes of extortion, even against their business rivals.
Simple attacks such as SYN floods may appear with a wide range of source IP addresses, giving the appearance of a widely distributed DoS. These flood attacks do not require completion of the TCP three-way handshake; they attempt to exhaust the destination SYN queue or the server bandwidth. Because the source IP addresses can be trivially spoofed, an attack could come from a limited set of sources, or may even originate from a single host. Stack enhancements such as SYN cookies may be effective mitigation against SYN queue flooding; however, complete bandwidth exhaustion may require the involvement of the upstream provider.
If an attacker mounts an attack from a single host it would be classified as a DoS attack. In fact,
any attack against availability would be classed as a denial-of-service attack. On the other hand,
if an attacker uses many systems to simultaneously launch attacks against a remote host, this
would be classified as a DDoS attack.
It has been reported that devices from the Internet of Things have been involved in denial-of-service attacks. In one noted attack, the flood peaked at around 20,000 requests per second and came from around 900 CCTV cameras.
DDoS extortion
In 2015, DDoS extortion groups such as DD4BC grew in prominence, taking aim at financial institutions. Cyber-extortionists typically begin with a low-level attack and a warning that a larger attack will be carried out if a ransom is not paid in Bitcoin. Security experts recommend that targeted websites not pay the ransom; attackers tend to settle into an extended extortion scheme once they recognize that the target is ready to pay.
HTTP POST DoS attack
First discovered in 2009, the HTTP POST attack sends a complete, legitimate HTTP POST
header, which includes a 'Content-Length' field to specify the size of the message body to follow.
However, the attacker then proceeds to send the actual message body at an extremely slow rate
(e.g. 1 byte/110 seconds). Due to the entire message being correct and complete, the target server
will attempt to obey the 'Content-Length' field in the header, and wait for the entire body of the
message to be transmitted, which can take a very long time. The attacker establishes hundreds or
even thousands of such connections, until all resources for incoming connections on the server
(the victim) are used up, hence making any further (including legitimate) connections impossible
until all data has been sent. It is notable that unlike many other (D)DoS attacks, which try to subdue the server by overloading its network or CPU, an HTTP POST attack targets the logical resources of the victim, which means the victim would still have enough network bandwidth and processing power to operate. Combined with the fact that Apache will, by default, accept requests up to 2 GB in size, this attack can be particularly powerful. HTTP POST attacks are difficult to differentiate from legitimate connections and are therefore able to bypass some protection systems. OWASP, an open-source web application security project, has released a testing tool to test the security of servers against this type of attack.
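For illustration, here is a deliberately tame Python sketch of the slow-POST idea, to be pointed only at one's own lab server; the host, path, body size, and timings are made-up values:
import socket
import time

s = socket.create_connection(("127.0.0.1", 80))  # lab server only
s.sendall(b"POST /form HTTP/1.1\r\n"
          b"Host: 127.0.0.1\r\n"
          b"Content-Type: application/x-www-form-urlencoded\r\n"
          b"Content-Length: 1000000\r\n\r\n")      # promise a huge body...
for _ in range(10):       # a real tool keeps this up over many connections
    s.sendall(b"a")       # ...then deliver it one byte at a time
    time.sleep(10)
s.close()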
Internet Control Message Protocol (ICMP) flood
A smurf attack relies on misconfigured network devices that allow packets to be sent to all
computer hosts on a particular network via the broadcast address of the network, rather than a
specific machine. The attacker will send large numbers of IP packets with the source address
faked to appear to be the address of the victim. The network's bandwidth is quickly used up,
preventing legitimate packets from getting through to their destination.
A ping flood is based on sending the victim an overwhelming number of ping packets, usually using the "ping" command from Unix-like hosts (the -t flag on Windows systems is far less capable of overwhelming a target, and the -l (size) flag does not allow a sent packet size greater than 65500 bytes on Windows). It is very simple to launch, the primary requirement being access to greater bandwidth than the victim.
Ping of death is based on sending the victim a malformed ping packet, which will lead to a
system crash on a vulnerable system.
The Black Nurse attack is an example of an attack taking advantage of the required Destination
Port Unreachable ICMP packets.
Nuke
A Nuke is an old denial-of-service attack against computer networks consisting of fragmented or
otherwise invalid ICMP packets sent to the target, achieved by using a modified ping utility to
repeatedly send this corrupt data, thus slowing down the affected computer until it comes to a
complete stop.
A specific example of a nuke attack that gained some prominence is WinNuke, which exploited a vulnerability in the NetBIOS handler in Windows 95. A string of out-of-band data was sent to TCP port 139 of the victim's machine, causing it to lock up and display a Blue Screen of Death (BSOD).
Peer-to-peer attacks
Attackers have found a way to exploit a number of bugs in peer-to-peer servers to initiate DDoS attacks. The most aggressive of these peer-to-peer DDoS attacks exploits DC++. With peer-to-peer there is no botnet, and the attacker does not have to communicate with the clients it subverts. Instead, the attacker acts as a "puppet master," instructing clients of large peer-to-peer file sharing hubs to disconnect from their peer-to-peer network and connect to the victim's website instead.
Permanent denial-of-service attacks
Permanent denial-of-service (PDoS), also known loosely as phlashing, is an attack that damages
a system so badly that it requires replacement or reinstallation of hardware. Unlike the
distributed denial-of-service attack, a PDoS attack exploits security flaws which allow remote
administration on the management interfaces of the victim's hardware, such as routers, printers,
or other networking hardware. The attacker uses these vulnerabilities to replace a
device's firmware with a modified, corrupt, or defective firmware image—a process which when
done legitimately is known as flashing. This therefore "bricks" the device, rendering it unusable
for its original purpose until it can be repaired or replaced. The PDoS is a purely hardware-targeted attack, which can be much faster and requires fewer resources than using a botnet or a root/vserver in a DDoS attack. Because of these features, and the potential and high probability of security exploits on Network Enabled Embedded Devices (NEEDs), this technique has come to the attention of numerous hacking communities. PhlashDance is a tool created by Rich Smith (an employee of Hewlett-Packard's Systems Security Lab) and used to detect and demonstrate PDoS vulnerabilities at the 2008 EUSecWest Applied Security Conference in London.
Reflected / spoofed attack
A distributed denial-of-service attack may involve sending forged requests of some type to a very large number of computers that will reply to the requests. Using Internet Protocol address spoofing, the source address is set to that of the targeted victim, which means all the replies will go to (and flood) the target. (This reflected attack form is sometimes called a DRDoS.)
ICMP Echo Request attacks (smurf attacks) can be considered one form of reflected attack, as the flooding hosts send Echo Requests to the broadcast addresses of misconfigured networks, thereby enticing hosts to send Echo Reply packets to the victim. Some early DDoS programs implemented a distributed form of this attack.
Amplification
Amplification attacks are used to magnify the bandwidth that is sent to a victim. This is typically done through publicly accessible DNS servers that are used to cause congestion on the target system using DNS response traffic. Many services can be exploited to act as reflectors, some harder to block than others. US-CERT has observed that different services yield different amplification factors, as listed below:
UDP-based Amplification Attacks
Protocol | Bandwidth Amplification Factor
NTP | 556.9
CharGen | 358.8
DNS | up to 179
QOTD | 140.3
Quake Network Protocol | 63.9
BitTorrent | 4.0 - 54.3
SSDP | 30.8
Kad | 16.3
SNMPv2 | 6.3
Steam Protocol | 5.5
NetBIOS | 3.8
DNS amplification attacks involve a newer mechanism that increases the amplification effect, using a much larger list of DNS servers than seen earlier. The process typically involves an attacker sending a DNS name lookup request to a public DNS server, spoofing the source IP address of the targeted victim. The attacker tries to request as much zone information as possible, thus amplifying the DNS record response that is sent to the targeted victim. Since the size of the request is significantly smaller than the response, the attacker is easily able to increase the amount of traffic directed at the target. [38][39] SNMP and NTP can also be exploited as reflectors in an amplification attack.
An example of an amplified DDoS attack through NTP is a command called monlist, which sends the details of the last 600 hosts that have requested the time from that NTP server back to the requester. A small request to this time server can be sent using a spoofed source IP address of some victim, which results in a response 556.9 times the size of the request being sent to the victim. This becomes amplified when using botnets that all send requests with the same spoofed IP source, directing a massive amount of data back at the victim.
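The arithmetic behind such an attack is simple enough to sketch; all of the numbers below except the 556.9 factor from the table are illustrative assumptions:
request_bytes = 234            # assumed size of one spoofed monlist request
amplification = 556.9          # NTP bandwidth amplification factor (see table)
requests_per_second = 10_000   # assumed aggregate rate from a small botnet

bits_per_second = request_bytes * amplification * requests_per_second * 8
print(f"~{bits_per_second / 1e9:.1f} Gbit/s of replies aimed at the victim")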
It is very difficult to defend against these types of attacks because the response data comes from legitimate servers. These attack requests are also sent over UDP, which does not require a connection to the server, meaning that the source IP is not verified when a request is received. To raise awareness of these vulnerabilities, campaigns have been started that are dedicated to finding amplification vectors, which has led to people fixing their resolvers or shutting the resolvers down completely.
R-U-Dead-Yet? (RUDY)
A RUDY attack targets web applications by starving the available sessions on the web server. Much like Slowloris, RUDY keeps sessions stalled using never-ending POST transmissions and sending an arbitrarily large Content-Length header value.
Slow Read attack
A Slow Read attack sends legitimate application-layer requests but reads the responses very slowly, trying to exhaust the server's connection pool. Slow reading is achieved by advertising a very small TCP Receive Window size and at the same time emptying the client's TCP receive buffer slowly, which ensures a very low data flow rate.
Sophisticated low-bandwidth Distributed Denial-of-Service Attack
A sophisticated low-bandwidth DDoS attack is a form of DoS that uses less traffic and increases its effectiveness by aiming at a weak point in the victim's system design, i.e., the attacker sends traffic consisting of complicated requests to the system. Essentially, a sophisticated DDoS attack is lower in cost due to its use of less traffic, is smaller in size and so more difficult to identify, and has the ability to hurt systems that are protected by flow control mechanisms.
(S)SYN flood
A SYN flood occurs when a host sends a flood of TCP/SYN packets, often with a forged sender address. Each of these packets is handled like a connection request, causing the server to spawn a half-open connection by sending back a TCP/SYN-ACK packet and waiting for the final ACK from the sender address. Because the sender address is forged, that response never comes. These half-open connections saturate the number of available connections the server can make, keeping it from responding to legitimate requests until after the attack ends.
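As a lab-only illustration of why this works, a single forged-source SYN can be crafted with Scapy (the library, root privileges, and the documentation-range addresses below are assumptions):
from scapy.all import IP, TCP, send

# The SYN claims to come from 203.0.113.7, so the server's SYN-ACK goes
# there and the final ACK never arrives, leaving a half-open connection.
syn = IP(src="203.0.113.7", dst="192.0.2.10") / TCP(dport=80, flags="S")
send(syn, verbose=False)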
Teardrop attacks
A teardrop attack involves sending mangled IP fragments with overlapping, oversized payloads
to the target machine. This can crash various operating systems because of a bug in
their TCP/IP fragmentation re-assembly code. Windows 3.1x, Windows 95 and Windows
NT operating systems, as well as versions of Linux prior to versions 2.0.32 and 2.1.63 are
vulnerable to this attack.
(Although in September 2009 a vulnerability in Windows Vista was referred to as a "teardrop attack", this targeted SMB2, which is a higher layer than the TCP packets that teardrop used.)
One of the fields in an IP header is the "fragment offset" field, indicating the starting position, or offset, of the data contained in a fragmented packet relative to the data in the original packet. If where one fragment ends (its offset plus its size) does not match where the next fragment starts, the fragments overlap or leave a gap. When this happens, a server vulnerable to teardrop attacks is unable to reassemble the packets, resulting in a denial-of-service condition.
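The reassembly condition is easy to state in code; a tiny sketch with illustrative byte values:
def classify(offset1, size1, offset2):
    # where fragment 1 ends versus where fragment 2 claims to start
    end1 = offset1 + size1
    if offset2 < end1:
        return "overlap"
    if offset2 > end1:
        return "gap"
    return "contiguous"

print(classify(0, 1000, 800))   # "overlap": bytes 800-999 are claimed twice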
Telephony denial-of-service (TDoS)
Voice over IP has made abusive origination of large numbers of telephone voice calls
inexpensive and readily automated while permitting call origins to be misrepresented
through caller ID spoofing.
According to the US Federal Bureau of Investigation, telephony denial-of-service (TDoS) has
appeared as part of various fraudulent schemes:
- A scammer contacts the victim's banker or broker, impersonating the victim to request a funds transfer. The banker's attempt to contact the victim for verification of the transfer fails, as the victim's telephone lines are being flooded with thousands of bogus calls, rendering the victim unreachable.
- A scammer contacts consumers with a bogus claim to collect an outstanding payday loan for thousands of dollars. When the consumer objects, the scammer retaliates by flooding the victim's employer with thousands of automated calls. In some cases, the displayed caller ID is spoofed to impersonate police or law enforcement agencies.
- A scammer contacts consumers with a bogus debt collection demand and threatens to send police; when the victim balks, the scammer floods local police numbers with calls on which caller ID is spoofed to display the victim's number. Police soon arrive at the victim's residence attempting to find the origin of the calls.
Telephony denial-of-service can exist even without Internet telephony. In the 2002 New Hampshire Senate election phone jamming scandal, telemarketers were used to flood political opponents with spurious calls to jam phone banks on election day. Widespread publication of a number can also flood it with enough calls to render it unusable, as happened to multiple subscribers of 867-5309 numbers across +1 area codes, inundated by hundreds of misdialed calls daily in response to the song 867-5309/Jenny.
TDoS differs from other telephone harassment (such as prank calls and obscene phone calls) by
the number of calls originated; by occupying lines continuously with repeated automated calls,
the victim is prevented from making or receiving both routine and emergency telephone calls.
Related exploits include SMS flooding attacks and black fax or fax loop transmission.
Defense techniques
Defensive responses to denial-of-service attacks typically involve the use of a combination of
attack detection, traffic classification and response tools, aiming to block traffic that they identify
as illegitimate and allow traffic that they identify as legitimate. A list of prevention and response
tools is provided below:
Application front end hardware
Application front-end hardware is intelligent hardware placed on the network before traffic
reaches the servers. It can be used on networks in conjunction with routers and switches.
Application front end hardware analyzes data packets as they enter the system, and then
identifies them as priority, regular, or dangerous. There are more than 25 bandwidth
management vendors.
Application level Key Completion Indicators
To address application-level DDoS attacks against cloud-based applications, approaches may be based on application-layer analysis, indicating whether an incoming traffic bulk is legitimate and thus enabling elasticity decisions to be triggered without the economic implications of a DDoS attack. These approaches mainly rely on an identified path of value inside the application, monitoring the macroscopic progress of requests along this path towards the final generation of profit, through markers denoted as Key Completion Indicators.
Blackholing and sinkholing
With blackhole routing, all the traffic to the attacked DNS or IP address is sent to a "black hole"
(null interface or a non-existent server). To be more efficient and avoid affecting network
connectivity, it can be managed by the ISP. A DNS sinkhole routes traffic to a valid IP address
which analyzes traffic and rejects bad packets. Sinkholing is not efficient for most severe attacks.
IPS based prevention
Intrusion prevention systems (IPS) are effective if the attacks have signatures associated with
them. However, the trend among the attacks is to have legitimate content but bad intent.
Intrusion-prevention systems which work on content recognition cannot block behavior-based
DoS attacks.
An ASIC based IPS may detect and block denial-of-service attacks because they have
the processing power and the granularity to analyze the attacks and act like a circuit breaker in an
automated way. A rate-based IPS (RBIPS) must analyze traffic granularly, continuously monitor the traffic pattern, and determine if there is a traffic anomaly. It must let legitimate traffic flow while blocking the DoS attack traffic.
DDS based defense
More focused on the problem than IPS, a DoS defense system (DDS) can block connection-based DoS attacks and those with legitimate content but bad intent. A DDS can also address both protocol attacks (such as teardrop and ping of death) and rate-based attacks (such as ICMP floods and SYN floods).
Firewalls
In the case of a simple attack, a firewall could have a simple rule added to deny all incoming
traffic from the attackers, based on protocols, ports or the originating IP addresses.
More complex attacks will however be hard to block with simple rules: for example, if there is
an ongoing attack on port 80 (web service), it is not possible to drop all incoming traffic on this
port because doing so will prevent the server from serving legitimate traffic. Additionally,
firewalls may be too deep in the network hierarchy, with routers being adversely affected before
the traffic gets to the firewall.
Routers
Similar to switches, routers have some rate-limiting and ACL capability. They, too, are manually
set. Most routers can be easily overwhelmed under a DoS attack. Cisco IOS has optional features
that can reduce the impact of flooding.
Switches
Most switches have some rate-limiting and ACL capability. Some switches provide automatic
and/or system-wide rate limiting, traffic shaping, delayed binding (TCP splicing), deep packet
inspection and Bogon filtering (bogus IP filtering) to detect and remediate DoS attacks through
automatic rate filtering and WAN Link failover and balancing.
Each of these schemes works only against the attack types it is designed for. SYN floods can be prevented using delayed binding or TCP splicing; content-based DoS may be prevented using deep packet inspection; attacks originating from or going to dark addresses can be prevented using bogon filtering; automatic rate filtering works as long as the rate thresholds have been set correctly; and WAN-link failover works as long as both links have a DoS/DDoS prevention mechanism.
Upstream filtering
All traffic is passed through a "cleaning center" or a "scrubbing center" via various methods such
as proxies, tunnels, digital cross connects, or even direct circuits, which separates "bad" traffic
(DDoS and also other common internet attacks) and only sends good traffic beyond to the server.
The provider needs central connectivity to the Internet to manage this kind of service unless they
happen to be located within the same facility as the "cleaning center" or "scrubbing center".
Examples of providers of this service:
- CloudFlare
- Level 3 Communications
- Radware
- Arbor Networks
- AT&T
- F5 Networks
- Incapsula
- Neustar Inc
- Akamai Technologies
- Tata Communications
- Verisign
- Verizon
Unintentional denial-of-service
An unintentional denial-of-service can occur when a system ends up denied not due to a deliberate attack by a single individual or group of individuals, but simply due to a sudden enormous spike in popularity. This can happen when an extremely popular website posts a prominent link to a second, less well-prepared site, for example as part of a news story. The result is that a significant proportion of the primary site's regular users, potentially hundreds of thousands of people, click that link within a few hours, having the same effect on the target website as a DDoS attack. A VIPDoS is the same, but specifically when the link was posted by a celebrity.
When Michael Jackson died in 2009, websites such as Google and Twitter slowed down or even
crashed. Many sites' servers thought the requests were from a virus or spyware trying to cause a
denial-of-service attack, warning users that their queries looked like "automated requests from
a computer virus or spyware application".
News sites and link sites – sites whose primary function is to provide links to interesting content
elsewhere on the Internet – are most likely to cause this phenomenon. The canonical example is
the Slashdot effect when receiving traffic from Slashdot. It is also known as "the Reddit hug of
death" and "the Digg effect".
Routers have also been known to create unintentional DoS attacks, as both D-Link and Netgear routers have overloaded NTP servers by flooding them without respecting the restrictions of client types or geographical limitations.
Similar unintentional denials-of-service can also occur via other media, e.g. when a URL is
mentioned on television. If a server is being indexed by Google or another search engine during
peak periods of activity, or does not have a lot of available bandwidth while being indexed, it can
also experience the effects of a DoS attack.
Legal action has been taken in at least one such case. In 2006, Universal Tube & Rollform Equipment Corporation sued YouTube after massive numbers of would-be youtube.com users accidentally typed the tube company's URL, utube.com. As a result, the tube company ended up having to spend large amounts of money upgrading its bandwidth.[71] The company appears to have taken advantage of the situation, with utube.com now containing ads for advertisement revenue.
In March 2014, after Malaysia Airlines Flight 370 went missing, DigitalGlobe launched
a crowdsourcing service on which users could help search for the missing jet in satellite images.
The response overwhelmed the company's servers.
An unintentional denial-of-service may also result from a prescheduled event created by the website itself, as was the case with the Census in Australia in 2016. This can happen when a server provides some service at a specific time; for example, a university website setting grades to be available at a given moment will receive many more login requests at that time than at any other.
Side effects of attacks
Backscatter
In computer network security, backscatter is a side-effect of a spoofed denial-of-service attack.
In this kind of attack, the attacker spoofs (or forges) the source address in IP packets sent to the
victim. In general, the victim machine cannot distinguish between the spoofed packets and
legitimate packets, so the victim responds to the spoofed packets as it normally would. These
response packets are known as backscatter.
If the attacker is spoofing source addresses randomly, the backscatter response packets from the
victim will be sent back to random destinations. This effect can be used by network telescopes as
indirect evidence of such attacks.
The term "backscatter analysis" refers to observing backscatter packets arriving at a statistically
significant portion of the IP address space to determine characteristics of DoS attacks and
victims.
Legality
Many jurisdictions have laws under which denial-of-service attacks are illegal.
- In the US, denial-of-service attacks may be considered a federal crime under the Computer Fraud and Abuse Act, with penalties that include years of imprisonment. The Computer Crime and Intellectual Property Section of the US Department of Justice handles cases of (D)DoS.
- In European countries, committing criminal denial-of-service attacks may, as a minimum, lead to arrest. The United Kingdom is unusual in that it specifically outlawed denial-of-service attacks and set a maximum penalty of 10 years in prison with the Police and Justice Act 2006, which amended Section 3 of the Computer Misuse Act 1990.
On January 7, 2013, Anonymous posted a petition on the whitehouse.gov site asking that DDoS be recognized as a legal form of protest similar to the Occupy protests, the claim being that the two are similar in purpose.
3. Investigating Internet Crimes
Tracing IP addresses
Internet Protocol (IP) addresses provide the basis for online communication, allowing devices to
interface and communicate with one another as they are connected to the Internet. As was noted
in Chapter 3, IP addresses provide investigators a trail to discover and follow, which hopefully
leads to the person(s) responsible for some online malfeasance. In Chapters 5 and 6, we discussed
different tools that investigators can use to examine various parts of the Internet, including
identifying the owners of domains and IP addresses. In this chapter, we are going to discuss
tracing an IP address and the investigative advantages of this process. We have covered the tools
to help us trace IP addresses in previous chapters, but here we want to walk through the process
of identifying the IP to trace and who is behind that address.
Online tools for tracing an IP address
Tracing IP addresses and domains is a fundamental skill for any Internet investigator. There are
many resources available on the Internet to assist in this process. Of primary importance are the
entities responsible for the addressing system, namely, the Internet Assigned Number Authority
(IANA) and its subordinate bodies the Regional Internet Registries (RIR). In addition to IANA
and RIR, there are a multitude of other independent online resources that can assist the
investigator in conducting basic IP identification.
IANA and RIR
Starting at the top is IANA. According to its website, it is "responsible for the global coordination of the DNS Root, IP addressing and other Internet protocol resources." What this means to the investigator is that IANA manages and assigns the top-level domains (.com, .org, .mil, .edu; see Table 3.6 for additional examples) and coordinates the IP addresses and their allocation to the RIRs. IANA established the RIRs to allocate IP addresses in geographical regions. The RIR system evolved over time, eventually dividing the world into the following five regions:
1. African Network Information Centre (AfriNIC) for Africa, http://www.afrinic.net/
2. American Registry for Internet Numbers (ARIN) for the United States, Canada, several parts
of the Caribbean region, and Antarctica, https://www.arin.net/
3. Asia-Pacific Network Information Centre (APNIC) for Asia, Australia, New Zealand, and
neighboring countries, http://www.apnic.net/
4. Latin America and Caribbean Network Information Centre (LACNIC) for Latin America and
parts of the Caribbean region, http://www.lacnic.net/en/web/lacnic/inicio
5. Réseaux IP Européens Network Coordination Centre (RIPE NCC) for Europe and Russia, http://www.ripe.net/
Each registry's site has a "whois" search function that allows the investigator to identify IP registration information. IANA and the RIRs are the official registrars and owners of the domain records and IP addresses, so an investigator wishing to verify the owner of an IP can use the RIRs to locate the records.
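The whois service itself is a plain-text protocol on TCP port 43, so a lookup can be scripted directly; a minimal Python sketch querying ARIN for the kind of record discussed here:
import socket

def whois(query, server="whois.arin.net"):
    # whois: connect to port 43, send the query plus CRLF, read the reply
    with socket.create_connection((server, 43), timeout=10) as sock:
        sock.sendall(query.encode() + b"\r\n")
        data = b""
        while chunk := sock.recv(4096):
            data += chunk
    return data.decode(errors="replace")

print(whois("97.74.74.204"))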
Internet commercial and freeware tools
There are also many Internet sites for looking up IP and domain registrations. Some provide the basic registration information, while others combine additional tools that enable the investigator to identify an IP's physical location.
DNS Stuff (http://www.dnsstuff.com/tools/tools): This website has been around for a number of years. It offers both free and paid options for assisting with IP address identification and other online information.
Network-Tools.com (http://network-tools.com): Another website with a simple user interface to assist in IP tracing.
CentralOps.net (http://centralops.net/co/): This is another website that assists with IP tracing. One of its features, Domain Dossier, performs multiple lookups on an IP address or domain.
In some circumstances, the investigator may look up a domain or an IP address with these commercial tools and find the address concealed by the commercial registrar. In these cases, the investigator may need to go to the commercial registrar's site and use the whois search located there to determine the domain registration records. Each of the mentioned websites presents domain registration information in a slightly different manner and may have additional tools useful to the investigator. Experience with each will give the investigator a better understanding of each site's features.
Geolocation of an IP address
Geolocation refers in general to identifying the real geographical location of an electronic device, whether by cell phone, IP address, WiFi, or MAC address. That said, it does not mean an IP address can be traced directly to a house: geolocation, particularly for IP addresses, is not an exact science. Unlike cell phones, which can be traced via their GPS coordinates or cell-tower triangulation, IP addresses are located using databases of address locations maintained by different companies. One of the most commonly used databases is maintained by MaxMind, Inc., which can be found at www.maxmind.com. MaxMind provides a free service to geolocate an IP address to a state or city; purchasing its services can give the Internet investigator access to a more precise location, up to and including physical addresses. There are other online services that provide geolocation of IP addresses, such as IP2Location.com. Some investigative tools, such as Vere Software's WebCase, include access to the MaxMind database as a feature of their domain lookup. On MaxMind's website you can use the demo function to identify an IP address's location. An example of a MaxMind search for the geolocation of IP address 97.74.74.204 is shown in Figure 8.1.
Along with identifying the geolocation of the address as Scottsdale, Arizona, the website provides the latitude and longitude of this location and the Internet Service Provider (ISP) hosting the IP address, in this case GoDaddy.com LLC.
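Programmatic lookups follow the same pattern. A hedged sketch using MaxMind's geoip2 Python package with a locally downloaded GeoLite2 database (the package, the database file, and its path are assumptions):
import geoip2.database

reader = geoip2.database.Reader("/opt/geoip/GeoLite2-City.mmdb")  # assumed path
response = reader.city("97.74.74.204")
print(response.city.name,
      response.subdivisions.most_specific.name,
      response.location.latitude,
      response.location.longitude)
reader.close()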
4. Tracking Emails and Investigating Email Crimes
Tracing Email and News Postings
Before heading down the messaging path and looking for tracks in the sand, let's quickly discuss
how these messaging services operate. News groups and email are cousins. Descending from
original siblings on pre-Internet Unix systems, they have continued to evolve in parallel, with
much sharing of genetic material. Both services have the following attributes:
- Simple Internet application protocols that use text commands
- Store-and-forward architecture allowing messages to be shuttled through a series of intermediate systems
- Message body composed entirely of printable characters (7-bit, not 8-bit)
- Human-readable message headers indicating the path between sender and receiver
You'll need the assistance of systems administrators, perhaps on every system the message
transited, and they won't be able to help you unless they have logging information on their
messaging hosts. If the originator wants to cover his or her tracks, determining the real sender of
either bogus news postings or suspicious email can be challenging. News is probably a bit easier,
but email is more common today, so let's start with it.
Tracking Email
An email program such as Outlook, Notes, or Eudora is considered a client application, which
means that it is network-enabled software that is intended to interact with a server. In the case of
email, it is normal to interact with two different servers: one for outgoing and one for incoming
mail. When you want to read email, your client connects to a mail server using one of three
different protocols:
- Post Office Protocol (POP, not to be confused with Point of Presence)
- Internet Mail Access Protocol (IMAP)
- Microsoft's Mail API (MAPI)
For the purposes of investigation, the protocol used to gather incoming email from a server is of
minimal interest. The most important thing to understand about these different protocols is that
their use affects where mail messages are stored (as depicted in Table 2-1). All incoming mail is
initially stored on a mail server, sorted by that mail server into individual mailboxes for access
by the addressee. POP users have the choice of either downloading a copy of their mail from
their server, or downloading it and subsequently allowing it to be automatically deleted. Email
that has been read or stored for future use is stored on the computer that is running the email
client. IMAP and MAPI users have the option of leaving all their mail on their mail server.
There are two major advantages to leaving email stored on the server. First, all of the stored
email for an entire organization can be easily backed up from a central location. Second, it
provides users the flexibility of accessing their mailboxes from multiple client machines: office,
home, through the Web, and so forth. The implication for the investigator is that POP users always keep their email archives on their local machine: copies of outgoing mail, mail stored in folders for future reference, and deleted mail that hasn't been purged are all stored on the individual's workstation. Organizations that provide IMAP or MAPI service, or a proprietary service like Lotus Notes, probably store email on the server, although individual users may or may not have the option of storing their email locally.
Table 2-1 Internet Email Protocols

Protocol: Post Office Protocol (POP)
Service: Incoming message store only
Relevance to investigation: Must access the workstation in order to trace mail.

Protocol: IMAP (open); Microsoft MAPI (proprietary); Lotus Notes (proprietary)
Service: Storage of all messages on the server (optional)
Relevance to investigation: Copies of both incoming and outgoing messages may be stored on the
server or the workstation (and on server/workstation backup tapes).

Protocol: Web-based (HTTP)
Service: Send and receive through the browser
Relevance to investigation: Incoming and outgoing messages are stored on the server, possibly
with optional manual download to the workstation. Facilitates identity spoofing.
Outgoing email uses a completely different protocol called Simple Mail Transfer Protocol
(SMTP). Unlike the protocols used to retrieve mail from a post office, SMTP doesn't require any
authentication—it is much like tossing a message into a mail slot at the post office. Servers that
accept mail and relay it to other mail servers are sometimes called mail transfer agents (MTAs),
and they also use SMTP. Your ISP will give you the name of the mail server that you should use
for outgoing mail, often something along the lines of smtp.bobsisp.com. The SMTP server that
the ISP uses relays messages to their destinations. Either the destination server recognizes a
message as being addressed to one of its local users, and places it into the appropriate mailbox
for that user, or based on a set of rules, it relays the message on further.
SMTP is a very simple protocol. Like many Internet protocols, such as HTTP, it consists of a
few simple text-based commands or keywords. One of the first tricks an Internet hacker learns is
how to manually send an email message by telneting to port 25, the SMTP port. Not only is it a
fun trick to become a human email forwarder, but it also enables you to put any information you
want into the headers of the email message you are sending—including fake origination and
return addresses. Actually, you needn't do this manually if you want to fake email. When you
configure your personal email client, you tell it what return address to put on your outgoing mail.
You can always change that configuration, but if you want to send only a single message coming
from [email protected], it is much easier to use one of several GUI-based hacker tools that
enable you to quickly send a message with your choice of return addresses.
SMTP mail has no strong authentication and without using PGP or S/MIME (Secure
Multipurpose Internet Mail Extensions) to add a digital signature, the level of trust associated
with a mail message is pretty low. The following steps (our input is in boldface) show how
incredibly easy it is to fake the return address in an Internet mail message:
[root@njektd /root]# telnet localhost 25
Trying 127.0.0.1...
Connected to njektd.com.
Escape character is '^]'.
220 njektd.com ESMTP Sendmail 8.9.3/8.9.3; Tue, 5 Dec 2000 17:37:02 –
0500
helo
250 OK
mail from: [email protected]
250 [email protected] Sender ok
rcpt to: [email protected]
250 [email protected] Recipient ok
data
354
Haha-this is a spoofed mail message!
.
250 RAA25927 Message accepted for delivery
quit
221 njektd.com closing connection
Connection closed by foreign host.
The results of this spoofed mail message are shown in Figure 2-4. The test.com domain is just
one we made up for demonstration purposes, but the email client reports whatever information it
was provided.
Figure 2-4 Reading the spoofed mail message
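The same dialogue is easy to script. The following minimal Python sketch uses the standard
smtplib module to produce essentially the message shown above; the host, port, and addresses are
placeholders, and it should only ever be pointed at a test server you control:

import smtplib

# Connect to a lab SMTP server (placeholder host; 25 is the standard SMTP port).
server = smtplib.SMTP("localhost", 25)
server.helo()
# SMTP performs no authentication: the envelope sender is whatever we claim.
server.sendmail(
    "forged_sender@test.com",                      # spoofed envelope sender
    ["recipient@test.com"],                        # recipient
    "From: forged_sender@test.com\r\n"
    "Subject: spoof demo\r\n"
    "\r\n"
    "Haha - this is a spoofed mail message!\r\n",
)
server.quit()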
As we'll discuss later in this chapter, some identification information is associated with the mail
header that is a bit harder to spoof. As recorded in the following mail header, we were indeed
logged in as 208.164.95.173:
Received: from dhcp-95–173.ins.com ([208.164.95.173]) by
dtctxexchims01.ins.com with SMTP (Microsoft Exchange Internet Mail
Service Version 5.5.2653.13)
id YM4CM2VP; Sun, 10 Dec 2000 08:46:30 -0600
From: [email protected]
Date: Sun, 10 Dec 2000 09:46:46 -0500 (EST)
Message-Id: [email protected]
When relaying a message from another relay host, current versions of SMTP also keep track of
the IP address of the system connecting to them, and they add that IP address to the header of the
message. If you want to show that a mail message originated from a specific computer, the best
way to do so is to investigate the entire path that appears in the extended header information.
Although SMTP servers won't perform any authentication when receiving mail from the client,
most Internet mail servers are configured to accept mail for relay only when that mail originates
from a specific range of IP addresses. If an ISP does not place any limits on which systems can
connect to the ISP's mail server, allowing it to be used as a free relay station, it won't take
spammers long to find it. To reduce the amount of spam mail that originates with their mail
servers, most ISPs allow relay connections only from IP addresses within the range that they
assign to their own subscribers. Authentication based just on IP address is very weak, but for the
purposes of preventing unauthorized use of SMTP servers, it is adequate.
It should come as no surprise that Web-based email is not only available, but is becoming
increasingly popular. The Internet browser is rapidly becoming the universal front end. Web-
based email enables users to access all of their email—both incoming and saved messages—
through a Web browser. Not only does this free the user from installing and configuring an email
client on his or her workstation, but it also means that the user can easily access email from any
workstation virtually anywhere in the world. Undoubtedly, many people are accessing free Web-based email from work for their personal use.
It also shouldn't come as a surprise that free email services are being used by some people to
hide their identities. For the novice computer criminal, these services appear to be an easy way to
hide their identity, and by adding at least one more server and involving another service
provider, it certainly does complicate the association of a mail account with a specific person.
The only way to find out who the ISP thinks is using a specific email address is to obtain a
subpoena for the account information. If you are working with law enforcement agencies, they
can obtain a subpoena to facilitate their investigation, or you can obtain a subpoena from a
lawsuit (for more information, see Chapter 12). Fortunately, some providers of free email service
are including the originator's IP address in the header information. Previously, you would have to
subpoena the email provider and then the originating ISP to determine the originator. We
recommend issuing a subpoena for the email logs from the email provider, but at the same time,
you can also subpoena the originating ISP.
Reading the Mail Trail
When you are investigating a case involving email, you have to decipher email headers. If you
have never viewed a header before, it might first appear to be gibberish, but once you get the
hang of it and understand how SMTP works, it makes sense. The first annoyance you encounter
is that most client email programs hide the header information from the user. Depending on the
mail client you're using, you may have to do a little bit of digging to get to the header. In most
programs, if you click File|Properties, an option to view the header is displayed. If your
particular program provides a different way to access header information, consult the Help menu
and documentation or try the company's Web site for instructions.
Most users don't want to be bothered with deciphering email headers, which encourages the
email software vendors to make access to it as obscure as possible. Let's look at Microsoft
Outlook Express. It is a popular email program for many reasons, including the fact that it comes
free with Internet Explorer and recent versions of Windows.
As shown in Figure 2-5, the header information is available in Outlook Express by clicking on
File and then Properties, which displays the dialog box that looks like that shown in Figure 2-6.
Figure 2-5 Outlook Express File menu
Figure 2-6 Outlook Express Message Properties window
Figure 2-7 Viewing the message source in Outlook Express
The General Tab for the properties in Outlook Express displays some basics about the message
such as the subject of the message, the apparent sender, and the date and time sent and received.
Click on the Details tab to display the information like that shown in Figure 2-6. By examining
the headers of this message, it is clear that both the from address ([email protected]) and the
Reply-To address are fake addresses (another_test@test.org). This is a real message that we sent
from the Internet, but before sending the message, we first changed the From address to "HTCIA
NE Chapter." The From address is completely arbitrary—it contains whatever the sender
configures into their email program.
The most important tracks are found at the top of the message. In this case, the first line shows
the computer that the message was originally sent from. While the name of the PC, "mypc," can
easily be spoofed, the IP address that mypc was assigned when we logged on to the ISP is much
more difficult to spoof. While it is not impossible to spoof an IP address, we are not aware of a
case in which one has been spoofed to counterfeit email. The practical details involved in
spoofing an IP address make it virtually impossible in an email transaction, which involves
several round trips between the SMTP server and the connecting system. (Do be aware, though,
that the actual sender of the message could have cracked the system from which it was sent, and
logged on as somebody else.) In this case, the email was sent from a computer in the same
domain, monmouth.com, as the SMTP server that relayed the mail, shell.monmouth.com. Do a
whois on the IP address and see if you get something that matches the purported domains of both
the originating client and the relay server. Then follow up using the Dig w/AXFR advanced
query, as shown in Figure 2-8, using NetscanTools.
Figure 2-8 Using NetScanTools to investigate an IP address
In contrast to Outlook Express, Microsoft Outlook (included with Microsoft Office) places the
full email header information in an obscure position. As shown in Figure 2-9, to view the header
information, you click on View and then Options. Clicking on Message Header seems to be a
more obvious way to access header information—a mistake that we make all the time—but all
that does is hide the To, From, and Subject lines from the message itself. It does not show you
the detailed header information that you need to track an intruder. By clicking on Options, you
access the Message Options window shown in Figure 2-10.
Figure 2-9 Outlook View menu
Figure 2-10 Viewing a message header in Microsoft Outlook
You've probably already noticed "Joe Anonymous" in the Have replies sent to field. We faked
this deliberately to illustrate how you cannot believe everything you read. The only way to
extract this information from this window is to select it all (hint: try Control-A), copy it, and then
paste it into a text document, which we've done in the following:
Received: from hoemlsrv.firewall.lucent.com ([192.11.226.161]) by
nj7460exch002h.wins.lucent.com with SMTP (Microsoft Exchange
Internet Mail Service Version 5.5.2448.0) id W4VCF23A; Sat, 20
Nov 1999 21:19:10 –0500
Received: from hoemlsrv.firewall.lucent.com (localhost [127.0.0.1]) by
hoemlsrv.firewall.lucent.com (Pro-8.9.3/8.9.3) with ESMTP id
VAA06660 for <[email protected]>; Sat, 20 Nov 1999
21:19:10 –0500 (EST)
Received: from shell.monmouth.com (shell.monmouth.com [205.231.236.9])
by hoemlsrv.firewall.lucent.com (Pro-8.9.3/8.9.3) with ESMTP id
VAA06652 for <[email protected]>; Sat, 20 Nov 1999 21:19:09
–0500 (EST)
Received: from mypc (bg-tc-ppp961.monmouth.com [209.191.51.149]) by
shell.monmouth.com (8.9.3/8.9.3) with SMTP id VAA01448 for
<[email protected]>; Sat, 20 Nov 1999 21:17:06 –0500 (EST)
Message-ID: <001401bf33c6$b7f214e0$9533bfd1@mypc>
Reply-To: "Joe Anonymous" <[email protected]>
From: "Joe Anonymous" <[email protected]>
To: <[email protected]>
Subject: test from outlook express
Date: Sat, 20 Nov 1999 21:18:35 –0500
MIME-Version: 1.0
Content-Type: text/plain;
charset="iso-8859–1"
Content-Transfer-Encoding: 7bit
X-Priority: 3
X-MSMail-Priority: Normal
X-Mailer: Microsoft Outlook Express 4.72.3155.0
X-MimeOLE: Produced By Microsoft MimeOLE V4.72.3155.0
This header is longer than it was in our first example. This time, the message was relayed
through four different servers. Each time an SMTP server accepts a message, it places a new
Received header at the top of the message before forwarding it on to another server or a user
mailbox. In the first message, we intentionally sent the message from our ISP account to our ISP
account. The second message originated externally and then was relayed through the Lucent
firewall to the mail server that hosts our mailbox. But even with the extra headers, it is still
apparent that the original message was received from "mypc," and our address at the time was
bg-tc-ppp961.monmouth.com [209.191.51.149]. The lines in the header tell a story about where
the email message has been, when it was there, and when it was delivered to its destination. It
may be a contrived story, but it is still a story. Virtually any of the headers, with the exception of
the topmost one, could be bogus; it is up to you to verify each one of them and determine the
actual history of the message.
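When a header contains this many hops, it can help to pull out just the Received lines in order
before verifying them. A minimal sketch using Python's standard email package (the file name is
an assumption; save the raw message source to disk first):

from email import policy
from email.parser import BytesParser

# Parse a raw message saved from the mail client (placeholder file name).
with open("raw_message.eml", "rb") as f:
    msg = BytesParser(policy=policy.default).parse(f)

# get_all() returns Received headers top-down: the topmost was added by the
# last relay and is the most trustworthy; each lower one is easier to forge.
for i, hop in enumerate(msg.get_all("Received", []), start=1):
    print(f"hop {i}: {hop}")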
The last example that we will look at is from Eudora, another popular email client. Eudora hides
the header information by default, like the rest of the client programs, but as you can see from
the Eudora Lite example in Figure 2-11, the full header is only a mouse-click away. A helpful
piece of information is X-Sender, which shows the mail server and the account the message was
sent from. One of the quirky characteristics of Eudora is that the icon is labeled "Blah, Blah,
Blah." Strange label, but it provides the information we need. When you click on the Blah
button, your email message changes from that shown in Figure 2-11 to something that looks like
that shown in Figure 2-12.
Figure 2-11 Viewing the X-Sender header on Eudora Lite
Figure 2-12 Viewing mail headers with Eudora
When you are conducting an investigation involving email, even if the computer's name is
bogus, you have the IP address that the user was assigned. If you trust the ISP, or the company
that the address resolves to, you can ask for their assistance in identifying your suspect. They
probably won't disclose the information to you immediately, but by noting that IP address, along
with the exact date and time that the message was sent, you can ask the ISP to save the logs until
you have a chance to get a court order. As long as the logs are still available, the ISP or other
organization should be able to identify the user that was assigned that IP address at the time the
message was sent.
Look at the two headers the arrows are pointing to in Figure 2-13. Compare the domain name in
the Received header, "monmouth.com," to the domain in the From header, "test.org." Because
they do not match, we can assume that the user configured his or her email incorrectly or that the
user is trying to hide his or her identity. A message can be injected anywhere in the chain of
Received headers—you can be sure that only the topmost one is accurate. Do an nslookup
against each domain—especially the purportedly original domain—and see if they exist. Do a
whois against each of those domains to find out who the administrator is and contact that person.
Keep in mind that if the administrator is the originator of a phony or illegal message, he or she
probably won't be inclined to cooperate.
Figure 2-13 Extended email header with discrepancies in originator fields
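These lookups can be scripted as well. A small sketch using only Python's standard socket
module (the domains and IP address below are the ones from our example and may resolve
differently, or not at all, today); whois has no standard-library client, so that step stays with
the command-line tool:

import socket

# Forward-resolve each domain appearing in the headers.
for domain in ("monmouth.com", "test.org"):
    try:
        print(domain, "->", socket.gethostbyname(domain))
    except socket.gaierror:
        print(domain, "-> does not resolve (possibly fake)")

# Reverse-resolve the client IP recorded in the topmost Received header.
try:
    print(socket.gethostbyaddr("209.191.51.149")[0])
except socket.herror:
    print("no reverse DNS entry")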
When you are investigating a case on behalf of a victim, but you can't visit the victim or
otherwise obtain the original message on your own, it is possible for the victim to email you a
copy of it. You must give the victim very specific instructions on the appropriate way to send the
mail to you—especially if the victim usually deletes messages right after receiving them. Ask the
victim to send the message as an attachment and not to forward the message. Forwarding
replaces the suspect's information with your victim's information. You might want to ask your
victim not to delete the original message until he or she hears from you.
5. Cell Phone Forensics
Mobile device forensics is a branch of digital forensics relating to recovery of digital
evidence or data from a mobile device under forensically sound conditions. The phrase mobile
device usually refers to mobile phones; however, it can also relate to any digital device that has
both internal memory and communication ability, including PDA devices, GPS devices
and tablet computers.
The use of phones in crime was widely recognized for some years, but the forensic study of
mobile devices is a relatively new field, dating from the late 1990s and early 2000s. A
proliferation of phones (particularly smartphones) and other digital devices on the consumer
market caused a demand for forensic examination of the devices, which could not be met by
existing computer forensics techniques.
Mobile devices can be used to save several types of personal information such as contacts,
photos, calendars and notes, SMS and MMS messages. Smartphones may additionally contain
video, email, web browsing information, location information, and social networking messages
and contacts.
There is a growing need for mobile forensics due to several reasons, some of the most prominent
being:
 Use of mobile phones to store and transmit personal and corporate information
 Use of mobile phones in online transactions
 Law enforcement interest in criminals' use of mobile phone devices
Mobile device forensics can be particularly challenging on a number of levels: evidential and
technical challenges exist. For example, cell site analysis following the use of a mobile phone is
not an exact science. Consequently, whilst it is possible to determine roughly the cell site zone
from which a call was made or received, it is not yet possible to say with any degree of certainty
that a mobile phone call emanated from a specific location, e.g. a residential address.
 To remain competitive, original equipment manufacturers frequently change mobile phone
form factors, operating system file structures, data storage, services, peripherals, and even pin
connectors and cables. As a result, forensic examiners must use a different forensic process
compared to computer forensics.
 Storage capacity continues to grow thanks to demand for more powerful "minicomputer"
type devices.
 Not only the types of data but also the way mobile devices are used constantly evolve.
 Hibernation behavior, in which processes are suspended when the device is powered off or
idle but at the same time remain active.
As a result of these challenges, a wide variety of tools exist to extract evidence from mobile
devices; no one tool or method can acquire all the evidence from all devices. It is therefore
recommended that forensic examiners, especially those wishing to qualify as expert witnesses in
court, undergo extensive training in order to understand how each tool and method acquires
evidence; how it maintains standards for forensic soundness; and how it meets legal requirements
such as the Daubert standard or Frye standard.
As a field of study, forensic examination of mobile devices dates from the late 1990s and early
2000s. The role of mobile phones in crime had long been recognized by law enforcement. With
the increased availability of such devices on the consumer market and the wider array of
communication platforms they support (e.g. email, web browsing) demand for forensic
examination grew.
Early efforts to examine mobile devices used similar techniques to the first computer forensics
investigations: analyzing phone contents directly via the screen and photographing important
content. However, this proved to be a time-consuming process, and as the number of mobile
devices began to increase, investigators called for more efficient means of extracting data.
Enterprising mobile forensic examiners sometimes used cell phone or PDA synchronization
software to "back up" device data to a forensic computer for imaging, or sometimes, simply
performed computer forensics on the hard drive of a suspect computer where data had been
synchronized. However, this type of software could write to the phone as well as read it, and
could not retrieve deleted data. Some forensic examiners found that they could retrieve even
deleted data using "flasher" or "twister" boxes, tools developed by OEMs to "flash" a phone's
memory for debugging or updating. However, flasher boxes are invasive and can change data;
can be complicated to use; and, because they are not developed as forensic tools, perform neither
hash verifications nor (in most cases) audit trails. For physical forensic examinations, therefore,
better alternatives remained necessary.
To meet these demands, commercial tools appeared which allowed examiners to recover phone
memory with minimal disruption and analyse it separately. Over time these commercial
techniques have developed further and the recovery of deleted data from proprietary mobile
devices has become possible with some specialist tools. Moreover, commercial tools have even
automated much of the extraction process, rendering it possible even for minimally trained first
responders—who currently are much more likely to encounter suspects with mobile devices in
their possession, compared to computers—to perform basic extractions for triage and data
preview purposes.
Professional applications
Mobile device forensics is best known for its application to law enforcement investigations, but
it is also useful for military intelligence, corporate investigations, private investigations, criminal
and civil defense, and electronic discovery.
Types of evidence
As mobile device technology advances, the amount and types of data that can be found on
a mobile device is constantly increasing. Evidence that can be potentially recovered from a
mobile phone may come from several different sources, including handset memory, SIM card,
and attached memory cards such as SD cards.
Traditionally mobile phone forensics has been associated with
recovering SMS and MMS messaging, as well as call logs, contact lists and
phone IMEI/ESN information. However, newer generations of smartphones also include wider
varieties of information: web browsing history, wireless network settings, geolocation information
(including geotags contained within image metadata), e-mail and other forms of rich internet
media, including important data—such as social networking service posts and contacts—now
retained on smartphone 'apps'.
Internal memory
Nowadays mobile devices mostly use flash memory of the NAND or NOR type.
External memory
External memory devices are SIM cards, SD cards (commonly found within GPS devices as well
as mobile phones), MMC cards, CF cards, and the Memory Stick.
Service provider logs
Although not technically part of mobile device forensics, the call detail records (and
occasionally, text messages) from wireless carriers often serve as "back up" evidence obtained
after the mobile phone has been seized. These are useful when the call history and/or text
messages have been deleted from the phone, or when location-based services are not turned on.
Call detail records and cell site (tower) dumps can show the phone owner's location, and whether
they were stationary or moving (i.e., whether the phone's signal bounced off the same side of a
single tower, or different sides of multiple towers along a particular path of travel). Carrier data
and device data together can be used to corroborate information from other sources, for
instance, video surveillance footage or eyewitness accounts; or to determine the general location
where a non-geotagged image or video was taken.
The European Union requires its member countries to retain certain telecommunications data for
use in investigations. This includes data on calls made and received. The location of a mobile
phone can be determined and this geographical data must also be retained. In the United States,
however, no such requirement exists, and no standards govern how long carriers should retain
data or even what they must retain. For example, text messages may be retained only for a week
or two, while call logs may be retained anywhere from a few weeks to several months. To reduce
the risk of evidence being lost, law enforcement agents must submit a preservation letter to the
carrier, which they then must back up with a search warrant.
Forensic process
Main article: digital forensic process
The forensics process for mobile devices broadly matches other branches of digital forensics;
however, some particular concerns apply. Generally, the process can be broken down into three
main categories: seizure, acquisition, and examination/analysis. Other aspects of the computer
forensic process, such as intake, validation, documentation/reporting, and archiving still apply.
Seizure
Seizing mobile devices is covered by the same legal considerations as other digital media.
Mobiles will often be recovered switched on; as the aim of seizure is to preserve evidence, the
device will often be transported in the same state to avoid a shutdown, which would change
files. In addition, the investigator or first responder would risk user lock activation.
However, leaving the phone on carries another risk: the device can still make a network/cellular
connection. This may bring in new data, overwriting evidence. To prevent a connection, mobile
devices will often be transported and examined from within a Faraday cage (or bag). Even so,
there are two disadvantages to this method. First, it renders the device unusable, as its touch
screen or keypad cannot be used. Second, a device's search for a network connection will drain
its battery more quickly. While devices and their batteries can often be recharged, again, the
investigator risks that the phone's user lock will have activated. Therefore, network isolation is
advisable either through placing the device in Airplane Mode, or cloning its SIM card (a
technique which can also be useful when the device is missing its SIM card entirely).
Acquisition
iPhone in an RF shield bag
RTL Aceso, a mobile device acquisition unit
The second step in the forensic process is acquisition, in this case usually referring to retrieval of
material from a device (as compared to the bit-copy imaging used in computer forensics).
Due to the proprietary nature of mobiles it is often not possible to acquire data with the device powered
down; most mobile device acquisition is performed live. With more advanced smartphones using
advanced memory management, connecting it to a recharger and putting it into a faraday cage
may not be good practice. The mobile device would recognize the network disconnection and
therefore would change its status information, which can trigger the memory manager to write
data.
Most acquisition tools for mobile devices are commercial in nature and consist of a hardware and
software component, often automated.
Examination and analysis
As an increasing number of mobile devices use high-level file systems, similar to the file systems
of computers, methods and tools can be taken over from hard disk forensics or only need slight
changes.
The FAT file system is generally used on NAND memory. A difference is the block size, which
is larger than the 512 bytes used by hard disks and depends on the memory type, e.g., NOR
blocks of 64, 128, or 256 KB and NAND blocks of 16, 128, 256, or 512 KB.
Different software tools can extract the data from the memory image. One could use specialized
and automated forensic software products or generic file viewers such as any hex editor to search
for characteristics of file headers. The advantage of the hex editor is the deeper insight into the
memory management, but working with a hex editor means a lot of handwork and file system as
well as file header knowledge. In contrast, specialized forensic software simplifies the search and
extracts the data but may not find everything. AccessData, The Sleuth Kit, and EnCase, to
mention only some, are forensic software products that can analyze memory images. Since there
is no tool that extracts all possible information, it is advisable to use two or more tools for
examination. There is currently (February 2010) no software solution to get all evidence from
flash memories.
Data acquisition types
Mobile device data extraction can be classified according to a continuum, along which methods
become more technical and “forensically sound,” tools become more expensive, analysis takes
longer, examiners need more training, and some methods can even become more invasive.[15]
Manual acquisition
The examiner utilizes the user interface to investigate the content of the phone's memory.
Therefore, the device is used as normal, with the examiner taking pictures of each screen's
contents. This method has an advantage in that the operating system makes it unnecessary to use
specialized tools or equipment to transform raw data into human interpretable information. In
practice this method is applied to cell phones, PDAs and navigation systems.[16] Disadvantages
are that only data visible to the operating system can be recovered; that all data are only
available in the form of pictures; and that the process itself is time-consuming.
Logical acquisition
Logical acquisition implies a bit-by-bit copy of logical storage objects (e.g., directories and files)
that reside on a logical storage (e.g., a file system partition). Logical acquisition has the
advantage that system data structures are easier for a tool to extract and organize. Logical
extraction acquires information from the device using the original equipment manufacturer
application programming interface for synchronizing the phone's contents with a personal
computer. A logical extraction is generally easier to work with as it does not produce a
large binary blob. However, a skilled forensic examiner will be able to extract far more
information from a physical extraction.
File system acquisition
Logical extraction usually does not produce any deleted information, because it has normally been
removed from the phone's file system. However, in some cases—particularly with platforms
built on SQLite, such as iOS and Android—the phone may keep a database file of information
which does not overwrite the information but simply marks it as deleted and available for later
overwriting. In such cases, if the device allows file system access through its synchronization
interface, it is possible to recover deleted information. File system extraction is useful for
understanding the file structure, web browsing history, or app usage, as well as providing the
examiner with the ability to perform an analysis with traditional computer forensic tools.
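As a simplified illustration of the mark-as-deleted pattern described above, consider a message
store kept in SQLite. The file name, table, and column names here are invented for the sketch;
real platforms each use their own schema:

import sqlite3

# Open a database recovered through a file system extraction (placeholder path).
con = sqlite3.connect("sms.db")

# Rows flagged as deleted remain physically present until overwritten,
# so a plain query can still return them (hypothetical schema).
for address, body in con.execute(
    "SELECT address, body FROM messages WHERE deleted = 1"
):
    print(address, body)
con.close()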
Physical acquisition
Physical acquisition implies a bit-for-bit copy of an entire physical store (e.g. flash memory);
therefore, it is the method most similar to the examination of a personal computer. A physical
acquisition has the advantage of allowing deleted files and data remnants to be examined.
Physical extraction acquires information from the device by direct access to the flash memories.
Generally this is harder to achieve because the device original equipment manufacturer needs to
secure against arbitrary reading of memory; therefore, a device may be locked to a certain
operator. To get around this security, mobile forensics tool vendors often develop their own boot
loaders, enabling the forensic tool to access the memory (and often, also to bypass user
passcodes or pattern locks).
Generally, the physical extraction is split into two steps, the dumping phase and the decoding
phase.
Brute force acquisition
Brute force acquisition can be performed by 3rd party passcode brute force tools that send a
series of passcodes / passwords to the mobile device. This is a time consuming method, but
effective nonetheless. Brute forcing tools are connected to the device and will physically send
codes on iOS devices starting from 0000 to 9999 in sequence until the correct code is successfully
entered. Once the code entry has been successful, full access to the device is given and data
extraction can commence.
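The sequential search itself is trivial to express. In the sketch below, send_code() stands in
for whatever hardware interface a brute-forcing tool uses to key a candidate passcode into the
device; it is a placeholder, not a real API:

# Hypothetical interface: send_code(code) keys the candidate into the
# device and returns True once the device unlocks.
def try_all_four_digit_codes(send_code):
    for n in range(10000):            # 0000 through 9999, in sequence
        code = f"{n:04d}"
        if send_code(code):
            return code               # correct passcode found
    return None                       # search space exhausted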
Tools
Main article: List of digital forensics tools § Mobile device forensics
Early investigations consisted of live manual analysis of mobile devices, with examiners
photographing or writing down useful material for use as evidence. Without forensic
photography equipment such as Fernico ZRT, EDEC Eclipse, or Project-a-Phone, this had the
disadvantage of risking the modification of the device content, as well as leaving many parts of
the proprietary operating system inaccessible.
In recent years a number of hardware/software tools have emerged to recover logical and
physical evidence from mobile devices. Most tools consist of both hardware and software
portions. The hardware includes a number of cables to connect the phone to the acquisition
machine; the software exists to extract the evidence and, occasionally even to analyse it.
Most recently, mobile device forensic tools have been developed for the field. This is in response
both to military units' demand for fast and accurate anti-terrorism intelligence, and to law
enforcement demand for forensic previewing capabilities at a crime scene, search warrant
execution, or exigent circumstances. Such mobile forensic tools are often ruggedized for harsh
environments (e.g. the battlefield) and rough treatment (e.g. being dropped or submerged in
water).
Generally, because it is impossible for any one tool to capture all evidence from all mobile
devices, mobile forensic professionals recommend that examiners establish entire toolkits
consisting of a mix of commercial, open source, broad support, and narrow support forensic
tools, together with accessories such as battery chargers, Faraday bags or other signal disruption
equipment, and so forth.
Some current tools include Cellebrite UFED, Susteen Secure View and Micro Systemation XRY.
Some tools have additionally been developed to address increasing criminal usage of phones
manufactured with Chinese chipsets, which include MediaTek (MTK), Spreadtrum and MStar.
Such tools include Cellebrite's CHINEX, and XRY PinPoint.
Open source
Most open source mobile forensics tools are platform-specific and geared toward smartphone
analysis. Though not originally designed to be a forensics tool, BitPim has been widely used on
CDMA phones as well as LG VX4400/VX6000 and many Sanyo Sprint cell phones.[21]
Physical tools
Forensic desoldering
Commonly referred to as a "Chip-Off" technique within the industry, the last and most intrusive
method to get a memory image is to desolder the non-volatile memory chip and connect it to a
memory chip reader. This method contains the potential danger of total data destruction: it is
possible to destroy the chip and its content because of the heat required during desoldering.
Before the invention of the BGA technology it was possible to attach probes to the pins of the
memory chip and to recover the memory through these probes. The BGA technique bonds the
chips directly onto the PCB through molten solder balls, such that it is no longer possible to
attach probes.
Desoldering the chips is done carefully and slowly, so that the heat does not destroy the chip or
data. Before the chip is desoldered the PCB is baked in an oven to eliminate remaining water.
This prevents the so-called popcorn effect, in which the remaining water would blow up the chip
package during desoldering.
There are mainly three methods to melt the solder: hot air, infrared light, and steam-phasing. The
infrared light technology works with a focused infrared light beam onto a specific integrated
circuit and is used for small chips. The hot air and steam methods cannot focus as much as the
infrared technique.
Chip re-balling
After desoldering the chip a re-balling process cleans the chip and adds new tin balls to the chip.
Re-balling can be done in two different ways.
 The first is to use a stencil. The stencil is chip-dependent and must fit exactly. Then the tin
solder is put on the stencil. After the tin has cooled, the stencil is removed and, if necessary, a
second cleaning step is done.
 The second method is laser re-balling. Here the stencil is programmed into the re-balling
unit. A bond head (which looks like a tube/needle) is automatically loaded with one tin ball from
a solder ball singulation tank. The ball is then heated by a laser, such that the tin-solder ball
becomes fluid and flows onto the cleaned chip. Instantly after melting the ball, the laser turns
off and a new ball falls into the bond head. While reloading, the bond head of the re-balling
unit changes position to the next pin.
A third method makes the entire re-balling process unnecessary. The chip is connected to an
adapter with Y-shaped springs or spring-loaded pogo pins. The Y-shaped springs need to have a
ball on the pin to establish an electric connection, but the pogo pins can be used directly on the
pads on the chip without the balls.
The advantage of forensic desoldering is that the device does not need to be functional and that a
copy without any changes to the original data can be made. The disadvantage is that the re-balling devices are expensive, so this process is very costly and there are some risks of total data
loss. Hence, forensic desoldering should only be done by experienced laboratories.
JTAG
Existing standardized interfaces for reading data are built into several mobile devices, e.g., to get
position data from GPS equipment (NMEA) or to get deceleration information from airbag units.
Not all mobile devices provide such a standardized interface nor does there exist a standard
interface for all mobile devices, but all manufacturers have one problem in common. The
miniaturizing of device parts raises the question of how to automatically test the functionality and
quality of the soldered integrated components. For this problem an industry group, the Joint Test
Action Group (JTAG), developed a test technology called boundary scan.
Despite the standardization there are four tasks before the JTAG device interface can be used to
recover the memory. To find the correct bits in the boundary scan register one must know which
processor and memory circuits are used and how they are connected to the system bus. When not
accessible from outside one must find the test points for the JTAG interface on the printed circuit
board and determine which test point is used for which signal. The JTAG port is not always
soldered with connectors, such that it is sometimes necessary to open the device and re-solder the
access port. The protocol for reading the memory must be known and finally the correct voltage
must be determined to prevent damage to the circuit.
The boundary scan produces a complete forensic image of the volatile and non-volatile memory.
The risk of data change is minimized and the memory chip doesn't have to be desoldered.
Generating the image can be slow and not all mobile devices are JTAG enabled. Also, it can be
difficult to find the test access port.
Command line tools
System commands
Mobile devices do not provide the possibility to run or boot from a CD, to connect to a network
share, or to attach another device with clean tools. Therefore, system commands could be the
only way to save the volatile memory of a mobile device. Given the risk of modified system
commands, it must be weighed whether the volatile memory is really important. A similar
problem arises when no
network connection is available and no secondary memory can be connected to a mobile device
because the volatile memory image must be saved on the internal non-volatile memory, where
the user data is stored, and most likely important deleted data will be lost. System commands are
the cheapest method, but imply some risks of data loss. Every command usage with options and
output must be documented.
AT commands
AT commands are old modem commands, e.g., Hayes command set and Motorola phone AT
commands, and can therefore only be used on a device that has modem support. Using these
commands one can only obtain information through the operating system, such that no deleted
data can be extracted.
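For example, on a handset that exposes a modem interface over a serial port, basic identifying
information can be queried with standard AT commands. A sketch using the third-party pyserial
package (the port name is an assumption, and which commands a given phone answers varies by
manufacturer):

import serial  # third-party pyserial package

# Open the phone's serial/modem interface (placeholder port name).
with serial.Serial("/dev/ttyUSB0", 115200, timeout=2) as port:
    # AT = probe, AT+CGSN = IMEI, AT+CGMM = model (standard 3GPP commands).
    for cmd in (b"AT\r", b"AT+CGSN\r", b"AT+CGMM\r"):
        port.write(cmd)
        print(cmd, b"".join(port.readlines()))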
For external memory and the USB flash drive, appropriate software, e.g., the Unix command dd,
is needed to make the bit-level copy. Furthermore, USB flash drives with memory protection do
not need special hardware and can be connected to any computer. Many USB drives and memory
cards have a write-lock switch that can be used to prevent data changes, while making a copy.
If the USB drive has no protection switch, a blocker can be used to mount the drive in a
read-only mode or, in an exceptional case, the memory chip can be desoldered. The SIM and
memory cards need a card reader to make the copy. The SIM card can be soundly analyzed, such
that it is possible to recover (deleted) data like contacts or text messages.[11]
The Android operating system includes the dd command. In a blog post on Android forensic
techniques, a method to live image an Android device using the dd command is demonstrated.
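In the same spirit as dd, a bit-level copy can be expressed in a few lines of Python, hashing
the data on the fly so the image can later be verified. The device and output paths are
placeholders, reading a block device requires appropriate privileges, and in a real examination a
hardware write blocker should still sit between the evidence and the workstation:

import hashlib

sha = hashlib.sha256()
# Source block device and destination image file (placeholder paths).
with open("/dev/sdb", "rb") as src, open("usb_image.dd", "wb") as dst:
    while True:
        block = src.read(1024 * 1024)   # copy in 1 MiB chunks
        if not block:
            break                       # end of device reached
        dst.write(block)
        sha.update(block)               # hash while copying
print("SHA-256 of image:", sha.hexdigest())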
Non-forensic commercial tools
Flasher tools
A flasher tool is programming hardware and/or software that can be used to program (flash) the
device memory, e.g., EEPROM or flash memory. These tools mainly originate from the
manufacturer or service centers for debugging, repair, or upgrade services. They can overwrite
the non-volatile memory and some, depending on the manufacturer or device, can also read the
memory to make a copy, originally intended as a backup. The memory can be protected from
reading, e.g., by software command or destruction of fuses in the read circuit. Note that this would
not prevent the CPU from writing to or using the memory internally. The flasher tools are easy to
connect and use, but some can change the data and have other dangerous options or do not make
a complete copy.
Controversies
In general, there exists no standard for what constitutes a supported device in a specific product.
This has led to the situation where different vendors define a supported device differently. A
situation such as this makes it much harder to compare products based on vendor provided lists
of supported devices. For instance, a device where logical extraction using one product only
produces a list of calls made by the device may be listed as supported by that vendor while
another vendor can produce much more information.
Furthermore, different products extract different amounts of information from different devices.
This leads to a very complex landscape when trying to get an overview of the products. In general, this
leads to a situation where testing a product extensively before purchase is strongly
recommended. It is quite common to use at least two products which complement each other.
Mobile phone technology is evolving at a rapid pace. Digital forensics relating to mobile devices
seems to be at a standstill or evolving slowly. For mobile phone forensics to catch up with
release cycles of mobile phones, a more comprehensive and in-depth framework for evaluating
mobile forensic toolkits should be developed, and data on appropriate tools and techniques for
each type of phone should be made available in a timely manner. Anti-forensics is more difficult
because of the small size of the devices and the user's restricted data accessibility. Nevertheless,
there are developments to secure the memory in hardware with security circuits in the CPU and
memory chip, such that the memory chip cannot be read even after desoldering.
6. Router Forensics
Reconnaissance is considered the first pre-attack phase. The hacker seeks to find out as much
information as possible about the victim. The second pre-attack phase is scanning and
enumeration. At this step in the methodology, the hacker is moving from passive information
gathering to active information gathering. Access can be gained in many different ways. A
hacker may exploit a router vulnerability or may social-engineer the help desk into giving him a
phone number for a modem. Access could be gained by finding a vulnerability in the web
server’s software. Just having the access of an average user account probably won’t give the
attacker very much control or access to the network. Therefore, the attacker will attempt to
escalate himself to administrator or root privilege. Once escalation of privilege is complete the
attacker will work on ways to maintain access to the systems he or she has attacked and
compromised. Hackers are much like other criminals in that they want to make sure to remove all
evidence of their activities, which might include using root kits to cover their tracks.
This is the moment at which most forensic activities begin.
Searching for Evidence
You must be knowledgeable of each of the steps of the hacking process and understand the
activities and motives of the hacker. Many times you will be tasked with using only pieces of
information and playing the role of a detective, trying to reassemble the pieces of the puzzle.
Information stored within a computer can exist only in one or more predefined areas.
Information can be stored as a normal file, deleted file, hidden file, or in the slack or free space.
Understanding these areas, how they work, and how they can be manipulated will increase the
probability that you will find or discover hidden data. Not all suspects you encounter will be
super cyber criminals. Many individuals will not hide files at all; others will attempt simple file
hiding techniques. You may discover cases where suspects were overcome with regret, fear, or
remorse, and attempted to delete or erase incriminating evidence after the incident. Most average
computer users don’t understand that dropping an item in the recycle bin doesn’t mean that it is
permanently destroyed. One common hiding technique is to place the information in an obscure
location such as C:\winnt\system32\os2\dll. Again, this will usually block the average user from
finding the file. The technique is simply that of placing the information in an area of the drive
where you would not commonly look. A system search will quickly defeat this futile attempt at
data hiding. Just search for specific types of files such as bmp, tif, doc, and xls. Using the search
function built into Windows will help quickly find this type of information. If you are examining
a Linux computer, use the grep command to search the drive. Another technique is using file
attributes to hide the files or folders. On a Macintosh computer, you can hide a file with the
ResEdit utility. In the wonderful world of Windows, file attributes can be configured to hide files
at the command line with the attrib command. This command is built into the Windows OS. It
allows a user to change the properties of a file. Someone could hide a file by issuing attrib +h
secret.txt. This command would render the file invisible in the command line environment. This
can also be accomplished through the GUI by right-clicking on a file and choosing the hidden
type. Would the file then be invisible in the GUI? Well, that depends on the view settings that
have been configured. Open a browse window and choose tools/folder options/view/show hidden
files; then, make sure Show Hidden Files is selected. This will display all files and folders, even
those with the +h attribute set. Another way to get a complete listing of all hidden files is to issue
the command attrib /s > attributes.txt from the root directory. The attrib command lists file
attributes, the /s function list all files in all the subdirectories, and > redirects the output to a text
file. This text file can then be parsed and placed in a spreadsheet for further analysis. Crude
attempts such as these can be quickly surmounted.
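On Windows, the same sweep for hidden files can be scripted. Python exposes the hidden
attribute through os.stat(); the drive letter below is an assumption, and the st_file_attributes
field exists only on Windows:

import os
import stat

# Walk the drive and report anything carrying the hidden (+h) attribute.
for root, dirs, files in os.walk("C:\\"):
    for name in files:
        path = os.path.join(root, name)
        try:
            attrs = os.stat(path).st_file_attributes  # Windows-only field
        except OSError:
            continue                                  # unreadable file; skip
        if attrs & stat.FILE_ATTRIBUTE_HIDDEN:
            print(path)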
An Overview of Routers
Routers are a key piece of networking gear. Let’s review the role and function of a router.
What Is a Router?
Routers can be hardware or software devices that route data from a local area network to a
different network. Routers are responsible for making decisions about which of several paths
network (or Internet) traffic will follow. If more than one path is available to transmit data, the
router is responsible for determining which path is the best path to route the information.
The Function of a Router
Routers also act as protocol translators and bind dissimilar networks. Routers limit physical
broadcast traffic as they operate at layer 3 of the OSI model. Routers typically use either link
state or hop count based routing protocols to determine the best path.
The Role of a Router
Routers are found at layer three of the OSI model. This is known as the networking layer. The
network layer provides routing between networks and defines logical addressing, error handling,
congestion control, and packet sequencing. This layer is concerned primarily with how to get
packets from network A to network B. This is where IP addresses are defined. These addresses
give each device on the network a unique (logical) address. Routers organize these addresses into
classes, which are used to determine how to move packets from one network to another. All
types of protocols rely on routing to move information from one point to another. This includes
IP, Novell’s IPX, and Apple’s DDP. Routing on the Internet typically is performed dynamically;
however, setting up static routes is a form of basic routing. Dynamic routing protocols constantly
look for the best route to move information from the source to target network.
Routing Tables
Routers are one of the basic building blocks of networks, as they connect networks together.
Routers reside at layer 3 of the OSI model. Each router has two or more interfaces. These
interfaces join separate networks together. When a router receives a packet, it examines the IP
address and determines to which interface the packet should be forwarded. On a small or
uncomplicated network, an administrator may have defined a fixed route that all traffic will
follow. More complicated networks typically route packets by observing some form of metric.
Routing tables include the following types of information:
■ Bandwidth This is a common metric based on the capacity of a link. If all
other metrics were equal, the router would choose the path with the highest
bandwidth.
■ Cost The organization may have a dedicated T1 and an ISDN line. If the
ISDN line has a higher cost, traffic will be routed through the T1.
■ Delay This is another common metric, as it can build on many factors
including router queues, bandwidth, and congestion.
■ Distance This metric is calculated in hops; that is, how many routers away
is the destination.
■ Load This metric is a measurement of the load that is being placed on a
particular router. It can be calculated by examining the processing time or
CPU utilization.
■ Reliability This metric examines arbitrary reliability ratings. Network
administrators can assign these numeric values to various links.
By applying these metrics and consulting the routing table, the routing protocol can make a best
path determination. At this point, the packet is forwarded to the next hop as it continues its
journey toward the destination.
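As a toy illustration of such a metric-based decision (the candidate paths and their figures are
invented), a router choosing among several links could be modeled like this:

# Candidate paths with invented metrics: (name, bandwidth in Mbps, hop count).
paths = [("T1", 1.544, 3), ("ISDN", 0.128, 2), ("DSL", 6.0, 5)]

# A deliberately simple policy: prefer the highest bandwidth, break ties
# on the lower hop count. Real protocols combine metrics in richer ways.
best = max(paths, key=lambda p: (p[1], -p[2]))
print("forward via", best[0])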
Router Architecture
Router architecture is designed so that routers are equipped to perform two main functions:
process routable protocols and use routing protocols to determine best path. Let’s start by
reviewing routable protocols. The best example of a routed protocol is IP. A very basic definition
of IP is that it acts as the postman of the Internet—its job is to organize data into a packet, which
is then addressed for delivery. IP must place a target and source address on the packet. This is
similar to addressing a package before delivering it to the post office. In the world of IP, the
postage is a TTL (Time-to-Live), which keeps packets from traversing the network forever. If the
recipient cannot be found, the packet can eventually be discarded. All the computers on the
Internet have an IP address. If we revert to our analogy of the postal system, an IP address can be
thought of as the combination of a zip code and street address. The first half of the IP address is
used to identify the proper network; the second portion of the IP address identifies the host.
Combined, this allows us to communicate with any network and any host in the world that is
connected to the Internet. Now let us turn our attention to routing protocols.
Routing Protocols
Routing protocols fall into two basic categories, static and dynamic. Static, or fixed, routing is
simply a table that has been developed by a network administrator mapping one network to
another. Static routing works best when a network is small and the traffic is predictable. The big
problem with static routing is that it cannot react to network changes. As the network grows,
management of these tables can become difficult. Although this makes static routing unsuitable
for use on the Internet or large networks, it can be used in special circumstances where normal
routing protocols do not function well.
Dynamic routing uses metrics to determine what path a router should use to send a packet toward
its destination. Dynamic routing protocols include Routing Information Protocol (RIP), Border
Gateway Protocol (BGP), Interior Gateway Routing Protocol (IGRP), and Open Shortest Path
First (OSPF). Dynamic routing can be divided into two broad categories: link-state or distance
vector dynamic routing protocols, which are discussed in greater detail later in the chapter.
RIP
RIP is the most common routing protocol that uses a hop count as its primary routing metric. RIP
is considered a distance vector protocol. The basic methodology of a distance vector protocol is
to make a decision on what is the best route by determining the shortest path. The shortest path is
commonly calculated by hops. Distance vector routing is also called routing by rumor.
Head of the Class…
What Is a Hop Count?
A hop count is the number of routers that a packet must pass through to reach its destination. Each
time a packet passes through a router, the cost is one hop. So, if the target network you are trying
to reach is two routers away, it is also two hops away. The major shortcoming of distance vector
protocols is that the path with the lowest number of hops may not be the optimum route. The
lower hop count path may have considerably less bandwidth than the higher hop count route.
OSPF
OSPF is the most common link state routing protocol and many times, it is used as a replacement
for RIP. Link state protocols are properly called Dijkstra algorithms, as this is the computational
basis of their design. Link state protocols use the Dijkstra algorithm to calculate the best path to a
target network. The best path can be determined by one or more metrics such as hops, delay, or
bandwidth. Once this path has been determined, the router will inform other routers as to its
findings. This is how reliable routing tables are developed and routing tables reach convergence.
Link state routing is considered more robust than distance vector routing protocols. One reason
is because link state protocols have the ability to perform faster routing table updates.
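The computation behind a link state protocol can be sketched directly. The following minimal
Dijkstra implementation treats link cost as the only metric and runs on an invented four-router
topology; production OSPF implementations are, of course, far more involved:

import heapq

def dijkstra(graph, source):
    # graph maps router -> list of (neighbor, link cost)
    dist = {source: 0}
    queue = [(0, source)]
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue                      # stale queue entry; skip
        for neighbor, cost in graph[node]:
            new_d = d + cost
            if new_d < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_d
                heapq.heappush(queue, (new_d, neighbor))
    return dist  # lowest total cost from source to every reachable router

graph = {
    "A": [("B", 1), ("C", 4)],
    "B": [("A", 1), ("C", 2), ("D", 7)],
    "C": [("A", 4), ("B", 2), ("D", 3)],
    "D": [("B", 7), ("C", 3)],
}
print(dijkstra(graph, "A"))   # {'A': 0, 'B': 1, 'C': 3, 'D': 6}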
NOTE
Convergence is the point at which routing tables have become synchronized. Each time a
network is added or dropped, the routing tables must again resynchronize. Routing algorithms
differ in the speed at which they can reach convergence.
Hacking Routers
Full control of a router can often lead to full control of the network. This is why many attackers
will target routers and launch attacks against them. These attacks may focus on configuration
errors, known vulnerabilities, or even weak passwords.
Router Attacks
Routers can be attacked either by gaining access to the router and changing the configuration
file, by launching DoS attacks, by flooding the bandwidth, or by poisoning the routing table.
These attacks
can be either hit-and-run or persistent. Denial of Service attacks are targeted at routers. If an
attacker can force a router to stop forwarding packets, then all hosts behind the router are
effectively disabled.
Router Attack Topology
The router attack topology is the same as all attack topologies. The steps include:
1. Reconnaissance
2. Scanning and enumeration
3. Gaining access
4. Escalation of privilege
5. Maintaining access
6. Covering tracks and placing backdoors
Hardening Routers
The Router Audit Tool (RAT) can be used to harden routers. Once downloaded, RAT checks router
configurations against the settings defined in its benchmark. Each configuration is examined and given a
rated score that provides a raw overall score, a weighted overall score (1-10), and a list of IOS
commands that will correct any identified problems.
Denial-of-Service Attacks
Denial-of-service (DoS) attacks fall into three categories:
■ Destruction. Attacks that destroy the ability of the router to function.
■ Resource consumption. Flooding the router with many open connections
simultaneously.
■ Bandwidth consumption. Attacks that attempt to consume the bandwidth
capacity of the router’s network.
DoS attacks may target a user or an entire organization and can affect the availability
of target systems or the entire network. The impact of DoS is the disruption of normal operations
and the disruption of normal communications. It’s much easier for an attacker to accomplish this
than it is to gain access to the network in most instances. Smurf is an example of a common DoS
attack. Smurf exploits the Internet Control Message Protocol (ICMP) by sending a
spoofed ping packet addressed to the network broadcast address, with the source address forged to be the
victim's. On a multiaccess network, many systems may reply, and the attack results in the
victim being flooded with ping responses. Another example of a DoS attack is a SYN flood. A
SYN flood disrupts Transmission Control Protocol (TCP) by sending a large number of fake
packets with the SYN flag set. This large number of half-open TCP connections fills the buffer
on the victim's system and prevents it from accepting legitimate connections. Systems connected to
the Internet that provide services such as HTTP or SMTP are particularly vulnerable. DDoS
attacks are the second type of DoS attack and are considered multiprotocol attacks. DDoS attacks
use ICMP, UDP, and TCP packets. One of the distinct differences between DoS and DDoS is
that a DDoS attack consists of two distinct phases. First, during the pre-attack, the hacker must
compromise computers scattered across the Internet and load software on these clients to aid in
the attack. Targets for such an attack include broadband users, home users, poorly configured
networks, colleges and universities. Script kiddies from around the world can spend countless
hours scanning for the poorly protected systems. Once this step is completed the second step can
commence. The second step is the actual attack. At this point the attacker instructs the masters to
communicate to the zombies to launch the attack. ICMP and UDP packets can easily be blocked
at the router, but TCP packets are difficult to mitigate. TCP-based DoS attacks come in two
forms:
■ Connection-oriented. These attacks complete the three-way handshake to
establish a connection, so the source IP address can be determined.
■ Connectionless. These SYN packets are difficult to trace because the source
address can be spoofed and no connection is ever completed.
An example of a DDoS tool is Tribal Flood Network (TFN). TFN was the first publicly
available UNIX-based DDoS tool. TFN can launch ICMP, Smurf, UDP, and SYN flood
attacks. The master uses UDP port 31335 and TCP port 27665. TFN was followed by more
advanced DDoS attacks such as Trinoo. Closely related to TFN, this DDoS tool allows a user to
launch a coordinated UDP flood against the victim's computer, which becomes overloaded with traffic. A
typical Trinoo attack team includes just a few servers and a large number of client computers on
which the Trinoo daemon is running. Trinoo is easy for an attacker to use and is very powerful in
that one computer can instruct many Trinoo servers to launch a DoS attack against a particular
computer.
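As a sketch of the defensive side, the following Cisco IOS commands are often cited to blunt Smurf and SYN flood traffic; the interface name, access-list number, and protected subnet are illustrative assumptions, not part of the discussion above:
Router(config)# interface FastEthernet0/0
Router(config-if)# no ip directed-broadcast
Router(config-if)# exit
Router(config)# access-list 101 permit tcp any 192.168.1.0 0.0.0.255
Router(config)# ip tcp intercept list 101
The no ip directed-broadcast command prevents the router from forwarding the broadcast-addressed pings that a Smurf attack depends on, while TCP intercept validates incoming connection requests before they reach the protected hosts.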
Routing Table Poisoning
Routers running RIPv1 are particularly vulnerable to routing table poisoning attacks. This type of
attack sends fake routing updates, or modifies genuine route update packets sent to other nodes, by
which the attacker attempts to cause a denial of service. Routing table poisoning may cause a
complete denial of service or result in suboptimal routing, or congestion in portions of the
network.
Hit-and-Run Attacks and Persistent Attacks
Attackers can launch one of two types of attacks: either hit-and-run or persistent. A hit-and-run
attack is hard to detect and isolate as the attacker injects only one or a few malformed packets.
With this approach, the attacker must craft the attacks so that the results have some lasting
damaging effect. A persistent attack increases the possibility for identification of the attacker as
there is an ongoing stream of packets to analyze. However, this attack lowers the level of
complexity needed by the attacker as they can use much less sophisticated attacks. Link state
routing protocols such as OSPF are more resilient to routing attacks than RIP.
Forensic Analysis of Routing Attacks
During a forensic investigation the analyst should examine log files for evidence such as IP
address and the protocol. It is a good idea to redirect logs to the syslog server. This can be
accomplished as follows:
Router# configure terminal
Router(config)# logging 192.168.1.1
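A slightly fuller sketch, assuming the syslog host at 192.168.1.1 is reachable (the severity level and facility shown are illustrative choices, not requirements):
Router(config)# logging on
Router(config)# logging trap informational
Router(config)# logging facility local6
Router(config)# end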
Investigating Routers
When investigating routers there are a series of built-in commands that can be used for analysis.
It is inadvisable to reset the router, as this may destroy evidence created by the attacker.
The following show commands can be used to gather basic information and record hacker
activity:
■ show access-lists
■ show clock
■ show ip route
■ show startup-config
■ show users
■ show version
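A minimal sketch of such a session follows; the prompts are illustrative, terminal length 0 disables paging so the terminal emulator's capture log records complete output, and show running-config is included because, as noted later in this section, both the volatile and nonvolatile configurations should be captured:
Router> enable
Router# terminal length 0
Router# show clock
Router# show version
Router# show users
Router# show ip route
Router# show access-lists
Router# show running-config
Router# show startup-config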
Chain of Custody
The chain of custody is used to prove the integrity of evidence. The chain of custody
should be able to answer the following questions:
■ Who collected the evidence?
■ How and where is the evidence stored?
■ Who took possession of the evidence?
■ How was the evidence stored and how was it protected during storage?
■ Who took the evidence out of storage and why?
There is no such thing as too much documentation. One good approach is to have two people
work on a case. While one person performs the computer analysis, the other documents these
actions. At the beginning of an investigation, a forensic analyst should prepare a log to document
the systematic process of the investigation. This is required to establish the chain of custody.
This chain of custody will document how the evidence is handled, how it is protected, what
process is used to verify it remains unchanged, and how it is duplicated. Next, the log must
address how the
media is examined, what actions are taken, and what tools are used. Automated tools such as
EnCase and The Forensic Toolkit compile much of this information for the investigator.
Volatility of Evidence
When responding to a network attack, volatile data should be collected as soon as
possible. Although all routers are different, you will most likely be working with Cisco products,
as Cisco has the majority of the market share. Cisco routers store the startup configuration in
nonvolatile RAM (NVRAM). The current (running) configuration is considered volatile data and is
kept in Random Access Memory (RAM). If the configuration is erased or the router powered
down all information is lost. Routers typically are used as a beachhead for an attack. This means
the router may play an active part in the intrusion. The attacker uses the router as a jumping off
point to other network equipment. When starting an investigation, you should always move from
most volatile to least volatile. The first step is to retrieve RAM and NVRAM. To accomplish
this, you may use a direct connection to the console port using RJ-45-RJ-45 rolled cable and an
RJ-45-to-DB-9 female DTE adapter. In instances when a direct connection is not available, a
remote session is the next preferred method. Insecure protocols such as FTP should not be used;
an encrypted protocol such as Secure Shell (SSH) is preferred. You should make sure to capture both
volatile and nonvolatile configurations for comparison of changes and for documentation purposes.
Cisco routers have multiple modes, so to gain privileged mode the analyst must know the enable
password.
Case Reports
Case reporting is one of the most important aspects of computer forensics. Just as with traditional
forensics everything should be documented. Reporting should begin the minute you are assigned
to a case. Although it may sometimes seem easier to blindly push forward, the failure to
document can result in poorly written reports that will not withstand legal scrutiny. Let’s face it,
not all aspects of computer forensics are exciting and fun. Most of us view paperwork as
drudgery: a somewhat tedious process that requires an eye for detail. Do not let that attitude
tempt you into neglecting it. In the end, the documentation you keep and the process you follow will either
validate or negate the evidence. The report is key in bringing together the three primary pieces of
forensics: acquisition, authentication,
and analysis.
The case report will be the key to determining one of the following actions:
■ Employee remediation
■ Employee termination
■ Civil proceedings
■ Criminal prosecution
When the investigation is complete a final written report is prepared. Some of the items found in
this report will include:
■ Case Summary
■ Case Audit Files
■ Bookmarks
■ Selected Graphics
■ File Location Path
■ File Location Properties
Although this is not an all-inclusive list it should give you some indication of what should be
included. Depending on the agency or corporation, the contents of the report will vary. What is
consistent is that anyone should be able to use the logs and the report to recreate the steps
performed throughout the investigation. This process of duplication should lead to identical
results.
Incident Response
Incident response is the effort of an organization to define and document the nature and scope of
a computer security incident. Incident response can be broken into three broad categories that
include:
■ Triage. Notification and identification
■ Action/Reaction. Containment, analysis, tracking
■ Follow up. Repair and recovery, prevention
Compromises
Before a compromise can be determined, investigators must be alerted that something
has happened. It is best if the alert function is automated as much as possible.
Otherwise, the sheer volume of log information would be overwhelming for an
employee. Even with a high level of automation someone must still make a judgment
regarding the validity of the alert. Once an attack has been validated it is important
to reduce the damage of the attack as quickly as possible and work to restore normal
business functions.
Summary
In this chapter, we reviewed how routers can play an important part in forensics. Readers were
introduced to routed protocols such as IP and we discussed how routed protocols work. In many
ways, IP acts as a “postman” since its job is to make the best effort at delivery. In a small
network or those that seldom change, the route that the IP datagrams take through the network
may remain static or unchanged. Larger networks use dynamic routing. Administrators use
routing protocols such as RIP for dynamic routing. We also looked at how attackers attack
routers and how incident response relates to routers and router compromises.
Overview of Routers
_ Routers are designed to connect dissimilar networks.
_ Routers deal with routing protocols.
_ Common routing protocols include RIP and OSPF.
Hacking Routers
_ Routers can be attacked by exploiting misconfigurations or vulnerabilities.
_ Routers need to have logging enabled so sufficient traffic is captured to aid
in forensic investigations.
Incident Response
_ Monitoring for incidents requires both passive and active tasks.
_ Incident response requires development of a policy to determine the proper
response.
7. Introduction to Network Forensics and Investigating Logs
This chapter focuses on network forensics and investigating logs. It starts by defining network
forensics and describing the tasks associated with a forensic investigation. The chapter then covers
log files and their use as evidence. The chapter concludes with a discussion about time
synchronization.
Network Forensics
Network forensics is the capturing, recording, and analysis of network events in order to discover
the source of security attacks. Capturing network traffic over a network is simple in theory, but
relatively complex in practice.
This is because of the large amount of data that flows through a network and the complex nature
of Internet protocols. Because recording network traffic involves a lot of resources, it is often not
possible to record all of the data flowing through the network. An investigator needs to back up
these recorded data to free up recording media and to preserve the data for future analysis.
Analyzing Network Data
The analysis of recorded data is the most critical and most time-consuming task. Although there
are many automated analysis tools that an investigator can use for forensic purposes, they are not
sufficient, as there is no
foolproof method for discriminating bogus traffic generated by an attacker from genuine traffic.
Human judgment is also critical because with automated traffic analysis tools, there is always a
chance of a false positive.
An investigator needs to perform network forensics to determine the type of an attack over a
network and to trace out the culprit. The investigator needs to follow proper investigative
procedures so that the evidence recovered during the investigation can be produced in a court of law.
Network forensics can reveal the following information:
• How an intruder entered the network
• The path of intrusion
• The intrusion techniques an attacker used
• Traces and evidence
Network forensics investigators cannot do the following:
• Solve the case alone
• Link a suspect to an attack
The Intrusion Process
Network intruders can enter a system using the following methods:
• Enumeration: Enumeration is the process of gathering information about a network that may help
an intruder attack the network. Enumeration is generally carried out over the Internet. The
following information is collected during enumeration:
• Topology of the network
• List of live hosts
• Network architecture and types of traffic (for example, TCP, UDP, and IPX)
• Potential vulnerabilities in host systems
• Vulnerabilities: An attacker identifies potential weaknesses in a system, network, and elements
of the network and then tries to take advantage of those vulnerabilities. The intruder can find
known vulnerabilities using various scanners.
• Viruses: Viruses are a major cause of shutdown of network components. A virus is a software
program written to change the behavior of a computer or other device on a network, without the
permission or knowledge of the user.
• Trojans: Trojan horses are programs that contain or install malicious programs on targeted
systems.
These programs serve as back doors and are often used to steal information from systems.
• E-mail infection: The use of e-mail to attack a network is increasing. An attacker can use e-mail
spamming and other means to flood a network and cause a denial-of-service attack.
• Router attacks: Routers are the main gateways into a network, through which all traffic passes.
A router attack can bring down a whole network.
• Password cracking: Password cracking is a last resort for any kind of attack.
Looking for Evidence
An investigator can find evidence from the following:
• From the attack computer and intermediate computers: This evidence is in the form of logs, files,
ambient data, and tools.
• From firewalls: An investigator can look at a firewall’s logs. If the firewall itself was the victim,
the investigator treats the firewall like any other device when obtaining evidence.
• From internetworking devices: Evidence exists in logs and buffers as available.
• From the victim computer: An investigator can find evidence in logs, files, ambient data, altered
configuration files, remnants of Trojaned files, files that do not match hash sets, tools, Trojans
and viruses, stored stolen files, Web defacement remnants, and unknown file extensions.
End-To-End Forensic Investigation
An end-to-end forensic investigation involves following basic procedures from beginning to end.
The following are some of the elements of an end-to-end forensic track:
• The end-to-end concept: An end-to-end investigation tracks all elements of an attack, including
how the attack began, what intermediate devices were used during the attack, and who was
attacked.
• Locating evidence: Once an investigator knows what devices were used during the attack, he or
she can search for evidence on those devices. The investigator can then analyze that evidence to
learn more about the attack and the attacker.
• Pitfalls of network evidence collection: Evidence can be lost in a few seconds during log analysis
because logs change rapidly. Sometimes, permission is required to obtain evidence from certain
sources, such as ISPs. This process can take time, which increases the chances of evidence loss.
Other pitfalls include the following:
• An investigator or network administrator may mistake normal computer or network activity for
attack activity.
• There may be gaps in the chain of evidence.
• Logs may be ambiguous, incomplete, or missing.
• Since the Internet spans the globe, other nations may be involved in the investigation. This can
create legal and political issues for the investigation.
• Event analysis: After an investigator examines all of the information, he or she correlates all of
the events and all of the data from the various sources to get the whole picture.
Log Files as Evidence
Log files are the primary recorders of a user’s activity on a system and of network activities. An
investigator can both recover any services altered and discover the source of illicit activities using
logs. Logs provide clues to investigate. The basic problem with logs is that they can be altered
easily. An attacker can easily insert false entries into log files.
An investigator must be able to prove in court that logging software is correct. Computer records
are not normally admissible as evidence; they must meet certain criteria to be admitted at all. The
prosecution must present appropriate testimony to show that logs are accurate, reliable, and fully
intact. A witness must authenticate computer records presented as evidence.
Legality of Using Logs
The following are some of the legal issues involved with creating and using logs that organizations
and investigators must keep in mind:
• Logs must be created reasonably contemporaneously with the event under investigation.
• Log files cannot be tampered with.
• Someone with knowledge of the event must record the information. In this case, a program is
doing the recording; the record therefore reflects the a priori knowledge of the programmer and
system administrator.
• Logs must be kept as a regular business practice.
• Random compilations of data are not admissible.
• Logs instituted after an incident has commenced do not qualify under the business records
exception; they do not reflect the customary practice of an organization.
• If an organization starts keeping regular logs now, it will be able to use the logs as evidence later.
• A custodian or other qualified witness must testify to the accuracy and integrity of the logs. This
process is known as authentication. The custodian need not be the programmer who wrote the
logging software; however, he or she must be able to offer testimony on what sort of system is
used, where the relevant software came from, and how and when the records are produced.
• A custodian or other qualified witness must also offer testimony as to the reliability and integrity
of the hardware and software platform used, including the logging software.
• A record of failures or of security breaches on the machine creating the logs will tend to impeach
the evidence.
• If an investigator claims that a machine has been penetrated, log entries from after that point are
inherently suspect.
• In a civil lawsuit against alleged hackers, anything in an organization’s own records that would
tend to exculpate the defendants can be used against the organization.
• An organization’s own logging and monitoring software must be made available to the court so
that the defense has an opportunity to examine the credibility of the records. If an organization can
show that the relevant programs are trade secrets, the organization may be allowed to keep them
secret or to disclose them to the defense only under a confidentiality order.
• The original copies of any log files are preferred.
• A printout of a disk or tape record is considered to be an original copy, unless and until judges
and jurors are equipped with computers that have USB or SCSI interfaces.
Examining Intrusion and Security Events
As discussed earlier, the inspection of log files can reveal an intrusion or attack on a system.
Therefore, monitoring for intrusion and security breach events is necessary to track down attackers.
Examining intrusion and security events includes both passive and active tasks. A detection of an
intrusion that occurs after an attack has taken place is called a post-attack detection or passive
intrusion detection. In these cases, the inspection of log files is the only means that can be used
to evaluate and reconstruct the attack techniques. Passive intrusion detection techniques usually
involve a manual review of event logs and application logs. An investigator can inspect and
analyze event log data to detect attack patterns.
On the other hand, there are many attack attempts that can be detected as soon as the attack takes
place.
This type of detection is known as active intrusion detection. Using this method, an administrator
or investigator follows the footsteps of the attacker and looks for known attack patterns or
commands, and blocks the execution of those commands.
Intrusion detection is the process of tracking unauthorized activity using techniques such as
inspecting user actions, security logs, or audit data. There are various types of intrusions, including
unauthorized access to files and systems, worms, Trojans, computer viruses, buffer overflow
attacks, application redirection, and identity and data spoofing. Intrusion attacks can also appear
in the form of denial of service, and DNS, e-mail, content, or data corruption. Intrusions can result
in a change of user and file security rights, installation of Trojan files, and improper data access.
Administrators use many different intrusion detection techniques, including evaluation of system
logs and settings, and deploying firewalls, antivirus software, and specialized intrusion detection
systems. Administrators should investigate any unauthorized or malicious entry into a network or
host.
Using Multiple Logs as Evidence
Recording the same information in two different devices makes the evidence stronger. Logs from
several devices collectively support each other. Firewall logs, IDS logs, and TCPDump output can
contain evidence of an Internet user connecting to a specific server at a given time.
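For example, traffic involving a suspect host can be captured with tcpdump and stored for later correlation with firewall and IDS logs; the address and file name here are hypothetical:
tcpdump -n -w evidence.pcap host 203.0.113.25
tcpdump -n -tttt -r evidence.pcap
The first command writes raw packets to a capture file, and the second replays them with full timestamps for comparison against the other log sources.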
Maintaining Credible IIS Log Files
Many network administrators have faced serious Web server attacks that have become legal issues.
Web attacks are generally traced using IIS logs. Investigators must ask themselves certain
questions before presenting IIS logs in court, including:
• What would happen if the credibility of the IIS logs was challenged in court?
• What if the defense claims the logs are not reliable enough to be admissible as evidence?
An investigator must secure the evidence and ensure that it is accurate, authentic, and accessible.
In order to prove that the log files are valid, the investigator needs to present them as acceptable
and dependable by providing convincing arguments, which makes them valid evidence.
Log File Accuracy
The accuracy of IIS log files determines their credibility. Accuracy here means that the log files
presented before the court of law represent the actual outcome of the activities related to the IIS
server being investigated. Any modification to the logs causes the validity of the entire log file
being presented to be suspect.
Logging Everything
In order to ensure that a log file is accurate, a network administrator must log everything. Certain
fields in IIS log files might seem to be less significant, but every field can make a major
contribution as evidence. Therefore, network administrators should configure their IIS server logs
to record every field available.
IIS logs must record information about Web users so that the logs provide clues about whether an
attack came from a logged-in user or from another system.
Consider a defendant who claims a hacker had attacked his system and installed a back-door proxy
server on his computer. The attacker then used the back-door proxy to attack other systems. In
such a case, how does an investigator prove that the traffic came from a specific user’s Web
browser or that it was a proxied attack from someone else?
Extended Logging in IIS Server
Limited logging is set globally by default, so any new Web sites created have the same limited
logging. An administrator can change the configuration of an IIS server to use extended logging.
The following steps explain how to enable extended logging for an IIS Web/FTP server and change
the location of log files:
1. Run the Internet Services Manager.
2. Select the properties on the Web/FTP server.
3. Select the Web site or FTP site tab.
4. Check the Enable Logging check box.
5. Select W3C Extended Log File Format from the drop-down list.
6. Go to Properties.
7. Click the Extended Properties tab, and set the following properties accordingly:
• Client IP address
• User name
• Method
• URI stem
• HTTP status
• Win32 status
• User agent
• Server IP address
• Server port
8. Select Daily for New Log Time Period under the General Properties tab.
9. Select Use local time for file naming and rollover.
10. Change the log file directory to the location of logs.
11. Ensure that the NTFS security settings have the following settings:
• Administrators - Full Control
• System - Full Control
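On older Windows systems, one way to apply such permissions from the command line is with cacls; the log directory path is hypothetical, and the command replaces any existing permissions after prompting for confirmation:
cacls D:\LogFiles /T /G Administrators:F SYSTEM:F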
Keeping Time
With the Windows time service, a network administrator can synchronize IIS servers by
connecting them to an external time source.
When the server is part of a domain, the time service synchronizes with the domain controller. A network
administrator can synchronize a standalone server to an external time source by setting certain
registry entries:
Key: HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Parameters\
Setting: Type
Type: REG_SZ
Value: NTP
Key: HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Parameters\
Setting: NtpServer
Type: REG_SZ
Value: ntp.xsecurity.com
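The same values can be set from a command prompt with the reg utility, followed by a restart of the time service; this sketch reuses the example server name above:
reg add HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Parameters /v Type /t REG_SZ /d NTP /f
reg add HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Parameters /v NtpServer /t REG_SZ /d ntp.xsecurity.com /f
net stop w32time
net start w32time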
Avoiding Missing Logs
When an IIS server is offline or powered off, log files are not created. When a log file is missing,
it is difficult to know if the server was actually offline or powered off, or if the log file was deleted.
To combat this problem, an administrator can schedule a few hits to the server using a scheduling
tool. The administrator can keep a log of the outcomes of these hits to determine when the server
was active. If the record of hits shows that the server was online and active at the time that log file
data is missing, the administrator knows that the missing log file might have been deleted.
Log File Authenticity
An investigator can prove that log files are authentic if he or she can prove that the files have not
been altered since they were originally recorded.
IIS log files are simple text files that are easy to alter. The date and time stamps on these files are
also easy to modify. Hence, they cannot be considered authentic in their default state. If a server
has been compromised, the investigator should move the logs off the server. The logs should be
moved to a master server and then moved offline to secondary storage media such as a tape or CD-ROM.
Working with Copies
As with all forensic investigation, an investigator should never work with the original files when
analysing log files. The investigator should create copies before performing any postprocessing or
log file analysis.
If the original files are not altered, the investigator can more easily prove that they are authentic
and are in their original form. When using log files as evidence in court, an investigator is required
to present the original files in their original form.
Access Control
In order to prove the credibility of logs, an investigator or network administrator needs to ensure
that any access to those files is audited. The investigator or administrator can use NTFS
permissions to secure and audit the log files. IIS needs to be able to write to log files when the logs
are open, but no one else should have access to write to these files. Once a log file is closed, no
one should have access to modify the contents of the file.
Chain of Custody
As with all forensic evidence, the chain of custody must be maintained for log files. As long as the
chain of custody is maintained, an investigator can prove that the log file has not been altered or
modified since its capture.
When an investigator or network administrator moves log files from a server, and after that to an
offline device, he or she should keep track of where the log file went and what other devices it
passed through. This can be done with either technical or nontechnical methods, such as MD5
authentication.
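For example, an MD5 checksum can be recorded when a log file is copied off the server and verified again at every subsequent transfer; the file name here is illustrative:
md5sum ex020101.log > ex020101.log.md5
md5sum -c ex020101.log.md5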
IIS Centralized Binary Logging
Centralized binary logging is a process in which many Web sites write binary and unformatted log
data to a single log file. An administrator needs to use a parsing tool to view and analyze the data.
The files have the extension .ibl, which stands for Internet binary log. It is a server property, so all
Web sites on that server write log data to the central log file. It decreases the amount of system
resources that are consumed during logging, therefore increasing performance and scalability. The
following are the fields that are included in the centralized binary log file format:
• Date
• Time
• Client IP address
• User name
• Site ID
• Server name
• Server IP address
• Server port
• Method
• URI stem
• URI query
• Protocol status
• Windows status
• Bytes sent
• Bytes received
• Time taken
• Protocol version
• Protocol substatus
ODBC Logging
ODBC logging records a set of data fields in an ODBC-compliant database like Microsoft Access
or Microsoft SQL Server. The administrator sets up and specifies the database to receive the data
and log files. When ODBC logging is enabled, IIS disables the HTTP.sys kernel-mode cache. An
administrator must be aware that implementing ODBC logging degrades server performance.
Some of the information that is logged includes the IP address of the user, user name, date, time,
HTTP status code, bytes received, bytes sent, action carried out, and target file.
Tool: IISLogger
IISLogger provides additional functionality on top of standard IIS logging. It produces additional
log data and sends it using syslog. It even logs data concerning aborted Web requests that were
not completely processed by IIS. IISLogger is an ISAPI filter that is packaged as a DLL embedded
in the IIS environment. It starts automatically with IIS. When IIS triggers an ISAPI filter
notification, IISLogger prepares header information and logs this information to syslog in a certain
format. This occurs each time, for every notification IISLogger is configured to handle.
The following are some of the features of IISLogger:
• It generates additional log information beyond what is provided by IIS.
• It recognizes hacker attacks.
• It forwards IIS log data to syslog.
• It provides a GUI for configuration purposes.
Figure 1-1 shows a screenshot from IISLogger.
Importance of Audit Logs
The following are some of the reasons audit logs are important:
• Accountability: Log data identifies the accounts that are associated with certain events. This data
highlights where training and disciplinary actions are needed.
• Reconstruction: Investigators review log data in order of time to determine what happened before
and during an event.
• Intrusion detection: Investigators review log data to identify unauthorized or unusual events.
These events include failed login attempts, login attempts outside the designated schedules, locked
accounts, port sweeps, network activity levels, memory utilization, and key file or data access.
• Problem detection: Investigators and network administrators use log data to identify security
events and problems that need to be addressed.
Syslog
Syslog is a unified audit mechanism used by the Linux operating system. It permits both local
and remote log collection. Syslog allows system administrators to collect and distribute audit data
with a single point of management. Syslog is controlled on a per-machine basis with the file
/etc/syslog.conf. This configuration file consists of multiple lines like the following:
mail.info /var/log/maillog
The format of configuration lines is:
facility.level action
The Tab key is used to define white space between the selector on the left side of the line and the
action on the right side.
The facility is the operating system component or application that generates a log message, and
the level is the severity of the message that has been generated. The action gives the definition of
what is done with the message that matches the facility and level. The system administrator can
customize messages based on which part of the system is generating data and the severity of the
data using the facility and level combination.
The primary advantage of syslog is that all reported messages are collected in a message file. To
log all messages to a file, the administrator can use the wildcard (*) in the facility and level fields of
the selector.
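An illustrative /etc/syslog.conf fragment follows (tabs separate the selectors from the actions; the file paths are examples, and the *.emerg line sends emergency messages to all logged-in users):
mail.info	/var/log/maillog
auth.warning	/var/log/authlog
kern.*	/var/log/kernlog
*.emerg	*
*.*	/var/log/all.log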
Logging priorities are configured in /etc/syslog.conf. Messages can be
logged with priorities such as emerg (highest), alert, crit, err, warning, notice, info, or debug
(lowest). Events such as bad login attempts and the user’s last login date are also recorded. If an
attacker logs into a Linux server as root using the secure shell service and a guessed password, the
attacker’s login information is saved in the syslog file.
It is possible for an attacker to delete or modify the /var/log/syslog message file, wiping out the
evidence. To avoid this problem, an administrator should set up remote logging.
Remote Logging
Centralized log collection simplifies both day-to-day maintenance and incident response, as
it brings the logs from multiple machines together in one place. There are numerous
advantages of a centralized log collection site, such as more effective auditing, secure log storage,
easier log backups, and an increased chance for analysis across multiple platforms. Secure and
uniform log storage might be helpful in case an attacker is prosecuted based on log evidence. In
such cases, thorough documentation of log handling procedures might be required.
Log replication may also be used to audit logs. Log replication copies the audit data to multiple
remote logging hosts in order to force an attacker to break into all, or most, of the remote-logging
hosts in order to wipe out evidence of the original intrusion.
Preparing the Server for Remote Logging
The central logging server should be set aside to perform only logging tasks. The server should be
kept in a secure location behind the firewall. The administrator should make sure that no
unnecessary services are running on the server. Also, the administrator should delete any
unnecessary user accounts. The logging server should be as stripped down as possible so that the
administrator can feel confident that the server is secure.
Configuring Remote Logging
The administrator must run syslogd with the -r option on the server that is to act as the central
logging server. This allows the server to receive messages from remote hosts via UDP. There are
three files that must be changed:
• In the file /etc/rc.d/init.d/syslog, a line reads:
SYSLOGD_OPTIONS="-m 0"
The administrator must add the -r flag to the options being passed to syslog:
SYSLOGD_OPTIONS="-m 0 -r"
The -r option opens UDP port 514 and makes the syslog daemon listen for incoming log
information.
• In the file /etc/sysconfig/syslog, there is a line similar to the above line. The administrator needs
to add the -r flag to this line also.
• The administrator needs to add the syslog daemon service to the
/etc/services file: syslog 514/udp
The administrator must run the following command after altering the three files:
/sbin/service syslog restart
A reference should appear in the /var/log/messages file indicating that the remote syslog server is
running.
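As a quick check, the administrator can confirm that the daemon is now listening on UDP port 514:
netstat -an | grep 514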
The syslog server can be added to the /etc/syslog.conf file on the client, which can preserve an
audit trail even if a cracker does an rm -rf.
Other servers can be configured to log their messages to the remote server by modifying the action
field in the syslog.conf as:
auth.* @myhost
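To forward all messages rather than only authentication messages, a wildcard selector can be used (the host name is illustrative):
*.* @loghost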
Tool: Syslog-ng
Syslog-ng is a flexible and scalable audit-processing tool. It offers a centralized and securely stored
log for all the devices on a network.
The following are some of the features of Syslog-ng:
• It guarantees the availability of logs.
• It is compatible with a wide variety of platforms.
• It is used in heavily firewalled environments.
• It offers proven robustness.
• It allows a user to manage audit trails flexibly.
• It has customizable data mining and analysis capabilities.
• It allows a user to filter based on message content.
8. Analysis of Laser and Inkjet Prints Using Spectroscopic
Methods for Forensic Identification of Questioned Documents
Lukáš Gál, Michaela Belovičová, Michal Oravec, Miroslava Palková, Michal Čeppan
Slovak University of Technology in Bratislava, Faculty of Chemical and Food
Technology, Institute of Polymer Materials, Department of Graphic Arts Technology and
Applied Photochemistry
Abstract:
The spectral properties in UV-VIS-NIR and IR regions of laser and inkjet prints were studied for
the purposes of forensic analysis of documents. The procedures of measurements and processing
of spectra of printed documents using fibre optics reflectance spectroscopy in UV-VIS and NIR
region, FTIR-ATR with diamond/ZnSe and germanium crystals were optimized. It was found
that the shapes of spectra of various black laser jet prints and inkjet prints generally differ in the
spectral regions UV-VIS-NIR and IR. However, the resolution of individual spectra, and hence
of individual printers, based on simple visual comparison is not reliable enough. The use of these
spectra for identification of individual printers should therefore be enhanced by
computational chemometric methods.
Keywords: document, spectroscopy, laser, inkjet
Introduction
A major legal step with strong implications for questioned-document examination was taken in 1562,
when the English Parliament decreed forgery a statutory offense. The damages incurred by forgery
were considered so severe that in 1634 it was made a capital offense, which it remained for more
than two hundred years.
Thus, the crime of forgery was established in the sixteenth century, and in 1684 it was ruled that
“comparison of hands is without doubt good evidence in cases of treason” (R v. Hayes, 10 State
Tr. 307).
Protecting copyright and verifying authenticity are important in every aspect of our lives. Documents
such as agreements, wills, property titles, judicial papers, and educational certificates, among other
documents commonly used in the economy and society, are in use every day. A document is any material
that contains printed information conveying some meaning or message. With the growth of new
technologies, the creation of documents has increased as well. However, the exchange principle, now
called Contact Traces, first articulated by Edmond Locard in 1910, gives us an advantage in this
direction: "One cannot come into contact with an environment without changing it in some way."
Graphic documents represent a complex system consisting of the material structure of the graphical
information itself (inks, toners, colours…) and the substrate (usually paper), with complex mutual
interactions between components that are reflected in the document's properties.
The authenticity and other characteristics of graphical documents are approached from several
directions. A basic process is material analysis of documents, i.e., determination of the material
characteristics of document components, substrates, and inks, and of the layer structure in the
case of multi-layered documents, which can help to investigate and clarify the facts. In practice,
a wide range of physical and chemical methods is used to study and analyze the composition and
condition of graphical documents. Currently used analytical techniques for the investigation of
inks and writing-instrument pastes (TLC, HPTLC, GC-MS, and HPLC) [2-4] require pre-treatment of a
sample – separation of the analysed material from the carrier (mostly paper). This approach brings
several disadvantages – risk of changes to the chemical structure during separation of dyes, poor
solubility of some components of writing materials in the extraction reagent, and irreversible
damage to the integrity of the studied material.
Due to the character of the studied objects, methods that allow the greatest extent of non-destructive
and micro-destructive investigation have special importance; among these, molecular spectroscopy
methods and other optical methods (colorimetry for objective dye description, photography and
micro-photography in different spectral regions, imaging photometry, and image analysis) are
preferred [5-9]. Various applications of spectroscopic methods to the analysis of inks [5, 8, 10],
dating of ballpoint pen inks [11, 12], analysis of paper [11, 12], toners for copiers and laser
printers [13, 14], as well as forensic analysis of other materials [15], have been described in the
literature.
The aim of this work was to study the spectral properties of laser and inkjet prints and to assess
the possibilities of non-destructive molecular spectroscopy methods for identifying laser toners and
inkjet inks.
Experimental methods
Samples of prints were prepared as follows:
A model target which consists of solid surfaces, lines corresponding to the thickness of the font
size 8, 10 and 12 points and characters of size 8, 10 and 12 points was designed.
Subsequently, a set of prints from various types of inkjet and laser printers was prepared, using the
same type of office paper and standard print-quality settings for all printers.
For inkjet prints, only printing in black was selected.
For laser printers, samples with black printing settings were printed; for color laser printers, samples
with CMYK printing settings were printed as well. Number of samples analysed: 15 prints
of different inks for inkjet prints and 20 prints of different toners for laser prints.
Methodology of work and examination methods
UV-VIS-NIR spectroscopy
The reflectance spectra of inkjet prints in the UV-VIS-NIR region were measured on an Ocean Optics
fibre-optic reflectance spectroscopic system consisting of an HR 4000 spectrometer, a DH-2000-BAL
UV-VIS-NIR light source, and a standard adapter for measurement of reflectance spectra with 45/45
geometry. For each measurement, the detector was calibrated on the blank paper near the inked area.
In this way, influences of the paper were largely excluded. The measured reflectance spectra R(λ)
were converted to optical density spectra D(λ) according to Eq. (1):
D(λ) = log(1/R(λ)) (1)
Directly obtained spectra contain a lot of points and are noisy and almost unusable without
processing.
The original spectra were interpolated in the wavelength range 220-1050 nm with a step of 2 nm. Then
the spectra were smoothed, without significant influence on their shape, using a Savitzky-Golay filter
with a 15-point window and a second-order polynomial. Finally, the optical density spectra were
normalized to the interval 0-1. This shape enhancement makes the spectra more suitable for further
analysis.
FTIR-ATR spectroscopy
The reflectance spectra of laser prints in the infrared (IR) region were measured on an Excalibur
FTS 3000MX (Digilab, USA) spectrometer with an ATR adapter with a diamond crystal. The obtained
spectra of laser prints were processed and normalized in the same way as the UV-VIS-NIR spectra of
the inkjet prints above.
The reflectance spectra of inkjet prints in the IR region are practically useless, because the
absorption signals of the inks, which penetrate deeply into the paper, are overlapped by the strong
absorption signal of cellulose.
Results and discussion
Laser prints
Spectra were grouped according to their differing absorption in the wavenumber
range 1500-600 cm⁻¹.
FTIR spectra of individual laser prints are generally different. Simple visual distinction is not
unambiguous, however, and numerical chemometric methods will have to be used to assign spectra to
individual prints.
Figure: FTIR spectra of two groups of laser prints.
IR spectra of different laser toners differ to varying extents. The differences between the spectra
of laser prints from different producers are more significant.
Figure: Comparison of FTIR spectra of laser prints from different producers.
Inkjet prints
Comparison of the normalised optical density spectra (Figure 6) from four different types of prints
shows that the shapes of the spectra of Epson inks differ significantly from the shapes of the
spectra of Canon inks. The differences are most noticeable in the spectral range 600-1050 nm.
Based on these differences in shape, the possibility of resolving individual brands of inks can
be presumed.
Figure: Spectra from four different types of prints.
The spectra of two black Epson inks used in different types of inkjet printers were also compared.
There is a significant bathochromic shift into the near-infrared region in the spectrum of the ink
from the Epson PM-D800 printer, so the spectra of these two inks can be resolved.
Figures: Spectra of black inks of different Epson inkjet printers; spectra of black inks of different Canon inkjet printers.
The spectra of two black Canon inks used in different types of Canon inkjet printers were compared
as well. The shapes of the spectra differ mainly in the spectral range 550-1050 nm, so the resolution
of these spectra, and hence of the inks, is possible.
9. Investigating Trademark and Copyright Infringement
Trademark Investigations
Brand owners invest a lot of time and resources in trademarks, and they deserve to have
competent assistance to make sure that they aren’t wasting any capital. With the trademark
investigation services of Kessler International at your disposal, you’ll be able to determine if
someone else already holds the trademark you’re interested in or if it’s been abandoned. Our
trademark in-use investigation protocols are also designed to inform you if anyone is profiting
from valuable intellectual property without the permission of the owners.
Forewarned Is Forearmed
Before seriously committing to deploying any trademark, prominent organizations or their
attorneys enlist us so that they can discover if the coast is legally clear. It’s far better to allow us
to provide you with thorough, reliable info beforehand rather than finding yourself caught up in
litigation later on. We'll let you know:
• If anyone is currently using a similar mark
• How long a given mark has been in existence
• How extensively a trademark has been used
• The geographical distribution of a trademark
Secure Your Rights
Whether through ignorance or malice, there are many people and organizations who might
unfairly employ the fruits of other people’s labor toward their own ends. We’ll endeavour to
prevent this from happening with our intellectual property investigation efforts. If we uncover a
case of someone illicitly profiting from the property of you or your clients, we’ll take steps to
help you resolve the issue either amicably or through the legal system.
About Our Intellectual Property Investigations
In order to conduct a trademark investigation, we employ sophisticated tools, like our proprietary
Web.Sweep and News.Sweep programs. They enable us to do cost-effective trademark searches
across the internet as part of a comprehensive trademark in use investigation. We aren’t
restricted to searching only in certain geographical areas or markets; we operate around the
world. We supplement these measures with photographic evidence and undercover investigations
whenever necessary. After gathering all the information we need, we’ll compile a detailed report
of our findings. Our professional researchers and investigators have a wealth of knowledge and
experience, so they’re well suited to the task of creating documents that lay out exactly what you
wish to ascertain. We pass all our reports along to members of our senior staff, who review them
for accuracy before handing them over to our clients.
Trusted by Large Enterprises
We’ve been consulted by Fortune 500 companies, who appreciate the fact that we can safeguard
their brands with a meticulous trademark in use investigation. We’ve been serving our clients,
including law firms and corporate counsel, for more than two decades. If you believe that it’s
time for a diligent trademark investigation, then Kessler International is here to lend you a hand.
Contact us today to learn more about how we can efficiently defend intellectual property rights
and prevent poor investments.
1. Case Studies
Our client, a proprietor in the wines and spirits industry, retained the services of Kessler
International to conduct a trademark investigation with respect to an upstart liquor company
producing a beverage found to be similarly named, and using a similar logo as that of our client.
Kessler conducted research, contacted key individuals, and provided the necessary findings to
our client. In turn, it was requested that Kessler provide a supplemental service in light of our
investigation’s results. As such, Kessler monitored the growth and expansion of the particular
trademarked beverage, and consistently forwarded the results to our client to proceed in a
manner they saw fit. Recently, Kessler was retained to conduct a trademark investigation on
behalf of a high-end law firm. In this particular case, the law firm had been contacted by a hotel
group with a growing concern that a competing entity constructing hotels would feature a
currently in-use trademark. Our investigators conducted Internet and social media research to
acquire any and all intelligence available on the competitor. An undercover investigator was also
sent to the offices of the competitor to confirm or deny their very existence. Upon providing this
information to our client, it was requested that Kessler perform an on-site visit to confirm the
extent of trademark infringement. Kessler then sent an investigator to visit the construction sites
to obtain additional information regarding any and all infringement of our client’s mark. In one
location the currently in-use trademark was found represented within a fully-operational hotel,
while the second location was found to still be under construction. Our in-depth findings were
forwarded to our client. Our written report was further supplemented by top quality photographs
of our on-site findings.
2. Trademark & Copyright Infringement Investigation
The idea for your new corporate logo is decidedly brilliant. So you pursue the next step and hire
an artist, who for a hefty price will immortalize your firm forever. Right? Wrong! Every day,
many companies just like yours make the mistake of committing large amounts of time and
money to a trademark only to find out that another firm across the country is using the exact
same image to brand their company name. How can you be sure this doesn’t happen to you?
Make sure you get trademark clearance from ADSPL. We specialize in the areas of trademark
search and trademark investigation.
3. Trademark Investigation – Secure Your Brand Identity
What if you don’t? What if you go ahead with the trademark generation without a trademark inuse search or intellectual property investigation, and later find it to be identical to an already
existing trademark? The ramifications could be devastating, and may undermine your firm’s
financial stability. Not only would you waste large amounts of capital unnecessarily; you'd also
be legally liable to pay damages to the original trademark owner, leading to financial losses that
could devastate your firm’s capital structure.
4. Trademark In Use Investigations
Should it be determined that certain trademarked materials are currently in use, that does not
mean ADSPL's investigation stops. ADSPL has developed a number of strategies, including
undercover product acquisition, to verify usage or non-usage of a particular trademarked item. In
addition, ADSPL established the Trademark Acquisition Division to function as a liaison
should you so decide to purchase an existing trademark. A simple phone call to ADSPL and a
meeting with a member of our expert trademark investigation research team in our Trademark
Acquisition Division will lay the foundation for a complete international market survey of your
proposed logo or trademark. We’ll let you know if your trademark is in use, and give you the
name and contact information of the trademark owner. We’ve found that just because a
trademark is owned doesn’t necessarily mean it’s being used. We’ll help you ascertain the usage
or non-usage, and act as a liaison if you decide to purchase an existing trademark.
5. Our Trademark Investigations are Worldwide
In performing trademark investigations, ADSPL is able to determine the first date of a
trademark’s use; obtain documents from governmental agencies regarding any and all filings of
the trademarked material; and provide information regarding the supply and
distribution of a given trademarked product. As ADSPL has operations worldwide, the trademark
investigations we conduct are not limited to a specific geographical region. As our fee structure
is considered quite competitive, the charges associated with trademark investigations usually
remain the same regardless of our client’s location.
6. Protect Your Business from Trademark Infringement
According to recent statistics, the epidemic of trademark infringements is growing at an
exponential rate. Through advancements in technology and the explosive growth of the Internet,
your prized intellectual property is exposed, more than ever before, to unscrupulous
individuals looking to cash in on your good name and reputation. These infringements, if not
detected early, can lead to dilution and unauthorized use of your properties without compensation.
Intellectual property and trademark infringement suits can be difficult, exhausting your time and
resources.
7. Technology of Trademark Investigations
In an effort to combat trademark infringement, ADSPL's researchers and investigators have an
array of tools at their disposal to make sure that any property that you decide to trademark is safe
from infringement. At ADSPL, we’re also able to conduct full national and international market
surveys of your existing trademarks. Our research has no bounds. All markets and industries can
be investigated for illegal use of your intellectual property. ADSPL has been consulted by a
number of prominent industry leaders not only to confirm whether a trademark is available, but also
to verify that in-use trademarks are being appropriately represented. Every ADSPL trademark
investigation is compiled into a report written by a competent team of researchers and
investigators with editorial experience. Each report is reviewed by these individuals, and then
subsequently reviewed for accuracy by members of the Senior Staff to ensure that our clients
receive only the most accurate results. Let the professional investigators and researchers at
ADSPL handle your trademark investigations. We will protect you from infringements, and
make sure you’re not found guilty of the same yourself, with efficient and cost effective
solutions. When you want total protection for your prized intellectual property, call ADSPL
today.
10. Investigating Child Pornography Cases
At the Neal Davis Law Firm, we handle a number of child pornography cases, and we’re often
asked why it takes so long for the Government to conduct a computer forensic investigation
either before or after a suspect’s arrest. We’ve seen a general pattern of investigation in these
cases. It begins with law enforcement suspecting someone of possessing child pornography,
usually because law enforcement has observed the suspect uploading to or downloading from a
known child porn site. Law enforcement obtains a search warrant, goes to the suspect’s home,
and seizes any items—e.g., hard drives, cell phones, or computers—that could store digital
media, including child porn. Law enforcement typically tries to interview the suspect and then
decides whether to make an arrest. If the suspect is not arrested at that time, and child porn is
discovered after forensic evaluation of their digital media, then the suspect will be arrested later,
usually within 6 to 12 months. Regardless of whether the suspect is arrested immediately or
later, law enforcement submits the items suspected to contain pornography to a computer
forensics unit to be analyzed. Because of the backlog of evidence waiting to be analyzed—child
porn cases are much more prevalent than the public believes—this process can take several
months. We’re currently seeing at least a six-month wait, sometimes up to a year, for forensic
analysis to occur.
HOW THE GOVERNMENT INVESTIGATES COMPUTER CRIMES
Each child pornography image and video can be identified by a unique “hash” number. This is a
digital fingerprint computed from the file’s contents, in practice unique to each image and video,
sort of like the “Bates stamp” used to number documents submitted as evidence. The hash numbers
for images and videos are typically
compared to a Government database of known child pornography victims. If the hash numbers
match, then the Government can tie a specific pornographic image or video with a known victim.
For example, suppose a particular image has a hash number of I23423594985043. The
Government runs this number through a database, matches it with a 12-year-old victim named
“Alexa” who was photographed in Ukraine in the mid-1990s, and who Ukrainian police
have already confirmed was a minor at the time. If the hash numbers do not match but it appears
there is a child involved, then the Government will turn to the question of whether the suspect
actually made the pornography. The Government will look for anything in the image or video—
location, prescription bottles, or anything else—that would confirm whether the suspect made the
child pornography.
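To make the matching step concrete, the following minimal Python sketch shows how hash-based
file identification works in general. Everything specific in it is an assumption made for
illustration: the KNOWN_HASHES table, its obviously fake digest, and the /mnt/evidence path are
invented, and the real victim-identification databases used by law enforcement (which may rely
on MD5, SHA-1 or other algorithms rather than the SHA-256 shown here) are not publicly available.

import hashlib
from pathlib import Path

# Hypothetical lookup table mapping hex digests to case records.
# The single entry below is a placeholder, not a real digest.
KNOWN_HASHES = {
    "0" * 64: "record for a previously identified victim",
}

def sha256_of_file(path: Path) -> str:
    """Compute the SHA-256 hex digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan(root: Path) -> None:
    """Hash every file under root and report matches against the known set."""
    for item in root.rglob("*"):
        if item.is_file():
            h = sha256_of_file(item)
            if h in KNOWN_HASHES:
                print(f"MATCH {item}: {KNOWN_HASHES[h]}")

if __name__ == "__main__":
    scan(Path("/mnt/evidence"))  # hypothetical mount point of a seized disk image

Because a cryptographic hash is computed deterministically from a file’s contents, a matching
digest ties a seized file to one already catalogued, while a single changed byte yields a
completely different digest; this is why, as described above, investigators fall back to
examining the content itself when no hash matches.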
IMPORTANCE OF HIRING A COMPUTER CRIME LEGAL DEFENDER
It is imperative that a person suspected or charged with child pornography hire the right attorney.
All kinds of defensive issues can arise in computer searches, from whether the initial search
warrant was legal and based on probable cause, to whether the computer forensics are valid.
Contact expert computer crime defense attorneys at the Neal Davis Law Firm. Our Houston
criminal defense law firm specializes in charges involving computer crimes, including child
pornography and sex offenses. We can get you real help, right away.
What’s Involved in the Investigation Stage of a Child Pornography Case?
If you are innocent, being wrongly accused of possessing, making or distributing indecent
images of children can be an extremely traumatic experience. Whilst child pornography as a sexual
offence breaks down into the above three charges, it’s vital that you understand what’s involved
during an investigation of this challenging nature. Our unique advice and assistance will allow
you to determine the best course of action to take. This insight may lead to an understanding of
the impact that these allegations can have on your own life, as well as on the people around you.
Our experts classify the latter as “collateral damage.”
How Long Do Child Pornography Investigation Cases Last?
The fight against any allegation should begin as soon as you believe you have been accused of
breaking the law by committing a sexual offence. With the possibility of imprisonment, damage
to reputation, a Sexual Offences Prevention Order and having your name added to the Sex
Offenders Register, cases can be both complex and ongoing. In some instances, it may take 18
months for a case to reach a conclusion. This might be a decision to prosecute or no further
action (NFA). However, each case will vary depending on a number of factors, including the
severity of the allegations made and the legal defences available. In Your Defence are specialist
lawyers who advise suspects daily as to the best option in the prevailing circumstances.
The UK Home Secretary and Conservative politician, Theresa May, has outlined plans to amend
existing procedures, so that sexual offence cases are resolved within a shorter timeframe. A
six-month limit has been suggested by politicians, government agencies and our own lawyers. At
present however, the timespan of all individual cases varies across the 43 police forces of
England and Wales. Cases are supposed to be resolved with ‘due expedition.’ The latest position
adopted by the Home Secretary is that there should be a maximum of six months on bail, but it is
unclear as to whether this will include the modern approach of being ‘under investigation’ after
an initial ‘voluntary’ interview under caution.
Child Pornography - The Investigation Process
During the investigation stage, the police need evidence to back up any claims that you
are involved with the possession, making or distribution of indecent images of children.
To obtain this, they can apply to a magistrate for a warrant to search premises. This is nearly
always granted and you may be awoken by a dawn raid on your home or business address.
Whilst executing their search, the police would have the power to seize computers, hard drives,
tablets, mobile phones and other devices which they believe may contain evidence of child
pornography or extreme pornography.
The examination of this equipment can take a long time and the wait for an outcome during this
process can be excruciating. Some days, individuals on bail or under suspicion may be depressed
by the mere fact that the investigation is hanging over them like a ‘Sword of Damocles.’ Please
be alert to the possibility of the police making initial contact with you with the intent of
arranging an ‘informal or voluntary chat’. This usually results in a formal interview under
caution at a police station or custody centre if you decide to accept the request. It is crucial that a
suspect contacts a specialist law firm, such as In Your Defence Ltd, as soon as there is the
merest hint or notification of police interest.
Negative Impact and Damaging Implications
Whilst any allegations relating to child pornography can be extremely stressful, there are a vast
number of implications associated with this type of crime. Besides the threat of imprisonment,
and registration as a sex offender, a sexual offences case can result in serious collateral damage
and social stigma. Alienation from family, friends and business colleagues is commonplace.
Often the only support is the extremely confidential service provided by In Your Defence Ltd,
Solicitors. The process of an investigation may affect your whole life, making it extremely
difficult to cope and to maintain or secure future employment. If the case is mentioned in the
media or the details are released, there may be a reaction from the local community.
Other damaging implications to you and those around you as a result of any allegations can
include:
• Feelings of depression or suicide
• Pressure on close friends and family
• Social services and schools being alerted
• Restrictions on operating your own business
• Child contact issues
• Strain on support services and the NHS
• Informal conditions being set by the authorities
• Children being interviewed for clues/evidence
• Authorities instructing you to move out of the family home
The strain and pressure you can experience from this situation can last for a number of months or
years, even once a case is concluded and you try to rebuild your life.