Step-by-Step Guide to Getting Started with Microsoft Windows Server Update Services on Windows Small Business Server 2003

Microsoft Corporation
Published: December 2005 (Version 2)
Author: Tim Elhajj

Abstract

Microsoft® Windows Server™ Update Services (WSUS) provides a comprehensive solution for managing updates within your network. This document tells you how to deploy WSUS on your network, including installing WSUS on computers running Windows® SBS 2003, configuring WSUS to obtain updates, configuring client computers to install updates, and approving and distributing updates. For the most up-to-date product documentation for Windows SBS 2003, see the Microsoft Web site at http://go.microsoft.com/fwlink/?LinkId=33326.

The information contained in this document represents the current view of Microsoft Corporation on the issues discussed as of the date of publication. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information presented after the date of publication. This White Paper is for informational purposes only. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS DOCUMENT. Complying with all applicable copyright laws is the responsibility of the user. Without limiting the rights under copyright, no part of this document may be reproduced, stored in or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording, or otherwise), or for any purpose, without the express written permission of Microsoft Corporation. Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property rights covering subject matter in this document. Except as expressly provided in any written license agreement from Microsoft, the furnishing of this document does not give you any license to these patents, trademarks, copyrights, or other intellectual property. © 2005 Microsoft Corporation. All rights reserved. Microsoft, Windows, and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. All other trademarks are property of their respective owners.

Contents

Step-by-Step Guide to Getting Started with Windows Server Update Services on Windows Small Business Server 2003
Step 1: Review WSUS Installation Requirements for Windows SBS
  Hardware Requirements
  Software Requirements
  Disk Requirements and Recommendations
  Automatic Updates Requirements
Step 2: Install WSUS
Step 3: Configure Your Network Connection
Step 4: Synchronize Your Server
Step 5: Update and Configure the Client Computers
Step 6: Create Computer Groups
Step 7: Approve Updates

Step-by-Step Guide to Getting Started with Windows Server Update Services on Windows Small Business Server 2003

To download the most recent version of this documentation, visit the Microsoft Web site (http://go.microsoft.com/fwlink/?LinkId=51211).

Microsoft® Windows Server™ Update Services (WSUS) provides a comprehensive solution for managing updates within your network. This document offers step-by-step instructions for basic tasks involved with deploying WSUS on your network. Use this guide to perform the following tasks:

Install WSUS on a computer running the Windows® Small Business Server (SBS) 2003 server software with Service Pack 1 (SP1).
Configure WSUS to obtain updates from Microsoft.
Configure client computers to install updates from WSUS.
Approve, test, and distribute updates.

Although WSUS includes several ways to accomplish each of these tasks, this guide offers only a single way to accomplish them. If alternatives are possible, a Note calls out these alternatives and points to more comprehensive instructions in either the "Deploying Microsoft Windows Server Update Services" white paper or the "Microsoft Windows Server Update Services Operations Guide" white paper. If you have already installed the predecessor to WSUS, Software Update Services (SUS), you should review this guide for any issues that are specific to Windows SBS, and you should also review the "Step-by-Step Guide to Migrating from Software Update Services to Windows Server Update Services" to understand how to migrate to WSUS. The latest version of these documents is available on the Tech Center site for WSUS at http://go.microsoft.com/fwlink/?linkid=41171.

Step 1: Review WSUS Installation Requirements for Windows SBS

This guide tells you how to install WSUS on Windows SBS. The following requirements are the minimum you need to install WSUS with the default options. You can find hardware and software requirements for installations that do not use the default options in "Deploying Microsoft Windows Server Update Services" at the Microsoft Web site (http://go.microsoft.com/fwlink/?linkid=41171).

Hardware Requirements

WSUS has higher hardware requirements than Windows SBS. If your computer meets only the minimum system requirements for Windows SBS, you might notice decreased performance in WSUS. The minimum and recommended hardware requirements for WSUS are as follows:

750 megahertz (MHz) processor (1 gigahertz (GHz) recommended)
512 megabytes (MB) RAM (1 gigabyte (GB) recommended)

Software Requirements

To install WSUS with the default options, you must first install Windows SBS 2003 with SP1. Windows SBS 2003 with SP1 includes all of the software that you need to install WSUS. For more information about the software requirements for WSUS, see "Deploying Microsoft Windows Server Update Services" at the Microsoft Web site (http://go.microsoft.com/fwlink/?linkid=41171). For more information about installing Windows SBS 2003 with SP1, see "Getting Started" at the Microsoft Web site (http://go.microsoft.com/fwlink/?LinkId=51143).
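As a quick, optional check that is not part of the original guide, you can confirm the operating system edition and service pack level from a command prompt before you begin. This is a minimal sketch using the systeminfo and findstr tools included with Windows Server 2003; look for the Windows Small Business Server edition and Service Pack 1 in the output.

    REM Show the operating system name and version, including the service pack level
    systeminfo | findstr /C:"OS Name" /C:"OS Version"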
Disk Requirements and Recommendations To install WSUS, the file system of the server must meet the following requirements: Both the system partition and the partition on which you install WSUS must be formatted with the NTFS file system. The system partition must have a minimum of 1 GB free space. The volume where WSUS stores content must have a minimum of 6 GB free space; 30 GB is recommended. Note The 30 GB recommendation is only an estimate based on a number of variables, such as the number of updates released by Microsoft for any given product and how many products a WSUS administrator selects. Although 30 GB should work 9 for most customers, a worst-case scenario might require more than 30 GB of disk space. If you require more than 30 GB of disk space, see "Windows Server Update Services Operations Guide" for guidance on how to point WSUS to a larger disk. "Windows Server Update Services Operations Guide" is available on the Tech Center site for Windows Server Update Services at http://go.microsoft.com/fwlink/?linkid=41171. The volume where WSUS Setup installs Windows SQL Server 2000 Desktop Engine (WMSDE) must have a minimum of 2 GB free space. Automatic Updates Requirements Automatic Updates is the client component of WSUS. Automatic Updates has no hardware requirements, other than the client computer must be connected to the network. You can use Automatic Updates with WSUS on computers that are running any of the following operating systems: Microsoft Windows 2000 Professional with Service Pack 3 (SP3), Service Pack 4 (SP4), or Service Pack 5 (SP5); Windows 2000 Server with SP3, SP4, or SP5; or Windows 2000 Advanced Server with SP3, SP4, or SP5. Microsoft Windows XP Professional, with or without Service Pack 1 or Service Pack 2. Microsoft Windows Server 2003, Standard Edition, Enterprise Edition, Datacenter Edition, or Web Edition, or any of these operating systems with SP1. Note For operating systems not listed above, you can try to download updates manually by going directly to the Download Center at http://go.microsoft.com/fwlink/?LinkId=51471. Step 2: Install WSUS After reviewing the installation requirements, you are ready to install WSUS. You must log on to the server you plan to install WSUS on by using an account that is a member of the local Administrators group. Only members of the local Administrators group can install WSUS. The following procedure uses the default WSUS installation options for Windows SBS 2003 with SP1. These options include using Windows SQL Server 2000 Desktop 10 Engine (WMSDE) as the WSUS database software, storing updates locally, and using the Internet Information Services (IIS) custom Web site on port 8530. You can find procedures for custom installation options, such as using different database software, in “Deploying Microsoft Windows Server Update Services” at the Microsoft Web site (http://go.microsoft.com/fwlink/?linkid=41171). Important If you plan to install WSUS on a server that has Windows Update Services Beta 1 or Beta 2 installed, you first need to uninstall the earlier version by using Add or Remove Programs in Control Panel. To download the WSUS installer to your server 1. On the computer running Windows SBS, create a folder named WSUSFiles on the local hard disk. 2. Read how to register to download the latest version of WSUSSetup.exe at the Microsoft Web site (http://go.microsoft.com/fwlink/?LinkId=51144). 3. 
Answer all of the required questions on the Windows Server Update Services Registration Wizard Web page, and then click Continue. 4. When the file download security warning appears, click Save. 5. In the Save As dialog box, browse to the WSUSFiles folder, and then click Save. To prepare the WSUS database 1. Extract the WSUS Setup files. a. Click Start, click Run, and then type C:\WSUSFiles\WSUSSetup.exe /X, where C: is the letter of your local hard disk. b. When prompted for a location to extract the files, select the WSUSFiles folder. 2. Type the following command, where C: is the letter of your local hard disk, and then press ENTER: CD C:\WSUSFiles\wmsde 3. Type the following command with consideration to the points listed below, and then press ENTER: Sqlrun03.msi InstanceName=WSUS BlankSAPwd=1 Reboot=ReallySuppress DisableNetworkProtocols=1 DisableAgentStartup=1 DisableThrottle=1 11 If you want to specify the drive letter where the database instance will be located, you must add the DataDir="Path" argument to the command line, where Path is the path to the target directory in the file system. The command line implies that your WSUS database will have a blank password. However, during the actual installation of WSUS, a randomly generated password is set. You do not need to specify a password. The command line is not case sensitive. 4. Start the MSSQL$WSUS service. To do this, click Start, click Run, and then type Services.msc. Right-click MSSQL$WSUS, and then click Start. If the service is not listed, rerun the command in Step 3 of this procedure. To install WSUS 1. Click Start, click Run, and then type C:\WSUSFiles\WSUSSetup.exe, where C: is the letter of your local hard disk. 2. On the Welcome page of the wizard, click Next. 3. Review the license agreement carefully. To continue, you must accept the agreement. 4. On the Select Update Source page, you can specify where the client computers get updates. If you select the Store updates locally check box, updates are stored on the server and you can select a location in the file system to store updates. If you do not store updates locally, the client computers connect to Microsoft Update to get approved updates. Keep the default option to store updates locally, either choose a location to store updates or accept the default location, and then click Next. Select Update Source Page 12 5. On the Database Options page, keep the default options, and then click Next. Because you installed WMSDE in the previous procedure, changing the options on this page of the wizard has no effect. 6. On the Web Site Selection page, specify a Web site for WSUS to use. This page also lists two important URLs based on this selection: the URL to which you will point WSUS client computers to get updates, and the URL for the WSUS console where you can configure WSUS. Keep the default option and click Next. Web Site Selection page 13 7. On the Mirror Update Settings page, keep the default option and click Next. If you want to use multiple WSUS servers in a central management topology, see “Deploying Microsoft Windows Server Update Services” at the Microsoft Web site (http://go.microsoft.com/fwlink/?linkid=41171). Mirror Update Settings Page 14 8. On the Ready to Install Windows Server Update Services page, review the selections, and then click Next. Ready to Install Windows Server Update Services page 15 9. If the final page of the wizard confirms that WSUS installation was successfully completed, click Finish. 
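The wizard is the supported way to install WSUS, but as an optional, hedged sketch (not part of the official procedure) you can confirm from a command prompt that the WMSDE instance is running and open the console on the default port 8530:

    REM Check the state of the WMSDE database instance created for WSUS
    sc query MSSQL$WSUS

    REM Start the instance manually if it is reported as STOPPED
    net start MSSQL$WSUS

    REM Open the WSUS console in the default browser on the server itself
    REM (the default custom Web site listens on port 8530)
    start http://localhost:8530/WSUSAdmin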
Note After you install WSUS, you can delete the C:\WSUSFiles folder. However, do not delete the C:\WSUS folder, which is created when WSUS is installed. Step 3: Configure Your Network Connection After installing WSUS, you are ready to access the WSUS console in order to configure WSUS and get started. By default, WSUS is configured to use Microsoft Update as the 16 location for obtaining updates. If you have a proxy server on your network, use the WSUS console to configure WSUS to use the proxy server. If you have a firewall between WSUS and the Internet, you might need to configure the firewall to ensure that WSUS can obtain updates. If you are using Internet Security and Acceleration (ISA) Server and you have not created any additional rules beyond the default configuration, you do not have to configure your firewall. Note Although you must have Internet connectivity to download updates from Microsoft Update, WSUS offers you the ability to import updates on to networks that are not connected to the Internet. For more information, see “Deploying Microsoft Windows Server Update Services” at the Microsoft Web site (http://go.microsoft.com/fwlink/?linkid=41171). Step 3 contains the following procedures: Configure your firewall so that WSUS can obtain updates. Open the WSUS console. Configure proxy-server settings so that WSUS can obtain updates. To configure your firewall If you have a firewall between WSUS and the Internet—this could be a hardware firewall or ISA Server—you might need to configure the firewall to ensure that WSUS can obtain updates. To obtain updates from Microsoft Update, the WSUS server uses port 80 for the HTTP protocol and port 443 for the HTTPS protocol. The ports that WSUS uses to communicate with Microsoft Update are not configurable. Note If your organization does not allow ports 80 and 443 and protocols HTTP and HTTPS open to all addresses, then for more information about how to configure your firewall, see “Deploying Microsoft Windows Server Update Services” at the Microsoft Web site (http://go.microsoft.com/fwlink/?linkid=41171). To open the WSUS console On your server, click Start, point to All Programs, point to Administrative Tools, and then click Microsoft Windows Server Update Services. 17 Note You must be a member of either the WSUS Administrators or the local Administrators security groups on the server on which WSUS is installed in order to use the WSUS console. If you do not add http://WSUSWebSiteName to the list of sites in the Local Intranet zone in Internet Explorer in Windows SBS 2003, you might be prompted for credentials each time you open the WSUS console. If you change the port assignment in IIS after you install WSUS, you need to manually update the shortcut that is on the Start menu. You can also open the WSUS console from Internet Explorer on any server or computer on your network by entering the following URL: http://WSUSServerName:8530/WSUSAdmin To specify a proxy server 1. On the WSUS console toolbar, click Options, and then click Synchronization Options. 2. In the Proxy server box, select the Use a proxy server when synchronizing check box, and then type the proxy server name and port number in the corresponding boxes. If you are using ISA Server in its default configuration on your server, enter the name of the server and port 8080. Proxy server box 18 3. 
If you want to connect to the proxy server by using specific user credentials, select the Use user credentials to connect to the proxy server check box, and then type the user name, domain, and password of the user in the corresponding boxes. If you want to enable basic authentication for the user connecting to the proxy server, select the Allow basic authentication (password in clear text) check box.
4. Under Tasks, click Save settings, and then click OK in the confirmation dialog box.

Step 4: Synchronize Your Server

After you configure the network connection, you must synchronize the server. When you synchronize, the server contacts Microsoft Update and determines whether any new updates have been made available since the last time you synchronized. Because this is the first time you are synchronizing, all available updates appear for you to approve for installation.

Note
This paper describes synchronization using some bandwidth optimizations. For more information about further optimizing bandwidth, see "Deploying Microsoft Windows Server Update Services" at the Microsoft Web site (http://go.microsoft.com/fwlink/?linkid=41171).

Consider changing the default language options for updates. By default, WSUS is configured to download updates in all languages. You can change this to download updates only for the languages available on your network.

To change language options
1. On the WSUS console toolbar, click Options, and then click Synchronization Options.
2. Under Update Files and Languages, click Advanced.
3. Select Download updates only in the selected languages, and then select only the languages of the computers available on your network.

To synchronize your server with Microsoft Update
1. On the WSUS console toolbar, click Options, and then click Synchronization Options.
2. Under Tasks, click Synchronize now.

Note
The time required for synchronization depends on a number of things, including Internet connection speed and the number of products and update classifications selected.

To automate future synchronizations
1. On the WSUS console toolbar, click Options, and then click Synchronization Options.
2. Under Schedule, click Synchronize daily at, and then select a time for daily synchronizations from the drop-down list.

By default, WSUS offers Critical and Security Updates for all Windows products. To add products like SQL Server, Exchange, or Office, you must first synchronize WSUS and then perform the following procedure.

To modify the default list of products to update
1. On the WSUS console toolbar, click Options, and then click Synchronization Options.
2. Under Products, click Change.
3. Select Windows Small Business Server. Additionally, to update all components that are integrated with Windows SBS, you must select those specific products. For example, select Exchange Server 2003, SQL Server™, and Office 2003 (to update the Outlook component).

Note
If you add products like SQL Server, Exchange, or Office, you must synchronize the server at least one time.

Step 5: Update and Configure the Client Computers

WSUS client computers must be running a version of Automatic Updates that is compatible with WSUS. WSUS Setup automatically configures IIS to distribute the latest version of Automatic Updates to each client computer that contacts the server. Also, the default Web site on Windows SBS 2003 must be modified to enable WSUS client computers to self-update.
The WSUS server setup installs two vroots, SelfUpdate and ClientWebService, and some files under the home directory of the default Web site (on port 80). This enables client computers to self-update through the default Web site. By default, the default Web site is configured to deny access to any IP address other than localhost or specific subnets attached to the server. This means that client computers that are not on localhost or on those specific subnets cannot self-update. To grant access to these client computers, complete the following steps on the default Web site's SelfUpdate and ClientWebService virtual directories.

To grant access to the client computers to self-update
1. In Server Management, expand Advanced Management, expand Internet Information Services, expand Web Sites, expand Default Web Site, right-click the Selfupdate virtual directory, and then select Properties.
2. Click Directory Security.
3. Under IP address and domain name restrictions, click Edit, and then click Granted Access.
4. Click OK, right-click the ClientWebService virtual directory, and then select Properties.
5. Click Directory Security.
6. Under IP address and domain name restrictions, click Edit, and then click Granted Access.

Note
Most versions of Automatic Updates automatically self-update to the WSUS-compatible version when you point them to the WSUS server. But the version of Automatic Updates that is included with Windows XP without any service packs cannot automatically self-update. If you have Windows XP without any service packs in your environment and you have never used Software Update Services (SUS), you should install Windows XP Service Pack 2, which includes the version of Automatic Updates that is compatible with WSUS. If you cannot do this, see "Deploying Microsoft Windows Server Update Services" at the Microsoft Web site (http://go.microsoft.com/fwlink/?linkid=41171) for other options.

Because WSUS client computers update themselves automatically, you only need to configure and point client computers to the WSUS server. To configure Automatic Updates, create a new Group Policy object (GPO) for WSUS settings and then link that GPO on the domain level. Next, add all of your WSUS settings by editing the GPO you just created. For more information about Group Policy, see the Microsoft Web site at http://go.microsoft.com/fwlink/?LinkID=47375.

Step 5 contains the following procedures:
Create and link a GPO on the domain level.
Configure Automatic Updates.
Point client computers to your WSUS server.
Disable auto-restarts for scheduled update installations (optional).
Manually initiate detection on the client computer (optional).

To create and link a GPO on the domain level, and then open the new GPO in Group Policy Object Editor
1. In Server Management, expand Advanced Management, expand Group Policy Management, expand Forest, expand Domains, and then click your SBS domain.
(Screen shot: Server Management)
2. Right-click your SBS domain, and then select Create and Link a GPO Here.
3. In the Name box, type WSUS, and then click OK.
4. Right-click the new WSUS GPO, and then click Edit.
(Screen shot: Group Policy Object Editor)

The following policy setting configures Automatic Updates to install updates on a schedule. You must enable this policy setting.

To configure the behavior of Automatic Updates
1. In Group Policy Object Editor, expand Computer Configuration, expand Administrative Templates, expand Windows Components, and then click Windows Update.
2. In the details pane, double-click Configure Automatic Updates.
3. Click Enabled, and then select Auto download and schedule the install. 4. Accept the default values for when installations should take place (every day at 3 AM), and then click OK. The following policy setting configures Automatic Updates to use your WSUS server. You must enable this policy setting. To point the client computer to your WSUS server 1. In Group Policy Object Editor, expand Computer Configuration, expand 24 Administrative Templates, expand Windows Components, and then click Windows Update. 2. In the details pane, double-click Specify intranet Microsoft update service location. 3. Click Enabled, and then type the HTTP URL of the same WSUS server in the Set the intranet update service for detecting updates box and in the Set the intranet statistics server box. For example, type http://sbs-servername:8530 in both boxes. 4. Click OK. The following policy setting prevents Automatic Updates from restarting the computer automatically if an update requires it. If you enable this policy setting, be aware that an update that requires a restart cannot take effect until you manually restart the computer. This policy setting is optional. To disable automatic restart for scheduled update installations 1. In Group Policy Object Editor, expand Computer Configuration, expand Administrative Templates, expand Windows Components, and then click Windows Update. 2. In the details pane, double-click No auto-restart for scheduled Automatic Updates installations. 3. Click Enabled, and then click OK. You have to wait for Group Policy to refresh for the settings to take effect. By default, Group Policy refreshes in the background every 90 minutes, with a random offset of 0 to 30 minutes. If you want to refresh Group Policy sooner, you can go to a command prompt on the client computer and type: gpupdate /force. Note On client computers running Windows 2000, you can type the following at a command prompt: secedit /refreshpolicy machine_policy /enforce After Group Policy refreshes, it can take up to 20 minutes before client computers appear on the Computers page in the WSUS console. If you initiate detection manually, you do not have to wait 20 minutes for the client computer to contact WSUS. 25 To manually initiate detection by the WSUS server 1. On the client computer click Start, and then click Run. 2. Type cmd, and then click OK. 3. At the command prompt, type wuauclt.exe /detectnow. This command-line option instructs Automatic Updates to contact the WSUS server immediately. Note Only the WSUS compatible client can use the /detectnow option. The WSUS compatible client comes with Windows 2000 Service Pack 4, Windows XP Service Pack 2, and Windows Server 2003 Service Pack 1. Otherwise, Automatic Updates self-updates to the WSUS compatible client. Step 6: Create Computer Groups Computer groups are an important part of a WSUS deployment, even a basic deployment. Computer groups enable you to target updates to specific computers. There are two default computer groups: All Computers and Unassigned Computers. By default, when each client computer initially contacts the WSUS server, the server adds it to both these groups. You can create custom computer groups. Two benefits of creating computer groups are that they enable you to manage server computers differently from workstation computers and they enable you to test updates. There is no limit to the number of custom groups you can create. Setting up computer groups is a three-step process. 
First, you specify how you are going to assign computers to the computer groups. There are two options: server-side targeting and client-side targeting. Server-side targeting involves manually adding each computer to its group by using WSUS. Client-side targeting involves automatically adding the client computers by using either Group Policy or the registry. Second, you create the computer group in WSUS. Third, you move the computers into groups by using whichever method you chose in the first step. This paper explains how to use server-side targeting and manually move computers to their groups by using the WSUS console. If you had numerous client computers to assign to computer groups you could use client-side targeting, which would automate moving computers into computer groups. You can use Step 6 to set up a server and a client-computer group. This step contains the following procedures: 26 Specify server-side targeting. Create a Server and a Client group. Move computers to the appropriate group. To specify the method for assigning computers to groups 1. On the WSUS console toolbar, click Options, and then click Computer Options. 2. In the Computer Options box, click Use the Move computers task in Windows Server Update Services. 3. Under Tasks, click Save settings, and then click OK when the confirmation dialog box appears. To create a Server and a Client group 1. On the WSUS console toolbar, click Computers. 2. Under Tasks, click Create a computer group. 3. In the Group name box, type Server, and then click OK. 4. Under Tasks, click Create a computer group. 5. In the Group name box, type Client, and then click OK. To manually add a computer to a group 1. On the WSUS console toolbar, click Computers. 2. In the Groups box, click the group of the computer you want to move. 3. In the list of computers, click the computer you want to move. 4. Under Tasks, click Move the selected computer. 5. In the Computer group list, select the group you want to move the computer to, and then click OK. Step 7: Approve Updates In this step you approve updates for any computers in the Server or the 27 Client computer groups. Computers in these groups will check in with the WSUS server over the next 24 hours. After this period, you can use the WSUS reporting feature to determine whether those updates have been installed. Step 7 contains the following procedures: Approve and deploy an update. Check the Status of Updates report. To approve and deploy an update 1. On the WSUS console toolbar, click Updates. By default, the list of updates is filtered to show only Critical and Security Updates that have been approved for detection by client computers. Use the default filter for this procedure. 2. On the list of updates, select the updates you want to approve for installation. Information about a selected update is available on the Details tab. To select multiple contiguous updates, press and hold down the SHIFT key while selecting; to select multiple non-contiguous updates, press and hold down the CTRL key while selecting. 3. Under Update Tasks, click Change approval. The Approve Updates dialog box appears. 4. In the Group approval settings for the selected updates list, click the default value in the Approval column for the group you intend to work with, and then click Install. 5. Click OK. Note There are many options associated with approving updates, such as setting deadlines and uninstalling updates. 
These are discussed in "Microsoft Windows Server Update Services Operations Guide," which is available at the Microsoft Web site (http://go.microsoft.com/fwlink/?linkid=41171).

After 24 hours, you can use the WSUS reporting feature to determine whether those updates have been installed.

To check the Status of Updates report
1. On the WSUS console toolbar, click Reports.
2. On the Reports page, click Status of Updates.
3. If you want to filter the list of updates, under View, select the criteria you want to use, and then click Apply.
4. If you want to see the status of an update by computer group and then by computer, expand the view of the update as necessary.
5. If you want to print the Status of Updates report, under Tasks, click Print report.

The Linux Networking Overview HOWTO
Daniel Lopez Ridruejo, [email protected]
v0.32, 8 July 2000

The purpose of this document is to give an overview of the networking capabilities of the Linux Operating System and to provide pointers for further information and implementation details.

1. Introduction
2. Linux.
2.1 What is Linux?
2.2 What makes Linux different?
3. Networking protocols
3.1 TCP/IP
3.2 TCP/IP version 6
3.3 IPX/SPX
3.4 AppleTalk Protocol Suite
3.5 WAN Networking: X.25, Frame-relay, etc...
3.6 ISDN
3.7 PPP, SLIP, PLIP
3.8 Amateur Radio
3.9 ATM
4. Networking hardware supported
5. File Sharing and Printing
5.1 Apple environment
5.2 Windows Environment
5.3 Novell Environment
5.4 Unix Environment
6. Internet/Intranet
6.1 Mail
6.2 Web Servers
6.3 Web Browsers
6.4 FTP Servers and clients
6.5 News service
6.6 Domain Name System
6.7 DHCP, bootp
6.8 NIS
6.9 Authentication
7. Remote execution of applications
7.1 Telnet
7.2 Remote commands
7.3 The X Window System
7.4 VNC
8. Network Interconnection
8.1 Router
8.2 Bridge
8.3 IP Masquerade
8.4 IP Accounting
8.5 IP aliasing
8.6 Traffic Shaping
8.7 Firewall
8.8 Port forwarding
8.9 Load Balancing
8.10 EQL
8.11 Proxy Server
8.12 Diald on demand
8.13 Tunnelling, mobile IP and virtual private networks
9. Network Management
9.1 Network management applications
9.2 SNMP
10. Enterprise Linux Networking
10.1 High Availability
10.2 RAID
10.3 Redundant networking
11. Sources of Information
12. Document history
13. Acknowledgements and disclaimer

1. Introduction

The purpose of this document is to give an overview of the networking capabilities of the Linux operating system. Although one of the strengths of Linux is that plenty of information exists for nearly every component of it, most of this information is focused on implementation. New Linux users, particularly those coming from a Windows environment, are often unaware of the networking possibilities of Linux. This document aims to show a general picture of such possibilities with a brief description of each one and pointers for further information. The information has been gathered from many sources: HOWTOs, FAQs, projects' web pages and my own hands-on experience. Full credit is given to the authors of these other sources. Without them and their programs this document would not have been possible or necessary.

2. Linux.

2.1 What is Linux?

The primary author of Linux is Linus Torvalds. Since his original versions, it has been improved by countless numbers of people. It is a clone, written entirely from scratch, of the Unix operating system. One of the more interesting facts about Linux is that its development occurs simultaneously around the world. Linux has been copyrighted under the terms of the GNU General Public License (GPL).
This is a license written by the Free Software Foundation (FSF) that is designed to prevent people from restricting the distribution of software. In brief, it says that although money can be charged for a copy, the person who received the copy can not be prevented from giving it away for free. It also means that the source code must be available. This is useful for programmers. Anybody can modify Linux and even distribute his/her modifications, provided that they keep the code under the same copyright. 2.2 What makes Linux different? Why work on Linux? Linux is generally cheaper (or at least no more expensive) than other operating systems and is frequently less problematic than many commercial systems. But what makes Linux different is not its price (after all, why would anyone want an OS - even a free one - if it is not good enough?) but its outstanding capabilities: Linux is a true 32-bit multitasking operating system, robust and capable enough to be used in organizations ranging from universities to large corporations. It runs on hardware ranging from low-end 386 boxes to massive ultra-parallel machines in research centres. Out-of-the-box versions are available for Intel, Sparc, and Alpha architectures, and experimental support exists for Power PC and embedded systems, among others such as SGI, Ultra Sparc, AP1000+, Strong ARM, and MIPS R3000/R4000. Finally, when it comes to networking, Linux is choice. Not only because networking is tightly integrated with the OS itself and a plethora of applications is freely available, but for the robustness under heavy loads that can only be achieved after years of debugging and testing in an Open Source project. 33 3. Networking protocols Linux supports many different networking protocols: 3.1 TCP/IP The Internet Protocol was originally developed two decades ago for the United States Department of Defense (DoD), mainly for the purpose of interconnecting different-brand computers. The TCP/IP suite of protocols allowed, through its layered structure, to insulate applications from networking hardware. Although it is based on a layered model, it is focused more on delivering interconnectivity than on rigidly adhering to functional layers. This is one of the reasons why TCP/IP has become the de facto standard internetworking protocol as opposed to OSI. TCP/IP networking has been present in Linux since its beginnings. It has been implemented from scratch. It is one of the most robust, fast and reliable implementations and is one of the key factors of the success of Linux. Related HOWTO: http://metalab.unc.edu/mdw/HOWTO/NET3-4-HOWTO.html 3.2 TCP/IP version 6 IPv6, sometimes also referred to as IPng (IP Next Generation) is an upgrade to the IPv4 protocol in order to address many issues. These issues include: shortage of available IP addresses, lack of mechanisms to handle time-sensitive traffic, lack of network layer security, etc. The larger name space will be accompanied by an improved addressing scheme, which will have a great impact on routing performance. A beta implementation exists for Linux, and a production version is expected for the 2.2.0 Linux kernel release. Linux IPv6 HOWTO: http://www.wcug.wwu.edu/ipv6/faq/ 3.3 IPX/SPX IPX/SPX (Internet Packet Exchange/Sequenced Packet Exchange) is a proprietary protocol stack developed by Novell and based on the Xerox Network Systems (XNS) protocol. IPX/SPX became prominent during the early 1980s as an integral part of Novell, Inc.'s NetWare. 
NetWare became the de facto standard network operating system (NOS) of first generation LANs. Novell complemented its NOS with a business-oriented application suite and client-side connection utilities.

Linux has a very clean IPX/SPX implementation, allowing it to be configured as:
an IPX router
an IPX bridge
an NCP client and/or NCP server (for sharing files)
a Novell print client or Novell print server

And to:
enable PPP/IPX, allowing a Linux box to act as a PPP server/client
perform IPX tunnelling through IP, allowing the connection of two IPX networks through an IP-only link

Additionally, Caldera offers commercial support for Novell NetWare under Linux. Caldera provides a fully featured Novell NetWare client built on technology licensed from Novell Corporation. The client provides full client access to Novell 3.x and 4.x fileservers and includes features such as NetWare Directory Service (NDS) and RSA encryption.

IPX HOWTO: http://metalab.unc.edu/mdw/HOWTO/IPX-HOWTO.html

3.4 AppleTalk Protocol Suite

Appletalk is the name of Apple's internetworking stack. It allows a peer-to-peer network model which provides basic functionality such as file and printer sharing. Each machine can simultaneously act as a client and a server, and the software and hardware necessary are included with every Apple computer. Linux provides full Appletalk networking. Netatalk is a kernel-level implementation of the AppleTalk Protocol Suite, originally for BSD-derived systems. It includes support for routing AppleTalk, serving Unix and AFS filesystems over AFP (AppleShare), serving Unix printers and accessing AppleTalk printers over PAP. See section 5.1 for more information.

3.5 WAN Networking: X.25, Frame-relay, etc...

Several third parties provide T-1, T-3, X.25 and Frame Relay products for Linux. Generally special hardware is required for these types of connections. Vendors that provide the hardware also provide the drivers with protocol support.

WAN resources for Linux: http://www.secretagent.com/networking/wan.html

3.6 ISDN

The Linux kernel has built-in ISDN capabilities. Isdn4linux controls ISDN PC cards and can emulate a modem with the Hayes command set ("AT" commands). The possibilities range from simply using a terminal program to connections via HDLC (using included devices) to full connection to the Internet with PPP to audio applications.

FAQ for isdn4linux: http://www.isdn4linux.de/faq/

3.7 PPP, SLIP, PLIP

The Linux kernel has built-in support for PPP (Point-to-Point Protocol), SLIP (Serial Line IP) and PLIP (Parallel Line IP). PPP is the most popular way individual users access their ISPs (Internet Service Providers). PLIP allows the cheap connection of two machines. It uses a parallel port and a special cable, achieving speeds of 10 kBps to 20 kBps.

Linux PPP HOWTO
PPP/SLIP emulator
PLIP information can be found in The Network Administrator Guide

3.8 Amateur Radio

The Linux kernel has built-in support for amateur radio protocols. Especially interesting is the AX.25 support. The AX.25 protocol offers both connected and connectionless modes of operation, and is used either by itself for point-to-point links, or to carry other protocols such as TCP/IP and NetRom. It is similar to X.25 level 2 in structure, with some extensions to make it more useful in the amateur radio environment.

Amateur radio on Linux web site

3.9 ATM

ATM support for Linux is currently in pre-alpha stage. There is an experimental release, which supports raw ATM connections (PVCs and SVCs), IP over ATM, LAN emulation...
Linux ATM-Linux home page

4. Networking hardware supported

Linux supports a great variety of networking hardware, including some obsolete equipment. Some interesting documents:
Hardware HOWTO
Ethernet HOWTO

5. File Sharing and Printing

The primary purpose of many PC based Local Area Networks is to provide file and printer sharing services to the users. Linux as a corporate file and print server turns out to be a great solution.

5.1 Apple environment

As outlined in previous sections, Linux supports the Appletalk family of protocols. Linux netatalk allows Macintosh clients to see Linux systems as another Macintosh on the network, share files and use printers connected to Linux servers.

Netatalk faq and HOWTO:
http://thehamptons.com/anders/netatalk/
http://www.umich.edu/~rsug/netatalk/
http://www.umich.edu/~rsug/netatalk/faq.html

5.2 Windows Environment

Samba is a suite of applications that allow most Unices (and in particular Linux) to integrate into a Microsoft network both as a client and a server. Acting as a server, it allows Windows 95, Windows for Workgroups, DOS and Windows NT clients to access Linux files and printing services. It can completely replace Windows NT for file and printing services, including the automatic downloading of printer drivers to clients. Acting as a client allows the Linux workstation to mount locally exported Windows file shares. According to the SAMBA Meta-FAQ: "Many users report that compared to other SMB implementations Samba is more stable, faster, and compatible with more clients. Administrators of some large installations say that Samba is the only SMB server available which will scale to many tens of thousands of users without crashing"

Samba project home page
SMB HOWTO
Printing HOWTO

5.3 Novell Environment

As stated in previous sections, Linux can be configured to act as an NCP client or server, thus allowing file and printing services over a Novell network for both Novell and Unix clients.

IPX HOWTO

5.4 Unix Environment

The preferred way to share files in a Unix networking environment is through NFS. NFS stands for Network File System and it is a protocol originally developed by Sun Microsystems. It is a way to share files between machines as if they were local. A client "mounts" a filesystem "exported" by an NFS server. The mounted filesystem will appear to the client machine as if it were part of the local filesystem. It is possible to mount the root filesystem at startup time, thus allowing diskless clients to boot up and access all files from a server. In other words, it is possible to have a fully functional computer without a hard disk.

Coda is a network filesystem (like NFS) that supports disconnected operation and persistent caching, among other goodies. It's included in 2.2.x kernels. Really handy for slow or unreliable networks and laptops.

NFS-related documents:
http://metalab.unc.edu/mdw/HOWTO/mini/NFS-Root.html
http://metalab.unc.edu/mdw/HOWTO/Diskless-HOWTO.html
http://metalab.unc.edu/mdw/HOWTO/mini/NFS-Root-Client-miniHOWTO/index.html
http://www.redhat.com/support/docs/rhl/NFS-Tips/NFS-Tips.html
http://metalab.unc.edu/mdw/HOWTO/NFS-HOWTO.html

CODA can be found at:
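Before moving on to Internet services, here is a minimal sketch of the NFS sharing described in section 5.4 above. It is not part of the original HOWTO; it assumes a server with the kernel NFS utilities installed and running, and the directory, network address and host name are placeholders.

    # On the NFS server: export /home read-write to the local network
    # (this appends a line to /etc/exports; edit the file by hand if you prefer)
    echo '/home 192.168.1.0/255.255.255.0(rw)' >> /etc/exports
    exportfs -ra        # tell the NFS daemons to re-read /etc/exports

    # On a client: mount the exported directory as if it were a local filesystem
    mkdir -p /mnt/home
    mount -t nfs server.example.com:/home /mnt/home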
6. Internet/Intranet

Linux is a great platform to act as an Intranet / Internet server. The term Intranet refers to the application of Internet technologies inside an organisation mainly for the purpose of distributing and making available information inside the company. Internet and Intranet services offered by Linux include mail, news, WWW servers and many more that will be outlined in the next sections.

6.1 Mail

Mail servers

Sendmail is the de facto standard mail server program (called an MTA, or Mail Transport Agent) for Unix platforms. It is robust and scalable; properly configured and with the necessary hardware, it can handle loads of thousands of users without blinking. Alternative mail servers, such as smail and qmail, are also available.

Sendmail web site
Smail faq
Qmail web site

Mail HOWTOs:
http://metalab.unc.edu/mdw/HOWTO/Mail-User-HOWTO.html
http://metalab.unc.edu/mdw/HOWTO/mini/Qmail+MH.html
http://metalab.unc.edu/mdw/HOWTO/mini/Sendmail+UUCP.html
http://metalab.unc.edu/mdw/HOWTO/mini/Mail-Queue.html

Remote access to mail

In an organisation or ISP, users will likely access their mail remotely from their desktops. Several alternatives exist in Linux, including POP (Post Office Protocol) and IMAP (Internet Message Access Protocol) servers. The POP protocol is usually used to transfer messages from the server to the client. IMAP also permits manipulation of the messages on the server, remote creation and deletion of folders on the server, concurrent access to shared mail folders, etc.

Brief comparison IMAP and POP

Mail related HOWTOs:
http://metalab.unc.edu/mdw/HOWTO/Mail-User-HOWTO.html
http://metalab.unc.edu/mdw/HOWTO/Cyrus-IMAP.html

Mail User Agents

There are a number of MUAs (Mail User Agents) in Linux, both graphical and text mode. The most widely used ones include pine, elm, mutt and Netscape.

List of mail related software
http://metalab.unc.edu/mdw/HOWTO/mini/TkRat.html

Mailing list software

There are many MLM (Mail List Management) programs available for Unix in general and for Linux in particular. A good comparison of existing MLMs may be found at: ftp://ftp.uu.net/usenet/news.answers/mail/list-admin/

Listserv
Majordomo home page

Fetchmail

One useful mail-related utility is fetchmail. Fetchmail is a free, full-featured, robust, well-documented remote-mail retrieval and forwarding utility intended to be used over on-demand TCP/IP links (such as SLIP or PPP connections). It supports every remote-mail protocol now in use on the Internet. It can even support IPv6 and IPSEC. Fetchmail retrieves mail from remote mail servers and forwards it via SMTP, so it can then be read by normal mail user agents such as mutt, elm or BSD Mail. It allows all the system MTA's filtering, forwarding, and aliasing facilities to work just as they would on normal mail. Fetchmail can be used as a POP/IMAP-to-SMTP gateway for an entire DNS domain, collecting mail from a single drop box on an ISP and SMTP-forwarding it based on header addresses. A small company may centralise its mail in a single mailbox, configure fetchmail to collect all outgoing mail, send it via a single mailbox at their ISP and retrieve all incoming mail from the same mailbox.

Fetchmail home page

6.2 Web Servers

Most Linux distributions include Apache. Apache is the number one server on the Internet according to http://www.netcraft.co.uk/survey/. More than half of all Internet sites are running Apache or one of its derivatives. Apache's advantages include its modular design, stability and speed. Given the appropriate hardware and configuration it can support the highest loads: Yahoo, Altavista, GeoCities, and Hotmail are based on customized versions of this server.
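As an aside that is not part of the original HOWTO, checking that an Apache installation is present and answering takes only a few commands. This is a hedged sketch; depending on the distribution the server binary may be called httpd or apache, and apachectl must be on the path.

    # Show the installed Apache version and check the configuration syntax
    httpd -v
    apachectl configtest

    # Start the server and fetch the default page to confirm it is answering
    apachectl start
    lynx -dump http://localhost/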
41 Optional support for SSL (which enables secure transactions) is also available at: http://www.apache-ssl.org/ http://raven.covalent.net/ http://www.c2.net/ Related HOWTOs: http://metalab.unc.edu/mdw/HOWTO/WWW-HOWTO.html http://metalab.unc.edu/mdw/HOWTO/Virtual-Services-HOWTO.html http://metalab.unc.edu/mdw/HOWTO/Intranet-Server-HOWTO.html Web servers for Linux 6.3 Web Browsers A number of web browsers exist for the Linux platform. Netscape Navigator has been one of the choices from the very beginning and the upcoming Mozilla (http://www.mozilla.org) will have a Linux version. Another popular text based web browser is lynx. It is fast and handy when no graphical environment is available. Browser software for Linux http://metalab.unc.edu/mdw/HOWTO/mini/Public-Web-Browser.html 6.4 FTP Servers and clients FTP stands for File Transfer Protocol. An FTP server allows clients to connect to it and retrieve (download) files. Many ftp servers and clients exist for Linux and are included with most distributions. There are text-based clients as well as GUI based ones. FTP related software (servers and clients) for Linux may be found at: http://metalab.unc.edu/pub/Linux/system/network/file-transfer/ 6.5 News service Usenet (also known as news) is a big bulletin board system that covers all kinds of topics and it is organised hierarchically. A network of computers across the 42 internet (Usenet) exchange articles through the NNTP protocol. Several implementations exist for Linux, either for heavily loaded sites or for small sites receiving only a few newsgroups. INN home page Linux news related software 6.6 Domain Name System A DNS server has the job of translating names (readable by humans) to IP addresses. A DNS server does not know all the IP addresses in the world; rather, it is able to request other servers for the unknown addresses. The DNS server will either return the wanted IP address to the user or report that the name cannot be found in the tables. Name serving on Unix (and on the vast majority of the Internet) is done by a program called named. This is a part of the bind package of The Internet Software Consortium. BIND DNS HOWTO 6.7 DHCP, bootp DHCP and bootp are protocols that allow a client machine to obtain network information (such as their IP number) from a server. Many organisations are starting to use it because it eases network administration, especially in large networks or networks which have lots of mobile users. Related documents: DHCP mini-HOWTO 6.8 NIS The Network Information Service (NIS) provides a simple network lookup service consisting of databases and processes. Its purpose is to provide information that has to be known throughout the network to all machines on the network. For example, it enables an administrator to allow users access to any 43 machine in a network running NIS without a password entry existing on each machine; only the main database needs to be maintained. Related HOWTO: NIS HOWTO 6.9 Authentication There are also various ways of authenticating users in mixed networks. For Linux/Windows NT: http://www.mindware.com.au/ftp/smb-NT-verify.1.1.tar.gz The PAM (pluggable authentication module) which is a flexible method of Unix authentication: PAM library. Finally, LDAP in Linux 7. Remote execution of applications One of the most amazing features of Unix (yet one of the most unknown to new users) is its great support for remote and distributed execution of applications. 
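As a small, hedged illustration of what the following subsections describe (the host names and user name below are placeholders), a single command can be executed on a remote machine, and a graphical client running remotely can draw on the local display:

    # Run one command on a remote machine and print its output locally
    rsh remotehost uptime            # classic remote shell
    ssh user@remotehost uptime       # the same idea over an encrypted SSH session

    # Display a remote X client on the local screen: first run
    # "xhost +remotehost" on the local machine, then, on the remote machine:
    export DISPLAY=localmachine:0.0
    xterm &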
7.1 Telnet Telnet is a program that allows a person to use a remote computer as if that person were actually at the remote site. Telnet is one of the most powerful tools for Unix, allowing for true remote administration. It is also an interesting program from the point of view of users, because it allows remote access to all their files and programs from anywhere in the Internet. Combined with an X server, there is no difference (apart from the delay) between being at the console or on the other side of the planet. Telnet daemons and clients are available with most Linux distributions. Encrypted remote shell sessions are available through SSH ( http://www.ssh.fi/sshprotocols2/index.html) thus effectively allowing secure remote administration. Telnet related software 44 7.2 Remote commands In Unix, and in particular in Linux, remote commands exist that allow for interaction with other computers from the shell prompt. Examples are: rlogin, which allows for login in a remote machine in a similar way to telnet, rcp, which allows for the remote transfer of files among machines, etc. Finally, the remote shell command rsh allows the execution of a command on a remote machine without actually logging onto that machine. 7.3 The X Window System The X Window System was developed at MIT in the late 1980s, rapidly becoming the industry standard windowing system for Unix graphics workstations. The software is freely available, very versatile, and is suitable for a wide range of hardware platforms. Any X environment consists of two distinct parts, the X server and one or more X clients. It is important to realise the distinction between the server and the client. The server controls the display directly and is responsible for all input/output via the keyboard, mouse or display. The clients, on the other hand, do not access the screen directly - they communicate with the server, which handles all input and output. It is the clients which do the "real" computing work - running applications or whatever. The clients communicate with the server, causing the server to open one or more windows to handle input and output for that client. In short, the X Window System allows a user to log in into a remote machine, execute a process (for example, open a web browser) and have the output displayed on his own machine. Because the process is actually being executed on the remote system, very little CPU power is needed in the local one. Indeed, computers exist whose primary purpose is to act as pure X servers. Such systems are called X terminals. A free port of the X Window System exists for Linux and can be found at: Xfree. It is included in most Linux distributions. Related HOWTO: Remote X Apps HOWTO 7.4 VNC VNC stands for Virtual Network Computing. It is, in essence, a remote display system which allows one to view a computing 'desktop' environment not only on 45 the machine where it is running, but from anywhere on the Internet and from a wide variety of machine architectures. Both clients and servers exist for Linux as well as for many other platforms. It is possible to execute MS-Word in a Windows NT or 95 machine and have the output displayed in a Linux machine. The opposite is also true; it is possible to execute an application in a Linux machine and have the output displayed in any other Linux or Windows machine. One of the available clients is a Java applet, allowing the remote display to be run inside a web browser. 
Another client is a port for Linux using the SVGAlib graphics library, allowing 386s with as little as 4 MB of RAM to become fully functional X-Terminals. VNC web site 8. Network Interconnection Linux networking is rich in features. A Linux box can be configured so it can act as a router, bridge, etc... Some of the available options are described below. 8.1 Router The Linux kernel has built-in support for routing functions. A Linux box can act either as an IP or IPX router for a fraction of the cost of a commercial router. Recent kernels include special options for machines acting primarily as routers: Multicasting: Allows the Linux machine to act as a router for IP packets that have several destination addresses. It is needed on the MBONE, a high bandwidth network on top of the Internet which carries audio and video broadcasts. IP policy routing: Normally a router decides what to do with a received packet based solely on the packet's final destination address, but routing can also take into account the originating address and the network device from which the packet reached it. There are some related projects which include one aiming at building a complete, running Linux router on a floppy disk: Linux router project 8.2 Bridge The Linux kernel has built-in support for acting as an Ethernet bridge, which means that the different Ethernet segments it is connected to will appear as one 46 Ethernet to the participants. Several bridges can work together to create even larger networks of Ethernets using the IEEE802.1 spanning tree algorithm. As this is a standard, Linux bridges will interoperate properly with other third party bridge products. Additional packages allow filtering based on IP, IPX or MAC addresses. Related HOWTOs: Bridge+Firewall Bridge 8.3 IP Masquerade IP Masquerade is a developing networking function in Linux. If a Linux host is connected to the Internet with IP Masquerade enabled, then computers connecting to it (either on the same LAN or connected with modems) can reach the Internet as well, even though they have no officially assigned IP addresses. This allows for reduction of costs, since many people may be able to access the Internet using a single modem connection as well as contributes to increased security (in some way the machine is acting as a firewall, since unofficially assigned addresses cannot be accessed outside of that network). IP masquerade related pages and documents: http://ipmasq.home.ml.org/ http://www.indyramp.com/masq/links.pfhtml http://metalab.unc.edu/mdw/HOWTO/IP-Masquerade-HOWTO.html 8.4 IP Accounting This option of the Linux kernel keeps track of IP network traffic, performs packet logging and produces some statistics. A series of rules may be defined so when a packet matches a given pattern, some action is performed: a counter is increased, it is accepted/rejected, etc. 8.5 IP aliasing This feature of the Linux kernel provides the possibility of setting multiple network addresses on the same low-level network device driver (e.g two IP 47 addresses in one Ethernet card). It is typically used for services that act differently based on the address they listen on (e.g. "multihosting" or "virtual domains" or "virtual hosting services". Related HOWTO: IP Aliasing HOWTO 8.6 Traffic Shaping The traffic shaper is a virtual network device that makes it possible to limit the rate of outgoing data flow over another network device. 
This is especially useful in scenarios such as ISPs, where it is desirable to control and enforce policies regarding how much bandwidth is used by each client. Another alternative (for web services only) may be certain Apache modules which restrict the number of IP connections by client or the bandwidth used. http://metalab.unc.edu/mdw/HOWTO/NET3-4-HOWTO-6.html#ss6.15 8.7 Firewall A firewall is a device that protects a private network from the public part (the internet as a whole). It is designed to control the flow of packets based on the source, destination, port and packet type information contained in each packet. Different firewall toolkits exist for Linux as well as built-in support in the kernel. Other firewalls are TIS and SOCKS. These firewall toolkits are very complete and combined with other tools allow blocking/redirection of all kinds of traffic and protocols. Different policies can be implemented via configuration files or GUI programs. TIS home page SOCKS Firewall HOWTO 8.8 Port forwarding An increasing number of web sites are becoming interactive by having cgi-bins or Java applets that access some database or other service. Since this access may 48 pose a security problem, the machine containing the database should not be directly connected to the Internet. Port Forwarding can provide an almost ideal solution to this access problem. On the firewall, IP packets that come in to a specific port number can be re-written and forwarded to the internal server providing the actual service. The reply packets from the internal server are re-written to make it appear that they came from the firewall. Port forwarding information may be found here 8.9 Load Balancing Demand for load balancing usually arises in database/web access when many clients make simultaneous requests to a server. It would be desirable to have multiple identical servers and redirect requests to the less loaded server. This can be achieved through Network Address Translation techniques (NAT) of which IP masquerading is a subset. Network administrators can replace a single server providing Web services - or any other application - with a logical pool of servers sharing a common IP address. Incoming connections are directed to a particular server using one load-balancing algorithm. The virtual server rewrites incoming and outgoing packets to give clients the appearance that only one server exists. Linux IP-NAT information may be found here 8.10 EQL EQL is integrated into the Linux kernel. If two serial connections exist to some other computer (this usually requires two modems and two telephone lines) and SLIP or PPP (protocols for sending Internet traffic over telephone lines) are used on them, it is possible to make them behave like one double speed connection using this driver. Naturally, this has to be supported at the other end as well. http://metalab.unc.edu/mdw/HOWTO/NET3-4-HOWTO-6.html#ss6.2 8.11 Proxy Server The term proxy means "to do something on behalf of someone else." In networking terms, a proxy server computer can act on the behalf of several clients. An HTTP proxy is a machine that receives requests for web pages from another machine (Machine A). The proxy gets the page requested and returns the 49 result to Machine A. The proxy may have a cache with the requested pages, so if another machine asks for the same page the copy in the cache will be returned instead. This allows efficient use of bandwidth resources and less response time. 
As a side effect, as client machines are not directly connected to the outside world this is a way of securing the internal network. A well-configured proxy can be as effective as a good firewall. Several proxy servers exist for Linux. One popular solution is the Apache proxy module. A more complete and robust implementation of an HTTP proxy is SQUID. Apache Squid 8.12 Diald on demand The purpose of dial on demand is to make it transparently appear that the users have a permanent connection to a remote site. Usually, there is a daemon who monitors the traffic of packets and where an interesting packet (interesting is defined usually by a set of rules/priorities/permissions) arrives it establishes a connection with the remote end. When the channel is idle for a certain period of time, it drops the connection. Diald HOWTO 8.13 Tunnelling, mobile IP and virtual private networks The Linux kernel allows the tunnelling (encapsulation) of protocols. It can do IPX tunnelling through IP, allowing the connection of two IPX networks through an IP only link. It can also do IP-IP tunnelling, which it is essential for mobile IP support, multicast support and amateur radio. (see http://metalab.unc.edu/mdw/HOWTO/NET3-4-HOWTO-6.html#ss6.8) Mobile IP specifies enhancements that allow transparent routing of IP datagrams to mobile nodes in the Internet. Each mobile node is always identified by its home address, regardless of its current point of attachment to the Internet. While situated away from its home, a mobile node is also associated with a care-of address, which provides information about its current point of attachment to the Internet. The protocol provides for registering the care-of address with a home 50 agent. The home agent sends datagrams destined for the mobile node through a tunnel to the care-of address. After arriving at the end of the tunnel, each datagram is then delivered to the mobile node. Point-to-Point Tunneling Protocol (PPTP) is a networking technology that allows the use of the Internet as a secure virtual private network (VPN). PPTP is integrated with the Remote Access Services (RAS) server which is built into Windows NT Server. With PPTP, users can dial into a local ISP, or connect directly to the Internet, and access their network as if they were at their desks. PPTP is a closed protocol and its security has recently being compromised. It is highly recomendable to use other Linux based alternatives, since they rely on open standards which have been carefully examined and tested. A client implementation of the PPTP for Linux is available here More on Linux PPTP can be found here Mobile IP: http://www.hpl.hp.com/personal/Jean_Tourrilhes/MobileIP/mip.html http://metalab.unc.edu/mdw/HOWTO/NET3-4-HOWTO-6.html#ss6.12 Virtual Private Networks related documents: http://metalab.unc.edu/mdw/HOWTO/mini/VPN.html http://sites.inka.de/sites/bigred/devel/cipe.html 9. Network Management 9.1 Network management applications There is an impressive number of tools focused on network management and remote administration. Some interesting remote administration projects are linuxconf and webmin: Webmin Linuxconf Other tools include network traffic analysis tools, network security tools, monitoring tools, configuration tools, etc. An archive of many of these tools may be found at Metalab 51 9.2 SNMP The Simple Network Management Protocol is a protocol for Internet network management services. It allows for remote monitoring and configuration of routers, bridges, network cards, switches, etc... 
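As a small illustration of the kind of remote monitoring SNMP makes possible, the following sketch queries a device from a Linux host, assuming the UCD-SNMP/net-snmp command-line tools are installed and the device answers to the community string "public" (address and community are examples only):

  # Ask a router for its uptime (SNMPv1, community "public")
  snmpget -v1 -c public 192.168.1.1 sysUpTime.0
  # Walk the whole "system" group for basic identification data
  snmpwalk -v1 -c public 192.168.1.1 system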
There is a large amount of libraries, clients, daemons and SNMP based monitoring programs available for Linux. A good page dealing with SNMP and Linux software may be found at : http://linas.org/linux/NMS.html 10. Enterprise Linux Networking In certain situations it is necessary for the networking infrastructure to have proper mechanisms to guarantee network availability nearly 100% of the time. Some related techniques are described in the following sections. Most of the following material can be found at the excellent Linas website: http://linas.org/linux/index.html and in the Linux High-Availability HOWTO 10.1 High Availability Redundancy is used to prevent the overall IT system from having single points of failure. A server with only one network card or a single SCSI disk has two single points of failure. The objective is to mask unplanned outages from users in a manner that lets users continue to work quickly. High availability software is a set of scripts and tools that automatically monitor and detect failures, taking the appropriate steps to restore normal operation and to notifying system administrators. 10.2 RAID RAID, short for Redundant Array of Inexpensive Disks, is a method whereby information is spread across several disks, using techniques such as disk striping (RAID Level 0) and disk mirroring (RAID level 1) to achieve redundancy, lower latency and/or higher bandwidth for reading and/or writing, and recoverability from hard-disk crashes. Over six different types of RAID configurations have been defined. There are three types of RAID solution options available to Linux users: software RAID, outboard DASD boxes, and RAID disk controllers. 52 Software RAID: Pure software RAID implements the various RAID levels in the kernel disk (block device) code. Outboard DASD Solutions: DASD (Direct Access Storage Device) are separate boxes that come with their own power supply, provide a cabinet/chassis for holding the hard drives, and appear to Linux as just another SCSI device. In many ways, these offer the most robust RAID solution. RAID Disk Controllers: Disk Controllers are adapter cards that plug into the ISA/EISA/PCI bus. Just like regular disk controller cards, a cable attaches them to the disk drives. Unlike regular disk controllers, the RAID controllers will implement RAID on the card itself, performing all necessary operations to provide various RAID levels. Related HOWTOs: http://metalab.unc.edu/mdw/HOWTO/mini/DPT-Hardware-RAID.html http://metalab.unc.edu/mdw/HOWTO/Root-RAID-HOWTO.html http://metalab.unc.edu/mdw/HOWTO/Software-RAID-HOWTO.html RAID at linas.org: http://linas.org/linux/raid.html 10.3 Redundant networking IP Address Takeover (IPAT). When a network adapter card fails, its IP address should be taken by a working network card in the same node or in another node. MAC Address Takeover: when an IP takeover occurs, it should be made sure that all the nodes in the network update their ARP caches (the mapping between IP and MAC addresses). See the High-Availability HOWTO for more details: http://metalab.unc.edu/pub/Linux/ALPHA/linux-ha/High-Availability-HOWTO.html 11. Sources of Information If you have networking problems with Linux, please do not e-mail the questions to me. I just simply do not have the time to answer them. You have better chances to obtain help if you post a question in the comp.os.linux.networking newsgroup (which you can access through http://www.dejanews.com). Before posting there, make sure that you have read the relevant documentation. 
Then search the news archive, because chances are that somebody has already asked the same question (and somebody answered it). When posting, remember to explain all the steps you have followed and the error messages you got.

Where to get further information:
Linux: http://www.linux.org
Linux Documentation Project: http://metalab.unc.edu/mdw/linux.html (check out the Linux Network Administrator Guide)
Freshmeat: the latest releases of Linux software. http://www.freshmeat.net
Linux links: http://www.linuxlinks.com/Networking/

12. Document history
0.32 Updated many links that have changed. Special thanks go to Kontiki for his careful review and detailed description of what needed to change. Many thanks also to Anne and Mathias, who pointed out other links that were no longer valid.
0.31 (17 Sept 1999) Changed the address for the Linux Router Project (thanks to John Ellis) and added another PPTP link (thanks to Benjamin Smith).
0.30 (6 April 1999) Included section on CODA (thanks to Brian Ristuccia).
0.2-0.29 Bugfixes :-) (see acknowledgements at the end of this document).
0.1 (5 June 1998)

13. Acknowledgements and disclaimer
This document is based on the work of many other people who have made it possible for Linux to be what it is now: one of the best network operating systems. All credit is theirs. A lot of effort has been put into this document to make it simple but accurate, and complete but not excessively long. Nevertheless, no liability will be assumed by the author under any circumstance. Use the information contained here at your own risk. Please feel free to e-mail me suggestions, corrections or general comments about the document so I can improve it. Other topics that will probably be included in future revisions of this document include RADIUS, web/FTP mirroring tools such as wget, traffic analyzers, CORBA... and many others that may be suggested and suitable. You can reach me at [email protected]. Finally, I would like to thank Finnbjorn av Teigum, Cesar Kant, Mathieu Arnold and especially Hisakuni Nogami and Phil Garcia for their careful reviews and comments on this HOWTO. Their help is greatly appreciated. You can find a version of this document at http://www.rawbyte.com/lno/.
Daniel Lopez Ridruejo, 8 July 2000

SquidFaq/InterceptionProxy
Contents are © their respective authors, licensed under the Creative Commons Attribution Sharealike 2.5 License.
Or, How can I make my users' browsers use my cache without configuring the browsers for proxying?

Contents
1. Concepts of Interception Caching
2. Requirements and methods for Interception Caching
3. Steps involved in configuring Interception Caching
   1. Compile a version of Squid which accepts connections for other addresses
      1. Choosing the right options to pass to ./configure
      2. Configure Squid to accept and process the redirected port 80 connections
   2. Getting your traffic to the right port on your Squid Cache
      1. Interception Caching packet redirection for Solaris, SunOS, and BSD systems
         1. Install IP Filter
         2. Configure ipnat
      2. Interception Caching packet redirection for OpenBSD PF
   3. Get the packets from the end clients to your cache server
      1. Interception Caching packet redirection with Cisco routers using policy routing (NON WCCP)
         1. Shortcomings of the cisco ip policy route-map method
      2. Interception Caching packet redirection with Foundry L4 switches
      3. Interception Caching packet redirection with an Alcatel OmnySwitch 7700
      4. Interception Caching packet redirection with Cabletron/Entrasys products
      5. Interception Caching packet redirection with ACC Tigris digital access server
   4. WCCP - Web Cache Coordination Protocol
      1. Does Squid support WCCP?
      2. Do I need a cisco router to run WCCP?
      3. Can I run WCCP with the Windows port of Squid?
      4. Where can I find out more about WCCP?
      5. Cisco router software required for WCCP
         1. IOS support in Cisco Routers
         2. IOS support in Cisco Switches
         3. Software support in Cisco Firewalls (PIX OS)
         4. What about WCCPv2?
         5. Configuring your router
      6. Cache/Host configuration of WCCP
         1. Configuring Squid to talk WCCP
         2. Configuring FreeBSD
         3. FreeBSD 4.8 and later
         4. FreeBSD 6.x and later
         5. Standard Linux GRE Tunnel
   5. TProxy Interception
      1. TProxy v2.2
      2. TProxy v4.1+
   6. Other Configuration Examples
   7. Complete
   8. Troubleshooting and Questions
      1. It doesn't work. How do I debug it?
      2. Why can't I use authentication together with interception proxying?
      3. Can I use ''proxy_auth'' with interception?
      4. "Connection reset by peer" and Cisco policy routing
   9. Further information about configuring Interception Caching with Squid
4. Issues with HotMail

Concepts of Interception Caching

Interception Caching goes under many names - Interception Caching, Transparent Proxying and Cache Redirection. Interception Caching is the process by which HTTP connections coming from remote clients are redirected to a cache server, without their knowledge or explicit configuration.

There are some good reasons why you may want to use this technique:
* There is no client configuration required. This is the most popular reason for investigating this option.
* You can implement better and more reliable strategies to maintain client access in case your cache infrastructure goes out of service.

However, there are also significant disadvantages to this strategy, as outlined by Mark Elsen:
* Intercepting HTTP breaks TCP/IP standards, because user agents think they are talking directly to the origin server.
* It requires IPv4 with NAT.
* It causes path MTU discovery (PMTUD) to fail, possibly making some remote sites inaccessible. This is not usually a problem if your client machines are connected via Ethernet or DSL PPPoATM, where the MTU of all links between the cache and client is 1500 or more. If your clients connect via DSL PPPoE, this is likely to be a problem, as PPPoE links often have a reduced MTU (1472 is very common).
* On IE versions before version 6, the ctrl-reload function did not work as expected.
* Proxy authentication does not work, and IP-based authentication conceptually fails because all users are seen to come from the Interception Cache's own IP address.
* You can't use IDENT lookups (which are inherently very insecure anyway).
* Interception Caching only supports the HTTP protocol, not gopher, SSL, or FTP. You cannot set up a redirection rule to the proxy server for protocols other than HTTP, since it will not know how to deal with them.
* Intercepting caches are incompatible with IP filtering designed to prevent address spoofing.
* Clients are still expected to have full Internet DNS resolving capabilities; in certain intranet/firewalling setups, this is not always wanted.
Related to above: suppose the users browser connects to a site which is down. However, due to the transparent proxying, it gets a connected state to the interceptor. The end user may get wrong error messages or a hung browser, for seemingly unknown reasons to them. If you feel that the advantages outweigh the disadvantages in your network, you may choose to continue reading and look at implementing Interception Caching. Requirements and methods for Interception Caching You need to have a good understanding of what you are doing before you start. This involves understanding at a TCP layer what is happening to the 59 connections. This will help you both configure the system and additionally assist you if your end clients experience problems after you have deployed your solution. A current Squid (2.5+). You should run the latest version of Squid that is available at the time. A current OS may make things easier. Quite likely you will need a network device which can redirect the traffic to your cache. If your Squid box is also functioning as a router and all traffic from and to your network is in the path, you can skip this step. If your cache is a standalone box on a LAN that does not normally see your clients web browsing traffic, you will need to choose a method of redirecting the HTTP traffic from your client machines to the cache. This is typically done with a network appliance such as a router or Layer 3 switch which either rewrite the destination MAC address or alternatively encapsulate the network traffic via a GRE or WCCP tunnel to your cache. NB: If you are using Cisco routers and switches in your network you may wish to investigate the use of WCCP. WCCP is an extremely flexible way of redirecting traffic and is intelligent enough to automatically stop redirecting client traffic if your cache goes offline. This may involve you upgrading your router or switch to a release of IOS or an upgraded featureset which supports WCCP. There is a section written specifically on WCCP below. Steps involved in configuring Interception Caching Building a Squid with the correct options to ./configure to support the redirection and handle the clients correctly. Routing the traffic from port 80 to the port your Squid is configured to accept the connections on Decapsulating the traffic that your network device sends to Squid (only if you are using GRE or WCCP to intercept the traffic) Configuring your network device to redirect the port 80 traffic. The first two steps are required and the last two may or may not be required depending on how you intend to route the HTTP traffic to your cache. !It is critical to read the full comments in the squid.conf file and in this document in it's entirety before you begin. Getting Interception Caching to work with Squid 60 is non-trivial and requires many subsystems of both Squid and your network to be configured exactly right or else you will find that it will not work and your users will not be able to browse at all. You MUST test your configuration out in a non-live environment before you unleash this feature on your end users. Compile a version of Squid which accepts connections for other addresses Firstly you need to build Squid with the correct options to ./configure, and then you need to configure squid.conf to support Intercept Caching. 
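For orientation, here is a minimal sketch of what that build step looks like on a Linux host. The right configure flag for each operating system is covered in the next subsection, and the tarball name below is only a placeholder:

  # Unpack the Squid source and build it with interception support for Linux
  tar xzf squid-2.6.STABLEx.tar.gz    # hypothetical version/file name
  cd squid-2.6.STABLEx
  ./configure --enable-linux-netfilter
  make
  make install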
Choosing the right options to pass to ./configure

All supported versions of Squid currently available support Interception Caching; however, for this to work properly, your operating system and network also need to be configured. For some operating systems, you need to have configured and built a version of Squid which can recognize the hijacked connections and discern the destination addresses. For Linux, configure Squid with the --enable-linux-netfilter option. For *BSD-based systems with IP Filter, configure Squid with the --enable-ipf-transparent option. If you're using OpenBSD's PF, configure Squid with --enable-pf-transparent. Do a make clean if you previously configured without that option, or the correct settings may not be present. Squid-2.6+ and Squid-3.0+ support both WCCPv1 and WCCPv2 by default (unless explicitly disabled).

Configure Squid to accept and process the redirected port 80 connections

You have to change the Squid configuration settings to recognize the hijacked connections and discern the destination addresses. A number of different interception methods and their specific configuration are detailed at ConfigExamples/Intercept

You can usually still manually configure browsers to connect to the IP address and port which you have specified as intercepted. The only drawback is that there will be a very slight (and probably unnoticeable) performance hit, as a syscall is done to see whether the connection is intercepted. If no interception state is found, it is processed just like a normal connection.

Getting your traffic to the right port on your Squid Cache

You have to configure your cache host to accept the redirected packets - any IP address, on port 80 - and deliver them to your cache application. This is typically done with IP filtering/forwarding features built into the kernel. On Linux 2.4 and above this is called iptables. On FreeBSD it is called ipfw. Other BSD systems may use IP Filter, ipnat or pf. On most systems, it may require rebuilding the kernel or adding a new loadable kernel module. If you are running a modern Linux distribution and using the vendor-supplied kernel, you will likely not need to do any rebuilding, as the required modules will have been built by default.

Interception Caching packet redirection for Solaris, SunOS, and BSD systems

You don't need to use IP Filter on FreeBSD. Use the built-in ipfw feature instead. See the FreeBSD subsection below.

Install IP Filter

First, get and install the IP Filter package.

Configure ipnat

Put these lines in /etc/ipnat.rules:

  # Redirect direct web traffic to local web server.
  rdr de0 1.2.3.4/32 port 80 -> 1.2.3.4 port 80 tcp
  # Redirect everything else to squid on port 8080
  rdr de0 0.0.0.0/0 port 80 -> 1.2.3.4 port 8080 tcp

Modify your startup scripts to enable ipnat. For example, on FreeBSD it looks something like this:

  /sbin/modload /lkm/if_ipl.o
  /sbin/ipnat -f /etc/ipnat.rules
  chgrp nobody /dev/ipnat
  chmod 644 /dev/ipnat

Thanks to Quinton Dolan.

Interception Caching packet redirection for OpenBSD PF

After having compiled Squid with the options to accept and process the redirected port 80 connections enumerated above, either manually or with FLAVOR=transparent for /usr/ports/www/squid, one needs to add a redirection rule to pf (/etc/pf.conf).
In the following example, sk0 is the interface on which traffic you want transparently redirected will arrive: i = "sk0" rdr on $i inet proto tcp from any to any port 80 -> $i port 3128 pass on $i inet proto tcp from $i:network to $i port 3128 Or, depending on how recent your implementation of PF is: i = "sk0" rdr pass on $i inet proto tcp to any port 80 -> $i port 3128 Also, see Daniel Hartmeier's page on the subject. Get the packets from the end clients to your cache server There are several ways to do this. First, if your proxy machine is already in the path of the packets (i.e. it is routing between your proxy users and the Internet) then you don't have to worry about this step as the Interception Caching should now be working. This would be true if you install Squid on a firewall machine, or on a UNIX-based router. If the cache is not in the natural path of the connections, then you have to divert the packets from the normal path to your cache host using a router or switch. 63 If you are using an external device to route the traffic to your Cache, there are multiple ways of doing this. You may be able to do this with a Cisco router using WCCP, or the "route map" feature. You might also use a so-called layer-4 switch, such as the Alteon ACE-director or the Foundry Networks ServerIron. Finally, you might be able to use a stand-alone router/load-balancer type product, or routing capabilities of an access server. Interception Caching packet redirection with Cisco routers using policy routing (NON WCCP) by John Saunders This works with at least IOS 11.1 and later. If your router is doing anything more complicated that shuffling packets between an ethernet interface and either a serial port or BRI port, then you should work through if this will work for you. First define a route map with a name of proxy-redirect (name doesn't matter) and specify the next hop to be the machine Squid runs on. ! route-map proxy-redirect permit 10 match ip address 110 set ip next-hop 203.24.133.2 ! Define an access list to trap HTTP requests. The second line allows the Squid host direct access so an routing loop is not formed. By carefully writing your access list as show below, common cases are found quickly and this can greatly reduce the load on your router's processor. ! access-list 110 deny tcp any any neq www access-list 110 deny tcp host 203.24.133.2 any access-list 110 permit tcp any any ! Apply the route map to the ethernet interface. ! 64 interface FastEthernet0/0 ip policy route-map proxy-redirect ! Shortcomings of the cisco ip policy route-map method notes that there is a Cisco bug relating to interception proxying using IP policy route maps, that causes NFS and other applications to break. Apparently there are two bug reports raised in Cisco, but they are not available for public dissemination. Bruce Morgan The problem occurs with o/s packets with more than 1472 data bytes. If you try to ping a host with more than 1472 data bytes across a Cisco interface with the access-lists and ip policy route map, the icmp request will fail. The packet will be fragmented, and the first fragment is checked against the access-list and rejected it goes the "normal path" as it is an icmp packet - however when the second fragment is checked against the access-list it is accepted (it isn't regarded as an icmp packet), and goes to the action determined by the policy route map! notes that you may be able to get around this bug by carefully writing your access lists. 
If the last/default rule is to permit then this bug would be a problem, but if the last/default rule was to deny then it won't be a problem. I guess fragments, other than the first, don't have the information available to properly policy route them. Normally TCP packets should not be fragmented, at least my network runs an MTU of 1500 everywhere to avoid fragmentation. So this would affect UDP and ICMP traffic only. John Basically, you will have to pick between living with the bug or better performance. This set has better performance, but suffers from the bug: access-list 110 deny tcp any any neq www access-list 110 deny tcp host 10.1.2.3 any access-list 110 permit tcp any any Conversely, this set has worse performance, but works for all protocols: access-list 110 deny tcp host 10.1.2.3 any access-list 110 permit tcp any any eq www access-list 110 deny tcp any any 65 Interception Caching packet redirection with Foundry L4 switches by at shreve dot net Brian Feeny. First, configure Squid for interception caching as detailed at the beginning of this section. Next, configure the Foundry layer 4 switch to redirect traffic to your Squid box or boxes. By default, the Foundry redirects to port 80 of your squid box. This can be changed to a different port if needed, but won't be covered here. In addition, the switch does a "health check" of the port to make sure your squid is answering. If you squid does not answer, the switch defaults to sending traffic directly thru instead of redirecting it. When the Squid comes back up, it begins redirecting once again. This example assumes you have two squid caches: squid1.foo.com 192.168.1.10 squid2.foo.com 192.168.1.11 We will assume you have various workstations, customers, etc, plugged into the switch for which you want them to be intercepted and sent to Squid. The squid caches themselves should be plugged into the switch as well. Only the interface that the router is connected to is important. Where you put the squid caches or ther connections does not matter. This example assumes your router is plugged into interface 17 of the switch. If not, adjust the following commands accordingly. Enter configuration mode: telnet@ServerIron#conf t Configure each squid on the Foundry: telnet@ServerIron(config)# server cache-name squid1 192.168.1.10 telnet@ServerIron(config)# server cache-name squid2 192.168.1.11 Add the squids to a cache-group: telnet@ServerIron(config)#server cache-group 1 telnet@ServerIron(config-tc-1)#cache-name squid1 telnet@ServerIron(config-tc-1)#cache-name squid2 66 Create a policy for caching http on a local port telnet@ServerIron(config)# ip policy 1 cache tcp http local Enable that policy on the port connected to your router telnet@ServerIron(config)#int e 17 telnet@ServerIron(config-if-17)# ip-policy 1 Since all outbound traffic to the Internet goes out interface 17 (the router), and interface 17 has the caching policy applied to it, HTTP traffic is going to be intercepted and redirected to the caches you have configured. The default port to redirect to can be changed. The load balancing algorithm used can be changed (Least Used, Round Robin, etc). Ports can be exempted from caching if needed. Access Lists can be applied so that only certain source IP Addresses are redirected, etc. This information was left out of this document since this was just a quick howto that would apply for most people, not meant to be a comprehensive manual of how to configure a Foundry switch. 
I can however revise this with any information necessary if people feel it should be included. Interception Caching packet redirection with an Alcatel OmnySwitch 7700 by Pedro A M Vazquez On the switch define a network group to be intercepted: policy network group MyGroup 10.1.1.0 mask 255.255.255.0 Define the tcp services to be intercepted: policy service web80 destination tcp port 80 policy service web8080 destination tcp port 8080 Define a group of services using the services above: policy service group WebPorts web80 web8080 And use these to create an intercept condition: policy condition WebFlow source network group MyGroup service group WebPorts Now, define an action to redirect the traffic to the host running squid: 67 policy action Redir alternate gateway ip 10.1.2.3 Finally, create a rule using this condition and the corresponding action: policy rule Intercept condition WebFlow action Redir Apply the rules to the QoS system to make them effective qos apply Don't forget that you still need to configure Squid and Squid's operating system to handle the intercepted connections. See above for Squid and OS-specific details. Interception Caching packet redirection with Cabletron/Entrasys products By Dave Wintrip, dave at purevanity dot net, June 3, 2004. I have verified this configuration as working on a Cabletron SmartSwitchRouter 2000, and it should work on any layer-4 aware Cabletron or Entrasys product. You must first configure Squid to enable interception caching, outlined earlier. Next, make sure that you have connectivity from the layer-4 device to your squid box, and that squid is correctly configured to intercept port 80 requests thrown it's way. I generally create two sets of redirect ACLs, one for cache, and one for bypassing the cache. This method of interception is very similar to Cisco's route-map. Log into the device, and enter enable mode, as well as configure mode. ssr> en Password: ssr# conf ssr(conf)# I generally create two sets of redirect ACLs, one for specifying who to cache, and one for destination addresses that need to bypass the cache. This method of interception is very similar to Cisco's route-map in this way. The ACL cache-skip is a list of destination addresses that we do not want to transparently redirect to squid. 68 ssr(conf)# acl cache-skip permit tcp any 192.168.1.100/255.255.255.255 any http The ACL cache-allow is a list of source addresses that will be redirected to Squid. ssr(conf)# acl cache-allow permit tcp 10.0.22.0/255.255.255.0 any any http Save your new ACLs to the running configuration. ssr(conf)# save a Next, we need to create the ip-policies that will work to perform the redirection. Please note that 10.0.23.2 is my Squid server, and that 10.0.24.1 is my standard default next hop. By pushing the cache-skip ACL to the default gateway, the web request is sent out as if the squid box was not present. This could just as easily be done using the squid configuration, but I would rather Squid not touch the data if it has no reason to. ssr(conf)# ip-policy cache-allow permit acl cache-allow next-hoplist 10.0.23.2 action policy-only ssr(conf)# ip-policy cache-skip permit acl cache-skip next-hoplist 10.0.24.1 action policy-only Apply these new policies into the active configuration. ssr(conf)# save a We now need to apply the ip-policies to interfaces we want to cache requests from. 
Assuming that localnet-gw is the interface name to the network we want to cache requests from, we first apply the cache-skip ACL to intercept requests on our do-not-cache list, and forward them out the default gateway. We then apply the cache-allow ACL to the same interface to redirect all other requests to the cache server. ssr(conf)# ip-policy cache-skip apply interface localnet-gw ssr(conf)# ip-policy cache-allow apply interface localnet-gw We now need to apply, and permanently save our changes. Nothing we have done before this point would effect anything without adding the ip-policy applications into the active configuration, so lets try it. ssr(conf)# save a ssr(conf)# save s 69 Provided your Squid box is correct configured, you should now be able to surf, and be transparently cached if you are using the localnet-gw address as your gateway. Some Cabletron/Entrasys products include another method of applying a web cache, but details on configuring that is not covered in this document, however is it fairly straight forward. Also note, that if your Squid box is plugged directly into a port on your layer-4 switch, and that port is part of its own VLAN, and its own subnet, if that port were to change states to down, or the address becomes uncontactable, then the switch will automatically bypass the ip-policies and forward your web request though the normal means. This is handy, might I add. Interception Caching packet redirection with ACC Tigris digital access server by John Saunders This is to do with configuring interception proxy for an ACC Tigris digital access server (like a CISCO 5200/5300 or an Ascend MAX 4000). I've found that doing this in the NAS reduces traffic on the LAN and reduces processing load on the CISCO. The Tigris has ample CPU for filtering. Step 1 is to create filters that allow local traffic to pass. Add as many as needed for all of your address ranges. ADD PROFILE IP FILTER ENTRY local1 INPUT 0.0.0.0 0.0.0.0 NORMAL 10.0.3.0 255.255.255.0 ADD PROFILE IP FILTER ENTRY local2 INPUT 0.0.0.0 0.0.0.0 NORMAL 10.0.4.0 255.255.255.0 Step 2 is to create a filter to trap port 80 traffic. ADD PROFILE IP FILTER ENTRY http INPUT 0.0.0.0 = 0x6 D= 80 NORMAL 0.0.0.0 0.0.0.0 0.0.0.0 Step 3 is to set the "APPLICATION_ID" on port 80 traffic to 80. This causes all packets matching this filter to have ID 80 instead of the default ID of 0. SET PROFILE IP FILTER APPLICATION_ID http 80 70 Step 4 is to create a special route that is used for packets with "APPLICATION_ID" set to 80. The routing engine uses the ID to select which routes to use. ADD IP ROUTE ENTRY 0.0.0.0 0.0.0.0 PROXY-IP 1 SET IP ROUTE APPLICATION_ID 0.0.0.0 0.0.0.0 PROXY-IP 80 Step 5 is to bind everything to a filter ID called transproxy. List all local filters first and the http one last. ADD PROFILE ENTRY transproxy local1 local2 http With this in place use your RADIUS server to send back the "Framed-Filter-Id = transproxy" key/value pair to the NAS. You can check if the filter is being assigned to logins with the following command: display profile port table WCCP - Web Cache Coordination Protocol Contributors: Glenn Chisholm, Lincoln Dale and ReubenFarrelly. WCCP is a very common and indeed a good way of doing Interception Caching as it adds additional features and intelligence to the traffic redirection process. WCCP is a dynamic service in which a cache engine communicates to a router about it's status, and based on that the router decides whether or not to redirect the traffic. 
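For orientation, the router side of the basic WCCP "web-cache" service on a Cisco router usually boils down to something like the following sketch; the interface name is an example only, and the exact syntax for your platform should be checked against the router configuration notes below and Cisco's documentation:

  ! Enable the standard web-cache service globally
  ip wccp web-cache
  ! Redirect inbound client HTTP traffic on the LAN-facing interface
  interface FastEthernet0/0
   ip wccp web-cache redirect in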
This means that if your cache becomes unavailable, the router will automatically stop attempting to forward traffic to it and end users will not be affected (and likely not even notice that your cache is out of service). WCCPv1 is documented in the Internet-Draft draft-forster-wrec-wccp-v1-00.txt and WCCPv2 is documented in draft-wilson-wrec-wccp-v2-00.txt. For WCCP to work, you firstly need to configure your Squid Cache, and additionally configure the host OS to redirect the HTTP traffic from port 80 to whatever port your Squid box is listening to the traffic on. Once you have done this you can then proceed to configure WCCP on your router. 71 Does Squid support WCCP? Cisco's Web Cache Coordination Protocol V1.0 and WCCPv2 are both supported in all current versions of Squid. Do I need a cisco router to run WCCP? No. Originally WCCP support could only be found on cisco devices, but some other network vendors now support WCCP as well. If you have any information on how to configure non-cisco devices, please post this here. Can I run WCCP with the Windows port of Squid? Technically it may be possible, but we have not heard of anyone doing so. The easiest way would be to use a Layer 3 switch and doing Layer 2 MAC rewriting to send the traffic to your cache. If you are using a router then you will need to find out a way to decapsulate the GRE/WCCP traffic that the router sends to your Windows cache (this is a function of your OS, not Squid). Where can I find out more about WCCP? Cisco have some good content on their website about WCCP. One of the better documents which lists the features and describes how to configure WCCP on their routers can be found on there website here. There is also a more technical document which describes the format of the WCCP packets at Colasoft Cisco router software required for WCCP This depends on whether you are running a switch or a router. IOS support in Cisco Routers Almost all Cisco routers support WCCP provided you are running IOS release 12.0 or above, however some routers running older software require an upgrade to their software feature sets to a 'PLUS' featureset or better. WCCPv2 is supported on almost all routers in recent IPBASE releases. Cisco's Feature Navigator at http://www.cisco.com/go/fn runs an up to date list of which platforms support WCCPv2. 72 Generally you should run the latest release train of IOS for your router that you can. We do not recommend you run T or branch releases unless you have fully tested them out in a test environment before deployment as WCCP requires many parts of IOS to work reliably. The latest mainline 12.1, 12.2, 12.3 and 12.4 releases are generally the best ones to use and should be the most trouble free. Note that you will need to set up a GRE or WCCP tunnel on your cache to decapsulate the packets your router sends to it. IOS support in Cisco Switches High end Cisco switches support Layer 2 WCCPv2, which means that instead of a GRE tunnel transport, the ethernet frames have their next hop/destination MAC address rewritten to that of your cache engine. This is far faster to process by the hardware than the router/GRE method of redirection, and in fact on some platforms such as the 6500s may be the only way WCCP can be configured. L2 redirection is supposedly capable of redirecting in excess of 30 million PPS on the high end 6500 Sup cards. Cisco switches known to be able to do WCCPv2 include the Catalyst 3550 (very basic WCCP only), Catalyst 4500-SUP2 and above, and all models of the 6000/6500. 
Note that the Catalyst 2900, 3560, 3750 and 4000 early Supervisors do NOT support WCCP (at all). Layer 2 WCCP is a WCCPv2 feature and does not exist in cisco's WCCPv1 implementation. WCCPv2 Layer 2 redirection was added in 12.1E and 12.2S. It is always advisable to read the release notes for the version of software you are running on your switch before you deploy WCCP. Software support in Cisco Firewalls (PIX OS) Version 7.2(1) of the cisco PIX software now also supports WCCP, allowing you to do WCCP redirection with this appliance rather than having to have a router do the redirection. 7.2(1) has been tested and verified to work with Squid-2.6. 73 What about WCCPv2? WCCPv2 is a new feature to Squid-2.6 and Squid-3.0. WCCPv2 configuration is similar to the WCCPv1 configuration. The directives in squid.conf are slightly different but are well documented within that file. Router configuration for WCCPv2 is identical except that you must not force the router to use WCCPv1 (it defaults to WCCPv2 unless you tell it otherwise). Configuring your router There are two different methods of configuring WCCP on Cisco routers. The first method is for routers that only support V1.0 of the protocol. The second is for routers that support both. Cache/Host configuration of WCCP There are two parts to this. Firstly you need to configure Squid to talk WCCP, and additionally you need to configure your operating system to decapsulate the WCCP traffic as it comes from the router. Configuring Squid to talk WCCP The configuration directives for this are well documented in squid.conf. For WCCPv1, you need these directives: wccp_router a.b.c.d wccp_version 4 wccp_incoming_address e.f.g.h wccp_outgoing_address e.f.g.h a.b.c.d is the address of your WCCP router e.f.g.h is the address that you want your WCCP requests to come and go from. If you are not sure or have only a single IP address on your cache, do not specify these. Note: do NOT configure both the WCCPv1 directives (wccp_*) and WCCPv2 (wccp2_*) options at the same time in your squid.conf. Squid only supports configuration of one version at a time, either WCCPv1 or WCCPv2. With no configuration, the unconfigured version(s) are not enabled. Unpredictable things might happen if you configure both sets of options. 74 For WCCPv2, then you will want something like this: wccp2_router a.b.c.d wccp2_version 4 wccp2_forwarding_method 1 wccp2_return_method 1 wccp2_service standard 0 wccp2_outgoing_address e.f.g.h Use a wccp_forwarding_method and wccp2_return_method of 1 if you are using a router and GRE/WCCP tunnel, or 2 if you are using a Layer 3 switch to do the forwarding. Your wccp2_service should be set to standard 0 which is the standard HTTP redirection. a.b.c.d is the address of your WCCP router e.f.g.h is the address that you want your WCCP requests to come and go from. If you are not sure or have only a single IP address on your cache, do not specify these parameters as they are usually not needed. Now you need to read on for the details of configuring your operating system to support WCCP. Configuring FreeBSD FreeBSD first needs to be configured to receive and strip the GRE encapsulation from the packets from the router. The steps depend on your kernel version. FreeBSD 4.8 and later The operating system now comes standard with some GRE support. 
You need to make a kernel with the GRE code enabled: pseudo-device gre And then configure the tunnel so that the router's GRE packets are accepted: # ifconfig gre0 create # ifconfig gre0 $squid_ip $router_ip netmask 255.255.255.255 up # ifconfig gre0 tunnel $squid_ip $router_ip 75 # route delete $router_ip Alternatively, you can try it like this: ifconfig gre0 create ifconfig gre0 $squid_ip 10.20.30.40 netmask 255.255.255.255 link1 tunnel $squid_ip $router_ip up Since the WCCP/GRE tunnel is one-way, Squid never sends any packets to 10.20.30.40 and that particular address doesn't matter. FreeBSD 6.x and later FreeBSD 6.x has GRE support in kernel by default. It also supports both WCCPv1 and WCCPv2. From gre(4) manpage: "Since there is no reliable way to distinguish between WCCP versions, it should be configured manually using the link2 flag. If the link2 flag is not set (default), then WCCP version 1 is selected." The rest of configuration is just as it was in 4.8+ Standard Linux GRE Tunnel Linux versions earlier than 2.6.9 may need to be patched to support WCCP. That is why we strongly recommend you run a recent version of the Linux kernel, as if you are you simply need to modprobe the module to gain it's functionality. Ensure that the GRE code is either built as static or as a module by chosing the appropriate option in your kernel config. Then rebuild your kernel. If it is a module you will need to: modprobe ip_gre The next step is to tell Linux to establish an IP tunnel between the router and your host. ip tunnel add wccp0 mode gre remote <Router-External-IP> local <Host-IP> dev <interface> ip addr add <Host-IP>/32 dev wccp0 ip link set wccp0 up or if using the older network tools iptunnel add wccp0 mode gre remote <Router-External-IP> local <Host-IP> dev <interface> 76 ifconfig wccp0 <Host-IP> netmask 255.255.255.255 up <Router-External-IP> is the extrnal IP address of your router that is intercepting the HTTP packets. <Host-IP> is the IP address of your cache, and <interface> is the network interface that receives those packets (probably eth0). Note that WCCP is incompatible with the rp_filter function in Linux and you must disable this if enabled. If enabled any packets redirected by WCCP and intercepted by Netfilter/iptables will be silendly discarded by the TCP/IP stack due to their "unexpected" origin from the gre interface. echo 0 >/proc/sys/net/ipv4/conf/wccp0/rp_filter And then you need to tell the Linux NAT kernel to redirect incoming traffic on the wccp0 interface to Squid iptables -t nat -A PREROUTING -i wccp0 -j REDIRECT --redirect-to 3128 TProxy Interception TProxy v2.2 TProxy is a new feature in Squid-2.6 which enhances standard Interception Caching so that it further hides the presence of your cache. Normally with Interception Caching the remote server sees your cache engine as the source of the HTTP request. TProxy takes this a step further by hiding your cache engine so that the end client is seen as the source of the request (even though really they aren't). Here are some notes by StevenWilton on how to get TProxy working properly: I've got TProxy + WCCPv2 working with squid 2.6. There are a few things that need to be done: The kernel and iptables need to be patched with the tproxy patches (and the tproxy include file needs to be placed in /usr/include/linux/netfilter_ipv4/ip_tproxy.h or include/netfilter_ipv4/ip_tproxy.h in the squid src tree). 
The iptables rule needs to use the TPROXY target (instead of the REDIRECT target) to redirect the port 80 traffic to the proxy. ie: 77 iptables -t tproxy -A PREROUTING -i eth0 -p tcp -m tcp --dport 80 -j TPROXY --on-port 80 The kernel must strip the GRE header from the incoming packets (either using the ip_wccp module, or by having a GRE tunnel set up in Linux pointing at the router (no GRE setup is required on the router)). Two WCCP services must be used, one for outgoing traffic and an inverse for return traffic from the Internet. We use the following WCCP definitions in squid.conf: wccp2_service dynamic 80 wccp2_service_info 80 protocol=tcp flags=src_ip_hash priority=240 ports=80 wccp2_service dynamic 90 wccp2_service_info 90 protocol=tcp flags=dst_ip_hash,ports_source priority=240 ports=80 It is highly recommended that the above definitions be used for the two WCCP services, otherwise things will break if you have more than one cache (specifically, you will have problems when the name of a web server resolves to multiple ip addresses). The http port that you are redirecting to must have the transparent and tproxy options enabled as follows (modify the port as appropriate): http_port 80 transparent tproxy There must be a tcp_outgoing address defined. This will need to be valid to satisfy any non-tproxied connections. On the router, you need to make sure that all traffic going to/from the customer will be processed by both WCCP rules. The way we have implemented this is to apply WCCP service 80 to all traffic coming in from a customer-facing interface, and WCCP service 90 applied to all traffic going out a customer-facing interface. We have also applied the WCCP exclude-in rule to all traffic coming in from the proxy-facing interface although this will probably not normally be necessary if all your caches have registered to the WCCP router. ie: interface GigabitEthernet0/3.100 description ADSL customers encapsulation dot1Q 502 78 ip address x.x.x.x y.y.y.y ip wccp 80 redirect in ip wccp 90 redirect out interface GigabitEthernet0/3.101 description Dialup customers encapsulation dot1Q 502 ip address x.x.x.x y.y.y.y ip wccp 80 redirect in ip wccp 90 redirect out interface GigabitEthernet0/3.102 description proxy servers encapsulation dot1Q 506 ip address x.x.x.x y.y.y.y ip wccp redirect exclude in It's highly recommended to turn httpd_accel_no_pmtu_disc on in the squid.conf. The homepage for the TProxy software is at balabit.com. TProxy v4.1+ Starting with Squid 3.1 support for TProxy is closely tied into the netfilter component of Linux kernels. see TProxy v4.1 Feature for current details. Other Configuration Examples Contributed by users who have working installations can be found in the ConfigExamples/Intercept section for most current details. If you have managed to configure your operating system to support WCCP with Squid please contact us or add the details to this wiki so that others may benefit. 79 Complete By now if you have followed the documentation you should have a working Interception Caching system. Verify this by unconfiguring any proxy settings in your browser and surfing out through your system. You should see entries appearing in your access.log for the sites you are visiting in your browser. If your system does not work as you would expect, you will want to read on to our troubleshooting section below. Troubleshooting and Questions It doesn't work. How do I debug it? Start by testing your cache. 
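A quick way to exercise the cache itself from the command line, before involving any interception at all, is to request a page directly through the proxy port. This is only a sketch; substitute your cache's address, port and log path:

  # Fetch a page through the proxy port using squidclient (ships with Squid)
  squidclient -h 192.168.0.1 -p 3128 http://www.squid-cache.org/
  # The request should then appear in the cache's access log (path may differ)
  tail -f /var/log/squid/access.log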
Check to make sure you have configured Squid with the right configure options - squid -v will tell you what options Squid was configured with. Can you manually configure your browser to talk to the proxy port? If not, you most likely have a proxy configuration problem. Have you tried unloading ALL firewall rules on your cache and/or the inside address of your network device to see if that helps? If your router or cache are inadvertently blocking or dropping either the WCCP control traffic or the GRE, things won't work. If you are using WCCP on a cisco router or switch, is the router seeing your cache? Use the command show ip wccp web-cache detail Look in your logs both in Squid (cache.log), and on your router/switch where a show log will likely tell you if it has detected your cache engine registering. On your Squid cache, set debug_options ALL,1 80,3 or for even more detail debug_options ALL,1 80,5 . The output of this will be in your cache.log. On your cisco router, turn on WCCP debugging: router#term mon router#debug ip wccp events WCCP events debugging is on router#debug ip wccp packets WCCP packet info debugging is on 80 router# !Do not forget to turn this off after you have finished your debugging session as this imposes a performance hit on your router. Run tcpdump or ethereal on your cache interface and look at the traffic, try and figure out what is going on. You should be seeing UDP packets to and from port 2048 and GRE encapsulated traffic with TCP inside it. If you are seeing messages about "protocol not supported" or "invalid protocol", then your GRE or WCCP module is not loaded, and your cache is rejecting the traffic because it does not know what to do with it. Have you configured both wccp_ and wccp2_ options? You should only configure one or the other and NOT BOTH. The most common problem people have is that the router and cache are talking to each other and traffic is being redirected from the router but the traffic decapsulation process is either broken or (as is almost always the case) misconfigured. This is often a case of your traffic rewriting rules on your cache not being applied correctly (see section 2 above - Getting your traffic to the right port on your Squid Cache). Run the most recent General Deployment (GD) release of the software train you have on your router or switch. Broken IOS's can also result in broken redirection. A known good version of IOS for routers with no apparent WCCP breakage is 12.3(7)T12. There was extensive damage to WCCP in 12.3(8)T up to and including early 12.4(x) releases. 12.4(8) is known to work fine as long as you are not doing ip firewall inspection on the interface where your cache is located. If none of these steps yield any useful clues, post the vital information including the versions of your router, proxy, operating system, your traffic redirection rules, debugging output and any other things you have tried to the squid-users mailing list. Why can't I use authentication together with interception proxying? Interception Proxying works by having an active agent (the proxy) where there should be none. The browser is not expecting it to be there, and it's for all effects and purposes being cheated or, at best, confused. As an user of that browser, I would require it not to give away any credentials to an unexpected party, wouldn't you agree? 
Especially so when the user-agent can do so without notifying the user, as Microsoft browsers can do when the proxy offers any of the Microsoft-designed authentication schemes such as NTLM (see ../ProxyAuthentication and Features/NegotiateAuthentication). In other words, it's not a Squid bug, but a browser security feature.

Can I use ''proxy_auth'' with interception?

No, you cannot. See the answer to the previous question. With interception proxying, the client thinks it is talking to an origin server and would never send the Proxy-Authorization request header.

"Connection reset by peer" and Cisco policy routing

Fyodor has tracked down the cause of unusual "connection reset by peer" messages when using Cisco policy routing to hijack HTTP requests. When the network link between the router and the cache goes down for just a moment, the packets that are supposed to be redirected are instead sent out the default route. If this happens, a TCP ACK from the client host may be sent to the origin server instead of being diverted to the cache. The origin server, upon receiving an unexpected ACK packet, sends a TCP RESET back to the client, which aborts the client's request.

To work around this problem, you can install a static route to the null0 interface for the cache address with a higher metric (lower precedence), such as 250. Then, when the link goes down, packets from the client just get dropped instead of being sent out the default route. For example, if 1.2.3.4 is the IP address of your Squid cache, you may add:

  ip route 1.2.3.4 255.255.255.255 Null0 250

This appears to cause the correct behaviour.

Further information about configuring Interception Caching with Squid

ReubenFarrelly has written a fairly comprehensive but somewhat incomplete guide to configuring WCCP with cisco routers on his website. You can find it at www.reub.net.

DuaneWessels has written an O'Reilly book about Web Caching which is an invaluable reference guide for Squid (and in fact non-Squid) cache administrators. A sample chapter on "Interception Proxying and Caching" from his book is online at http://www.oreilly.com/catalog/webcaching/chapter/ch05.html.

Issues with HotMail

Hotmail has been known to suffer from the HTTP/1.1 Transfer Encoding problem. See http://squidproxy.wordpress.com/2008/04/29/chunked-decoding/ for more details on that and some solutions.

Linux: Setup a transparent proxy with Squid in three easy steps
by LinuxTitli (nixcraft)

Yesterday I got a chance to play with Squid and iptables. My job was simple: set up a Squid proxy as a transparent server. The main benefit of a transparent proxy is that you do not have to set up individual browsers to work with the proxy.

My setup:
i) System: HP dual Xeon CPU system with 8 GB RAM (good for Squid)
ii) Eth0: IP 192.168.1.1
iii) Eth1: IP 192.168.2.1 (192.168.2.0/24 network, around 150 Windows XP systems)
iv) OS: Red Hat Enterprise Linux 4.0 (the following instructions should work with Debian and all other Linux distros)

Eth0 is connected to the Internet and eth1 to the local LAN, i.e. the system acts as a router. A quick sanity check of this layout is sketched below.
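A minimal sketch for verifying the layout before touching Squid or iptables; interface names and addresses match the setup above, and the output will of course differ on other systems:

  # Confirm both interfaces carry the expected addresses
  ip addr show eth0
  ip addr show eth1
  # Confirm whether the box is currently allowed to forward packets between them
  cat /proc/sys/net/ipv4/ip_forward    # prints 1 once forwarding is enabled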
Server Configuration
Step #1: Configure Squid so that it acts as a transparent proxy
Step #2: Configure iptables
  a) Configure the system as a router
  b) Forward all HTTP requests to port 3128 (DNAT)
Step #3: Run the script and start the Squid service
First, install the Squid server (use up2date squid) and configure it by adding the following directives to the configuration file:
# vi /etc/squid/squid.conf
Modify or add the following Squid directives:
httpd_accel_host virtual
httpd_accel_port 80
httpd_accel_with_proxy on
httpd_accel_uses_host_header on
acl lan src 192.168.1.1 192.168.2.0/24
http_access allow localhost
http_access allow lan
Where,
httpd_accel_host virtual: run Squid as an httpd accelerator
httpd_accel_port 80: 80 is the port you want to accelerate/proxy
httpd_accel_with_proxy on: Squid acts as both a local httpd accelerator and a proxy
httpd_accel_uses_host_header on: take the hostname from the Host header of the request
acl lan src 192.168.1.1 192.168.2.0/24: access control list; only allow LAN computers to use Squid
http_access allow localhost: allow access to Squid from localhost
http_access allow lan: allow access to Squid from the lan ACL
Here is the complete listing of squid.conf for your reference (grep removes all comments and sed removes all empty lines, thanks to David Klein for the quick hint):
# grep -v "^#" /etc/squid/squid.conf | sed -e '/^$/d'
OR, try out sed (thanks to kotnik for the small sed trick):
# cat /etc/squid/squid.conf | sed '/ *#/d; /^ *$/d'
Output:
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY
hosts_file /etc/hosts
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern . 0 20% 4320
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl purge method PURGE
acl CONNECT method CONNECT
cache_mem 1024 MB
http_access allow manager localhost
http_access deny manager
http_access allow purge localhost
http_access deny purge
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
acl lan src 192.168.1.1 192.168.2.0/24
http_access allow localhost
http_access allow lan
http_access deny all
http_reply_access allow all
icp_access allow all
visible_hostname myclient.hostname.com
httpd_accel_host virtual
httpd_accel_port 80
httpd_accel_with_proxy on
httpd_accel_uses_host_header on
coredump_dir /var/spool/squid
Iptables configuration
Next, add the following rules to forward all HTTP requests (coming to port 80) to the Squid server port 3128:
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j DNAT --to 192.168.1.1:3128
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128
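Before moving on to the firewall script, a side note for anyone following this on a newer release: the httpd_accel_* directives shown above are Squid 2.5 syntax. On Squid 2.6 and 3.0 they were removed and interception is instead switched on per listening port (the Squid FAQ excerpt quoted in reader comment 97 below makes the same point). A minimal sketch of the equivalent squid.conf lines:
# Squid 2.6 / 3.0: enable interception on the listening port
# (Squid 3.1 and later renamed the flag to "intercept")
http_port 3128 transparent
acl lan src 192.168.1.1 192.168.2.0/24
http_access allow localhost
http_access allow lan
The iptables rules stay exactly the same; only the Squid side of the configuration changes.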
Here is the complete shell script. The script first configures the Linux system as a router and then forwards all HTTP requests to port 3128 (download the fw.proxy shell script):
#!/bin/sh
# squid server IP
SQUID_SERVER="192.168.1.1"
# Interface connected to Internet
INTERNET="eth0"
# Interface connected to LAN
LAN_IN="eth1"
# Squid port
SQUID_PORT="3128"
# DO NOT MODIFY BELOW
# Clean old firewall
iptables -F
iptables -X
iptables -t nat -F
iptables -t nat -X
iptables -t mangle -F
iptables -t mangle -X
# Load IPTABLES modules for NAT and IP conntrack support
modprobe ip_conntrack
modprobe ip_conntrack_ftp
# For win xp ftp client
#modprobe ip_nat_ftp
echo 1 > /proc/sys/net/ipv4/ip_forward
# Setting default filter policy
iptables -P INPUT DROP
iptables -P OUTPUT ACCEPT
# Unlimited access to loop back
iptables -A INPUT -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT
# Allow UDP, DNS and Passive FTP
iptables -A INPUT -i $INTERNET -m state --state ESTABLISHED,RELATED -j ACCEPT
# set this system as a router for rest of LAN
iptables --table nat --append POSTROUTING --out-interface $INTERNET -j MASQUERADE
iptables --append FORWARD --in-interface $LAN_IN -j ACCEPT
# unlimited access to LAN
iptables -A INPUT -i $LAN_IN -j ACCEPT
iptables -A OUTPUT -o $LAN_IN -j ACCEPT
# DNAT port 80 requests coming from LAN systems to squid 3128 ($SQUID_PORT) aka transparent proxy
iptables -t nat -A PREROUTING -i $LAN_IN -p tcp --dport 80 -j DNAT --to $SQUID_SERVER:$SQUID_PORT
# if it is the same system
iptables -t nat -A PREROUTING -i $INTERNET -p tcp --dport 80 -j REDIRECT --to-port $SQUID_PORT
# DROP everything and Log it
iptables -A INPUT -j LOG
iptables -A INPUT -j DROP
Save the shell script. Execute the script so that the system starts acting as a router and forwards the ports:
# chmod +x /etc/fw.proxy
# /etc/fw.proxy
# service iptables save
# chkconfig iptables on
Start or restart Squid:
# /etc/init.d/squid restart
# chkconfig squid on
Desktop / Client computer configuration
Point all desktop clients to your eth1 IP address (192.168.2.1) as the Router/Gateway (use DHCP to distribute this information). You do not have to set up individual browsers to work with proxies.
How do I test my squid proxy is working correctly?
Watch the access log file /var/log/squid/access.log:
# tail -f /var/log/squid/access.log
The above command monitors all incoming requests as they are logged to /var/log/squid/access.log. Now, if somebody accesses a website through a browser, Squid will log the request. (A couple of additional checks are sketched after the problems and solutions below.)
Problems and solutions
(a) Windows XP FTP Client
All desktop client FTP session requests ended with an error: Illegal PORT command. I had to load the ip_nat_ftp kernel module. Just type the following command, press Enter and voila:
# modprobe ip_nat_ftp
Please note that the modprobe command is already present in the shell script above (commented out; uncomment it if you need FTP from Windows XP clients).
(b) Port 443 redirection
I had blocked all connection requests at our router except those from our proxy (192.168.1.1) server, so all ports including 443 (https/ssl) were denied. You cannot redirect port 443. From the Debian mailing list: "Long answer: SSL is specifically designed to prevent 'man in the middle' attacks, and setting up squid in such a way would be the same as such a 'man in the middle' attack. You might be able to successfully achieve this, but not without breaking the encryption and certification that is the point behind SSL." Therefore, I quickly reopened port 443 (on the router firewall) for all my LAN computers and the problem was solved.
(c) Squid proxy authentication in transparent mode
You cannot use Squid authentication with a transparently intercepting proxy.
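Beyond tailing access.log, two quick checks can confirm that the redirect is actually catching traffic; the table, chain and port are the ones assumed by the fw.proxy script above:
# Packet/byte counters on the PREROUTING DNAT rule should climb while a LAN client browses;
# counters stuck at zero mean the rule is never matching.
iptables -t nat -L PREROUTING -n -v
# Confirm Squid is really listening on the port the rule redirects to
netstat -tlnp | grep 3128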
Further reading:
How do I use the Iptables connection tracking feature?
How do I build a Simple Linux Firewall for a DSL/Dial-up connection?
Update: Forum topic discussion: Setting up a transparent proxy with Squid peering to an ISP squid server
Squid, a user's guide
Squid FAQ
Transparent Proxy with Linux and Squid mini-HOWTO
Updated for accuracy.
Last Updated: Dec/5/07
234 comments
1 Jay of Today May 27, 2006
you gotta be kidding, only 150 desktops and 8 gigs of RAM??????? I use to have p133 with 64megs with that setup way back then!!! bah, newschoolers SUCKS
Reply
2 LinuxTitli May 27, 2006
LOL :D 8GB gives you the best performance. Squid performance = more ram + fast SCSI disk. Cost of RAM: yet another reason or factor to have more ram. Even people started to use desktop systems with 1GiB :P
Reply
3 kotnik May 27, 2006
Use the following sed magic to remove both comments and empty lines at the same expense:
sed '/ *#/d; /^ *$/d'
Reply
4 LinuxTitli May 27, 2006
kotnik, Nice sed trick, no need to use grep :) Appreciate your post.
Reply
5 Aaron May 28, 2006
Hi, I have a similar setup, only one question: How do I block Yahoo and MSN messengers (block at router or transparent proxy+iptables level)?
Cheers, Aaron
Reply
6 LinuxTitli May 28, 2006
Aaron, My firewall policy @ router:
Default firewall policy: close all doors and open only required windows
Block all incoming and outgoing requests
Open only required ports, i.e. 80 (from proxy only), 443, 21, 22, 25 etc. as per requirement.
This configuration automatically blocks the rest of the stuff. You can implement a similar policy using Squid ACLs or iptables.
Reply
7 Scott May 29, 2006
Nice, quick, down and dirty article. :-)
Aaron: http://www.mail-archive.com/[email protected]/msg38193.html will explain how to block Yahoo, MSN and other IM's.
For anyone interested, I have thrown together a HOWTO on getting Squid to work properly in conjunction with Active Directory authentication. It can be found here: http://cryptoresync.com/2006/05/18/installing-squid-withactive-directory-authentication/ Enjoy!
Reply
8 Bill May 29, 2006
Aaron,
My findings with chat networks like AIM is that, even if you block the specific ports used by the network (ie, 5190), the login server will accept connections to other ports that are common, such as 80, 25, 443, 23, etc. Your best bet for blocking chat traffic is to block the ports used by the network, as well as the IP addresses associated with the login servers, like login.oscar.aol.com.
Additionally, write your internal routing rules such that only traffic passing through your proxy can reach the Internet. Otherwise, users will be able to circumvent your proxy and use a public proxy. Reply 9 Desert Zarzamora May 29, 2006 Sometime ago, i wrote another how-to, but this time for a COMPLETELY transparent proxy. That is, a bridged proxy. That a bit more esoteric stuff, but very useful if you really can’t mess with your network topology. Have a look at: http://freshmeat.net/articles/view/1433/ Reply 10 Hans May 29, 2006 I would love to run into your office, replace your server with a Pentium 200 with 128mb of RAM… you probably wouldn’t notice the difference, if all you are using it is for squid. then I would actually make some good use of the machine. I’ve got a pentium 200 doing far more (proper proxy, apache server, svn, samba, etc etc) and handles it perfectly well ??? Reply 11 LinuxTitli May 29, 2006 @Desert Zarzamora and Scott, nice tutorial (thanks for links) @Hans, heh Well to be frank I am just admin and decision regarding h/w or infrastructure made by someone else … this is how things work in an enterprise IT division (they don’t care about money as they also make more money from core business so they want world class stuff). However, I agree with you about h/w requirement can be low to run other services. @Bill, Good advice there. 92 Appreciate all of yours post and feedback :) Reply 12 Steve May 30, 2006 just wondering do wew really need quid acting as an accelerator here? nice article, and what a beast of a proxy server i think everyone else is just jealous cos they only have p1′s Reply 13 ADHDPHP June 1, 2006 Thanks LinuxTitli!!! I really appreciate you sharing your knoledge with others! Keep up the great work! KMC Reply 14 ADHDPHP June 1, 2006 Also, LinuxTitli do you have any need to use dansguardian in conjuntion with squid for conent filtering? That would probably make good use of that RAM too! Thanks again! Reply 15 massage therapy products June 1, 2006 Well, I’ll be needing to set one of these up eventually, so you’re bookmarked. I wonder how performance would be if I set up a RAID system on USB drives… Reply 16 avanish June 1, 2006 how we can config the ftp service in squid proxy reply avanish gupta india Reply 93 17 Vivek June 1, 2006 Avanish, Add following line to config file acl ftp proto FTP http_access allow ftp If clients compters are using IE browser then Goto > Tools > Advance > and Uncheck option that reads Enable folder view for FTP-Sites. FTP proxy only work through browser and it will not work at command line. Remember squid is not a real ftp proxy. Reply 18 nesargha June 2, 2006 thank you, i had little bit problems in running the script on redhat 9 , i had remove the $lan_in etc.. and type the actual values but at last i worked fine with me nesargha india Reply 19 Aaron P June 4, 2006 Using squid transparently, you lose the ability to authenticate users (bummer). While I can understand why (to a certain degree), is there a way to just get the username for logging purposes? It’s like I’m up a (little river) without a (rowing device). I need squid for logging user hits, but I can’t do it without transparent routing. And I can’t authenticate in transparent mode due to the accelerator. Any ideas? Awesome article. Thanks! AP Reply 20 Vivek June 4, 2006 @Aaron, 94 Simple answer is you cannot do both things (transparent proxy + auth). The browser has no way of knowing it is using a proxy. So, what you can do is use automatic URL configuration (i.e. 
no transparent proxy) with WPAD. The information for WAPD and automatic URL configuration available at official Squid FAQ: http://www.squid-cache.org/Doc/FAQ/FAQ-5.html If you find any other way then let us know… Hope this helps. @nesargha, May be because of html formatting… I will upload script as a text file so that others can use it directly (but you still need to make changes to script) Reply 21 Martin Wallace June 17, 2006 I am just a newbie, but I think there’s an error in your configuration of iptables. The lines should read : iptables -t nat -A PREROUTING -i eth1 -p tcp -–dport 80 -j DNAT -–to 192.168.1.1:3128 iptables -t nat -A PREROUTING -i eth0 -p tcp –-dport 80 -j REDIRECT –to-port 3128 That is, you need –, not -, before to, to-port and dport. Correct me if I’m wrong. Martin Reply 22 Martin Wallace June 17, 2006 I see that the problem is with formatting. You need two dashes, not one, before to, to-port and dport, but they look like one (slightly longer) dasjh onm my screen. Try again: iptables -t nat -A PREROUTING -i eth1 -p tcp – –dport 80 -j DNAT – –to 192.168.1.1:3128 iptables -t nat -A PREROUTING -i eth0 -p tcp – –dport 80 -j REDIRECT – –to-port 3128 Reply 95 23 vivek June 17, 2006 Martin, I just checked the script. There is no problem. However, it looks like, HTML formatting breaks the script. Direct link to download script: http://www.cyberciti.biz/tips/wp-content/uploads/2006/06/fw.proxy.txt Hope this helps :) Reply 24 sohan July 12, 2006 i am using same rules given above , Can I block my users to use public proxy. Do i have to modify my squid.conf or Iptables Reply 25 nixcraft July 12, 2006 sohan, You just need to setup LAN ACL. If you are using above config then it only allows access from LAN. Reply 26 WebSean July 30, 2006 I am running Squid 2.5 on Macintosh OS X (10.3.7) with the handy “SquidMan” port for OS X / Darwin and it works great. The interface does allow me to make the httpd_accel_… modifications to the squid.conf file for transparent proxying, but how do I set-up the iptables step? My system uses ipfw instead and I have tried “sudo ipfw add 1000 fwd 127.0.0.1,8080 tcp from any to any 80″ only to see my port 80 malfunction. How can I configure the port 80 hijack/redirect function to get transparency working on OS X? Thanks in advance. Reply 27 tony September 6, 2010 WebSean, Did you ever get a reply back? I have similar setup browser->dansguardian->squid->internet and I’m using ipfw Can’t seem to get transparent working. 
Meaning redirecting requests coming to port 80 to dansguardian port 8080 96 I’ve tried all and each with different combinations of the following below in my ipfw ruleset – nothing works ..just goes straight to internet ..bypasses dansguadian completely ${IPF} add 01000 allow tcp from me to any 80 out via $EXT_INT setup $KS #${IPF} add 01007 fwd 127.0.0.1,8883 tcp from any to any 80 #ipfw add 50 fwd 127.0.0.1 tcp from any to any 80 #${IPF} add 01006 allow tcp from 127.0.0.1 to any 80 #${IPF} add 01007 fwd 127.0.0.1,8883 tcp from any to any 80 in recv $EXT_INT #${IPF} add 01007 fwd 127.0.0.1,8883 tcp from any to any 80 #${IPF} add 01007 fwd 127.0.0.1,8883 tcp from me to any 80 via $EXT_INT #${IPF} add 01008 allow tcp from me to any 80 out xmit lo0 #${IPF} add 01009 allow tcp from any 80 to me in recv lo0 established ${IPF} add 7000 fwd 127.0.0.1,8883 tcp from any to any 80 my squid.conf looks like this http_port 127.0.0.1:3333 transparent because that is what squid 3.1.7 version all needs Reply 28 tony September 6, 2010 WebSean, Did you ever get a reply back? I have similar setup browser->dansguardian->squid->internet and I’m using ipfw Can’t seem to get transparent working. Meaning redirecting requests coming to port 80 to dansguardian port 8883 I’ve tried all and each with different combinations of the following below in my ipfw ruleset – nothing works ..just goes straight to internet ..bypasses dansguadian completely ${IPF} add 01000 allow tcp from me to any 80 out via $EXT_INT setup $KS #${IPF} add 01007 fwd 127.0.0.1,8883 tcp from any to any 80 #ipfw add 50 fwd 127.0.0.1 tcp from any to any 80 #${IPF} add 01006 allow tcp from 127.0.0.1 to any 80 97 #${IPF} add 01007 fwd 127.0.0.1,8883 tcp from any to any 80 in recv $EXT_INT #${IPF} add 01007 fwd 127.0.0.1,8883 tcp from any to any 80 #${IPF} add 01007 fwd 127.0.0.1,8883 tcp from me to any 80 via $EXT_INT #${IPF} add 01008 allow tcp from me to any 80 out xmit lo0 #${IPF} add 01009 allow tcp from any 80 to me in recv lo0 established ${IPF} add 7000 fwd 127.0.0.1,8883 tcp from any to any 80 my squid.conf looks like this http_port 127.0.0.1:3333 transparent because that is what squid 3.1.7 version all needs Reply 29 Emre October 2, 2006 To not to see both empty lines and remarks grep can be used in this way; grep -Ev “^$|^#” /etc/squid/squid.conf Reply 30 Praveen October 29, 2006 Hi, Is it possible to retain public Ip address, while using squid, All pc in my lan having public ip address. I want to use squid. But whenever i use transparent squid, the outgoing packet keeps squid server’s ip as source ip address. how can i use squid httpd_accel without proxy. Reply 31 nixcraft October 29, 2006 The whole point of using transparent proxy/NAT is to hide internal IP address. As long as you have squid in between internet and other boxes anyone will see your squid ip address Reply 32 karthick November 11, 2006 dear, 98 cyberciti guys,thank you very very mush.because your web site is good food for linux hungry peoples. Contineue yours job with god’s blassings. By, Your’s S.Karthick Reply 33 Marlon November 15, 2006 Hi guys, I ask something about my firewall-squid-dhcp server in one box, i have eth0 for internet-connection and eth1 for local-connection…i want to do is, to be transparent proxy all clients connected at eth1 local-connection. Could you provide me the minimal config of iptables/squid.conf to make work as a transparent proxy my all-in-one linux box. i want the minimal config of iptables without filtering temporary. Thanks! 
Reply 34 nixcraft November 15, 2006 Squid config remains the same. Only iptables will changes. Type following at command prompt to get started temporary: iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to 192.168.1.1:3128 iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-port 3128 Replace 192.168.1.1 with your actual Linux server IP address (local LAN IP) Reply 35 Jaimohan November 17, 2006 Dear friends, can i run the VPN-Checkpoint software with squid using transparent proxying, please reply asap Regrds Jai 99 Reply 36 nixcraft November 17, 2006 Yes you can as long as everything is configured you should able to use VPN with any other internet service Reply 37 Mimbari November 24, 2006 For a “completely totally” transparent proxy, use http://www.balabit.com/downloads/tproxy/linux-2.6/ That way the client IP address will be used by the Squid, still caching etc too. Needs inbound routing of reply server traffic to be routed back through the Squid box though. It’s kernel & iptables patching only, yielding the tproxy iptables table.. In Valen’s Name. Reply 38 neddy November 27, 2006 Hi there, i have a few questions… 1) will this proxy things such as steam games / downloads, Microsoft updates, anti-virus updates and other things that do not run on port 80? 2) The proxy appears to work, and i have set my ip address to it, but if i download a 10mb file, then download the same file on another pc, the speeds are still slow, indicating that the proxy may not be working… when i run: “tail -f /var/log/squid/access.log” i get the log to screen & file, and it is showing that there is data being proxied, but everything still runs ‘slow’ 3) I am running it on public ip addresses, one for the eth0 (internet) 203.16.209.x and the second ip address for the people using the proxy is eth1 (lan) 203.221.91.x the proxy all works, but could this be why it is running slow? - cheers Reply 39 nixcraft November 27, 2006 Neddy , 100 Yes everything should work as long as remote site is using port 80 for downloading updates and patches. If you need to cache larger file you need to enable cache object size. Default is 4 MB. However it is not recommended to use such large cache object size until and unless you have monster cache server (normally ISP enables large cache object). You need to tune out your squid for this. The defaults are good to improve overall user experience. Proxy should work fast. Make sure you have correct DNS server setup. Try to use OpenDNS server http://opendns.com/ HTH. Reply 40 woodsturtle November 29, 2006 I am having trouble accessing an MS sharepoint server through squid 2.6 configured in transparent proxy mode. Everything that I have read so far suggest that I must bypass squid althogether because of the NTLM authentication require to access share point. Is this the case? Also, what is the iptables statement which I should use before the DNAT statement? I am using wccp and have created a GRE tunnel on the squid box. Reply 41 Hernan November 29, 2006 Excelent guide, It work forme. Thanks. Now I{m working on acl that let a few machines acces msn. Reply 42 woodsturtle November 29, 2006 What guide are you referring to? Reply 43 ReMSiS December 12, 2006 Hello, Really the guide is wonderful and it worked 100% for me and even the clients using it are amazed with its speed. But there is one problem now !!! 
How can we access mail, i.e: Clients using outlook are not enabled to send and recieve mail because the ports is blocked or it is not able to make 101 resolution to the mail server. How can I make the mail work too ? because now only http is working pop3 and smtp is not !!! how can I do that ? Regards, Reply 44 nixcraft December 12, 2006 I think your topic is already answered @ our forum. Reply 45 ReMSiS December 13, 2006 Yes nixcraft answered but still not working right, the script yesterday worked now its not !!! I maybe going crazy… Reply 46 sohan January 2, 2007 I have installed Squid-2.4 on Red Hat Linux enterprise 4 2 Public IPs are available from 2 different ISPs. Now I want to configure Squid so as to apportion traffic among the IPs by destination (external) IP and by source (internal) IP. The aim is to give complete bandwidth available from one ISP to one set of users for thier access to specific URLs. Is there any way to do the same in Squid ? Reply 47 sohan January 2, 2007 Hi All I want to put quota limit on Squid for users. I want to limit users for specific data limit like If i want to allow users to consume on 4 GB Data through Squid then what i need to do. Is there any additional tool for squid to do this or squid can do this also ? If anybody have solution for this please let me know. thanks Reply 48 Raghuram January 31, 2007 102 Hi, Nice tut. Just what I wanted for an education facility of 45 machines. Have a 2Mbps ADSL connection which I want to share across the LAN. This is my first time with squid. One doubt – my lan ip (eth1) is DHCP driven while eth0 (internet facing) has a static IP. In this case, will squid work? thanks. Reply 49 raghu January 31, 2007 will squid work with DHCP aasigned eth0 and static Ip eth1? Nie tuttorial.thanks Reply 50 nixcraft February 1, 2007 raghu, You can use Squid with DHCP assigned IP Reply 51 Marco A. Barragan February 7, 2007 All this not work for 2.6, in the case of using: http_port x.x.x.x:xx vhost transparent or any combination, the message is “Can’t use transparent and cache in the same port”, if you try to use the cache_peer command, appear an error FATAL: Bundle in line x: cache_peer … So, now you can’t use the server for caching and proxy at the same time :S Reply 52 nixcraft February 7, 2007 #1: You cannot set proxy and transparent http on same port. @2: There is some discussion going on about cache peering @ our forum. HTH Reply 53 Clay February 8, 2007 103 I’m trying to setup squid transparently on a box that has one network interface, but is plugged into a hub between the Internet connection and the switch that the clients are on. (I realize this is not ideal, but it’s what I have to work with.) Can anyone point me in the right direction? Reply 54 rakesh February 9, 2007 sir well i have one problem, i am one system with two ether lan card one connected to Public ip and another with local network. what i want is if any exterbal client send an request on port 80, that request should be redirect to my local DNS. how can it be possible. another thing i have two domain mydomain.com (local) and another http://www.com (internet). now if any client request to http://www.com it request should be redirect to mydomain.com. can it be possible, if possible plz send me the solution Reply 55 raghu February 11, 2007 Hi vivek, Can squid be set up on a machine different from the internet gateway machine? I have a DHCP (FC5) server on which I want to set up squid. 
My internet gateway (ADSL) machine runs Windows Xp and I don’t want to disturb it. Thanks. Reply 56 Marco A. Barragan February 17, 2007 But how i can configure it? any idea? how to activate the cache for my network? any can help me to make the right stuff? I’m redirecting the port 80 to 3128 with iptables (old style squid) and using this: http_port 10.42.0.1:3128 transparent half_closed_clients on visible_hostname 201.234.228.139 coredump_dir /var/spool/squid 104 Where 10.42.0.1 is the network interface (eth0) conected to lan, and eth1 is the Wan lan. I want make the cahce for my users with squid, and also using proxy, but i can’t go to every client to configure proxy setting, need transparent, and cache, i try all, i use this: http_port 10.42.0.1:3128 transparent cache_peer 127.0.0.1 parent 3128 3130 originserver half_closed_clients on visible_hostname 201.234.228.139 coredump_dir /var/spool/squid Not work, use all “arrows” that i imagine and noting, can any explain me how to do it? Really thanks a lot for any help. Reply 57 Siva February 19, 2007 how to control my bandwidth using squid proxy Reply 58 Marco A. Barragan February 21, 2007 for bandwidth you can use this: first step configure how many delay pools you going to use, for example if you have 2 types of users (one with big badwidth and others with low bandwidth) you need put this: delay_pools n, in our exaple: delay_pools 2 then you need define the class of bandwidth, there are 3 types, 1, 2, 3, in our example we use the class 1 and 2, for unlimited general and the restricted: delay_class 1 1 delay_class 2 2 then use the parameter to define the velocity, remember, if you want 128 kbps, you need multiply it for 128 to convert to bps: delay_parameters 1 -1/-1 delay_parameters 2 -1/-1 16384/57600 -1 means unlimited second is for 128 and boost of 450 105 last step is defining the acl, in my case: acl localhost src 127.0.0.1/255.255.255.255 acl clientes src 10.42.100.0/255.255.255.0 acl limitados src 10.42.99.0/255.255.255.0 delay_access 1 allow clientes localhost !limitados delay_access 2 allow limitados delay_access 1 deny all delay_access 2 deny all Dunno if is correct but is an example, you can investigate more. Reply 59 bitou February 26, 2007 This fw.proxy is to be started every time the computer is started, manually. Then only transparent proxy will work.Is there a method to do it automatically , so that the script is executed on start up even without the need of the user to log in. Regards Reply 60 nixcraft February 26, 2007 bitou, If you are using RedHat/CentOS/FC Linux type: service iptables save chkconfig iptables on If you are using Debian/Ubuntu Linux read this Reply 61 Coders2020 March 7, 2007 In the past I had serious problems with configuring squid on my local network. I am alrady under university firewall/proxy. Can I configure proxy under proxy(I know it has no pracktical use but just asking for testing purpose) ? Reply 62 Prabir Das March 19, 2007 its good education packeg to us 106 Reply 63 Prashant Soni March 20, 2007 Hi, My name is Prashant. I am Sr.Network Engineer in an ISP. I would like to put a transparent proxy with bridge between our local networks and Internet. I’d tryinn to configure squid transparent proxy with bridge couple of times, but yet not successful. I am explaining the scenario and hope somebody will help me. SCENARIO : We have 2 ip pools in our networks. 1. 128.0.0.0/18 (fake ip) 2. 59.x.x.96/27 (real ip) 3. 
59.x.x.0/27 (Real IP Used in internetwork) We have one mikrotik master router from which both network goes to the radware(which is load balancer and using internetwork ip listed in a cisco). Now I want to put squid between mikrotik and radware (loadbalancer) In my network nobody uses authentications so not needed. When, I configured the squid with trasparent proxy in bridge mod, sometimes it gives me acl errors. But when I changed in squid.conf “access_allow all” , no error comes but page is not loading till done. With this settings I can ping , traceroute to the internet from client addresses also but page is not loading. I’ve done all configuration as stated in below link : http://freshmeat.net/articles/view/1433/ Please guide me regarding this matter. Regards, Prashant Reply 64 Nandkishor March 27, 2007 107 Hi, I have configured the DHCP server using ES Linux-4 .It having 2 ethernet cards. eth0 is used dhcp (Lan) & eth1 is connceted to Internet. eth0 using IP 192.x.x.x Netmask 255.255.255.0 Gateway 59.x.x.x (this is IP of eth1) eth1 using Ip 59.x.x.x Netmask 255.255.255.240 Gateway 59.x.x.129 Client M/c’s ping to IP of eth0, also ping to gateway of eth0 & ip of eth1. But not able to ping Gateway of eth1-59.x.x.129 so they are not able to connect to the internet. So plz give me the solution for this. Reply 65 Nandkishor March 30, 2007 Hi, I have configured the transperant proxy with dhcp server. How I block the files for downloading like *.dll & *.mp3 &*.mp4 etc. for a specific time. Reply 66 nixcraft March 30, 2007 Nandkishor, Please see this article Reply 67 xaviero March 30, 2007 how about if i use another PC for router & gateway, then use another PC (SLES installed) just for transparent proxy (DMZ). the proxy already worked, but its not transparent. what should i do with the iptable ? advice plz Reply 68 Nandkishor April 3, 2007 Hi, I have configured the many virtual hosts at one server and added same big 108 file in that all virtual hosts. But because of this big file more size is required. So it is posible to me create one folder on that server, put that file & give the path of that folder in the all virtual hosts. But How it is possible? Plz give me the solution for this. Reply 69 Nandkishor April 3, 2007 Hi, I have see the article for blocking of the .dll, .mp3 ,mp4, .exe & many files downloades, & do the configuration. But this is not working to block the files downloading. Plz give me the solution for this. Reply 70 Gurpinder Singh April 7, 2007 hello everybody i want to configure a squid server on fedora core 5. i want to that range of ip address is 192.168.1.1 – 192.168.1.60, and 192.168.1.101192.168.1.160 . internet is running on this client machines. not running internet on others ip address i.e 192.168.1.61 – 192.168.1.100. please urgent reply me on my mail address. Gurpinder Singh Reply 71 Alex Ling April 10, 2007 Hi all i would like to know how to forward HTTP request to others proxy (like privoxy). Thanks. Reply 72 mark April 26, 2007 Good day. I’m currently running squid 2.5 on my centOS server… I needed authentication for my users before accessing the internet (80, 21, 443, etc) so I configured it correspondingly. However, one of my clients needs to access an ftp server which enforces a username and password 109 authentication. Squid tries to connect using an anonymous user rather than prompting for a password… My question being: How could I enable user authentication to public ftp servers if my machine is behind a squid proxy server? I’d appreciate your best effort. 
Thanks in advance. Reply 73 pankaj chauhan April 28, 2007 hello every body, i have a squid proxy server my server ip is 192.168.0.1 my client ip is 192.168.0.2 to 192.168.0.240 internet is working proper on client can it possible that first 30 client (192.168.0.2-192.168.0.30) get more bandwith than rest client plz told me wat change will do on squid.conf file for it. Reply 74 Tapan May 3, 2007 how to prevent bypassing sarg and dansguardian Reply 75 tushar May 9, 2007 Hi All My name is tushar and i want to make proejct on squid proxy server, because I want to submit the complet project on squid proxy server. Thanks. Tushar Raut Reply 76 Frank May 10, 2007 Is there any indication to use some sort of virus/malware filter in this setup, aka, HAVP – HTTP. http://www.server-side.de/ Cheers! Frank Reply 77 chandrakant May 24, 2007 110 Hi Thanks for the fw.proxy file. after enableing this file i’m able to run my system as router and proxy server. But after restart server I’m reciveing so many logs messages. Please have look and tel me how can block them. Due to this my server responding slovely… System log:May 24 12:45:06 pune dbus: Can’t send to audit system: USER_AVC pid=2658 uid=81 loginuid=-1 message=avc: denied { send_msg } for scontext=root:system_r:unconfined_t tcontext=user_u:system_r:initrc_t tclass=dbus May 24 11:28:21 pune kernel: IN=eth0 OUT= MAC=ff:ff:ff:ff:ff:ff:00:14:85:64:d7:3e:08:00 SRC=10.20.204.70 DST=10.20.255.255 LEN=78 TOS=0×00 PREC=0×00 TTL=128 ID=29613 PROTO=UDP SPT=137 DPT=137 LEN=58 May 24 11:28:22 pune kernel: IN=eth0 OUT= MAC=ff:ff:ff:ff:ff:ff:00:14:85:64:d7:3e:08:00 SRC=10.20.204.70 DST=10.20.255.255 LEN=78 TOS=0×00 PREC=0×00 TTL=128 ID=29615 PROTO=UDP SPT=137 DPT=137 LEN=58 May 24 11:28:23 pune kernel: IN=eth0 OUT= MAC=ff:ff:ff:ff:ff:ff:00:14:85:64:d7:3e:08:00 SRC=10.20.204.70 DST=10.20.255.255 LEN=78 TOS=0×00 PREC=0×00 TTL=128 ID=29616 PROTO=UDP SPT=137 DPT=137 LEN=58 Regards, Chandrakant Reply 78 csbot May 24, 2007 chandrakant, Remove last line: iptables -A INPUT -j LOG BTW, log will not slow down your server. Reply 79 cedric May 27, 2007 111 your instructions work good but i can’t connect to my network printer and another server on my lan. also having problem setting up static ip for eth0. i followed the instruction from the link you gave. i tried to do it several times and always had to go back to using dhcp. i need some help and what gateway would i use for eth0? Reply 80 Chandrakant May 31, 2007 Hi, One more problem i am facing with above configuration. I am not able to use web access of exchange 2003 server. and office scan http url can any buddy help me resolve this. Chandrakant Reply 81 bhupesh karankar June 1, 2007 Hello Friend, i am bhupesh karankar, i have problem in squid. as above, i have implement squid in my server. but still my client not able to access mail via outlook with squid. wating for ur reply i have same configuration as above. wating for ur reply, need help Bhupesh Karankar [email protected] 0998110488 Reply 82 Brent June 1, 2007 Thanks for posting the transparent proxy script. It works very well. I like the way you choose to close everything and only open what you need. I do need to open a few ports, like https (443) and possibly one or two more (ssh). Can you post how you would do this? Thanks. Reply 112 83 vivek June 1, 2007 Find line # DROP everything and Log it Add your iptables rules before that line. Remember you must deal with eth0 and eth1, otherwise you will create a new security issue. 
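For example, rules of the following kind would go into fw.proxy just above the "# DROP everything and Log it" lines; the two services shown (SSH and HTTPS on the Internet-facing interface) are only hypothetical illustrations of the advice above — open ports strictly according to your own policy, since the LAN side is already fully allowed by the script:
# Hypothetical: allow inbound SSH to the proxy box from the Internet side
iptables -A INPUT -i $INTERNET -p tcp --dport 22 -j ACCEPT
# Hypothetical: allow inbound HTTPS only if this box also serves something over SSL
iptables -A INPUT -i $INTERNET -p tcp --dport 443 -j ACCEPT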
Reply 84 bhupesh karankar June 2, 2007 hello, this is nice script. but when i use this, it blocked smb and squid and my web server, what to do. wating for reply [email protected] bhupesh karankar Reply 85 vivek June 2, 2007 bhupesh, Open those port using iptables rules as this script locks down eveything. read my comment # 82. If you have more questions please post to our forum. Reply 86 Maroon Ibrahim June 11, 2007 Prashant!!! allow access for ICP Regards Reply 87 Nandkishor June 11, 2007 Hi, I configured the transperant proxy & also set the IPtables. This is working fine. But recentaly I trust by a trouble. If I try to open any site like gmail.com or any other sites. Some time that are works but some time they give follwing error. 113 The requested URL could not be retrieved While trying to retrieve the URL: http://gmail.com/ The following error was encountered: Unable to determine IP address from host name for gmail.com The dnsserver returned: Refused: The name server refuses to perform the specified operation. This means that: The cache was not able to resolve the hostname presented in the URL. Check if the address is correct. Your cache administrator is root. Pleas give me the solution for this. Regards, Nandkishor Reply 88 Linuxnewbie June 11, 2007 Hi, I need to install transparent proxy with squid caching, but my eth0 is connected using DHCP, so what all changes need to be done ? Thank you for publishing your experiences and configurations… Regards Reply 89 vivek June 11, 2007 Hi Linuxnewbie, Make sure eth0 always get same IP using eth0, if not possible modify a script to obtain IP address using following statement: ifconfig eth0 | grep 'inet addr:' | cut -d':' -f2 | awk '{ print $1}' Set SQUID_SERVER as follows: SQUID_SERVER=$(ifconfig eth0 | grep 'inet addr:' | cut d':' -f2 | awk '{ print $1}') 114 NOTE: you only need to use above, if SQUID_SERVER ip is dynamic; otherwise it should work out of box. HTH Reply 90 linxnewbie June 12, 2007 Thanks for the reply…so no need to make any changes in the IPTABLES, right ? Reply 91 chandar June 25, 2007 Hi Vivek, I configured squid/2.6.STABLE12 with the help of your script file. below is my N/W scenario client–> Squid + Router –> pix–> Router–> Internet. In this case everything is working very fine. For few minutes. After sometime client not able to ping gateway that is my squid server. But client able to ping next hope ip I’s Pix ip or router ip. This problem is resolved when I restart network service of Linux machine. and it’s happened every time. Please find below linux machine iptables snap. 
# squid server IP SQUID_SERVER=”10.30.200.1″ # Interface connected to Internet INTERNET=”eth0″ # Interface connected to LAN LAN_IN=”eth1″ # Squid port SQUID_PORT=”3128″ # DO NOT MODIFY BELOW # Clean old firewall iptables -F iptables -X iptables -t nat -F iptables -t nat -X iptables -t mangle -F iptables -t mangle -X # Load IPTABLES modules for NAT and IP conntrack support 115 modprobe ip_conntrack modprobe ip_conntrack_ftp # For win xp ftp client modprobe ip_nat_ftp echo 1 > /proc/sys/net/ipv4/ip_forward # Setting default filter policy iptables -P INPUT DROP iptables -P OUTPUT ACCEPT # Unlimited access to loop back iptables -A INPUT -i lo -j ACCEPT iptables -A OUTPUT -o lo -j ACCEPT # Allow UDP, DNS and Passive FTP iptables -A INPUT -i $INTERNET -m state –state ESTABLISHED,RELATED -j ACCEPT # set this system as a router for Rest of LAN iptables –table nat –append POSTROUTING –out-interface $INTERNET -j MASQUERADE iptables –append FORWARD –in-interface $LAN_IN -j ACCEPT # unlimited access to LAN iptables -A INPUT -i $LAN_IN -j ACCEPT iptables -A OUTPUT -o $LAN_IN -j ACCEPT # DNAT port 80 request comming from LAN systems to squid 3128 ($SQUID_PORT) aka transparent proxy iptables -t nat -A PREROUTING -i $LAN_IN -p tcp –dport 80 -j DNAT – to $SQUID_SERVER:$SQUID_PORT # if it is same system iptables -t nat -A PREROUTING -i $INTERNET -p tcp –dport 80 -j REDIRECT –to-port $SQUID_PORT # DROP everything and Log it iptables -A INPUT -j LOG iptables -A INPUT -j DROP Reply 92 chandar June 25, 2007 Hi Vivek, I configured squid/2.6.STABLE12 with the help of your script file. below is my N/W scenario client–> Squid + Router –> pix–> Router–> Internet. 116 In this case everything is working very fine. For few minutes. After sometime client not able to ping gateway that is my squid server. But client able to ping next hope ip I’s Pix ip or router ip. This problem is resolved when I restart network service of Linux machine. and it’s happened every time. Please help me to resolve this issue. Regards, Chandru Reply 93 shellyacs June 27, 2007 Need help. I have read the forum on transparent proxy. I have followed it to the letter. A cannot get it to work. I am using Suse linux 10.2. I can get to the internet from the workstations, but only if I setup the squid server as a proxy in IE. Any help would be greatly appreciated. Thanks Reply 94 Amrendra July 6, 2007 I have used above kind of firewall (IPTABLE), I don’t want to use transparent proxy because we need to use authentication, and if I am allowing forward and unlimited access to LAN then they are also able to bypass the proxy to use internet, So can anyone give me solution that, for accessing websites ( http/https) people must go through Proxy and its authentication, and rest for everything they should be allowed from the LAN rest everything includes (FTP , DNS ) respose. Thanks Amrendra. Reply 95 forweb July 9, 2007 I had got some errors when I used the instructions above, 400 something like syntax of the request was wrong… The script above works great but this is what I have to add to get it to work on my ubuntu 7.04 squid.conf: http_port 80 http_port 192.168.1.9:3128 transparent 117 (this is NIC connected to internet) acl jamal_net src 192.168.2.0/24 (this LAN Nic) http_access allow jamal_net http_access allow localhost Change your IP’s to comply with you above script. start your squid.conf start your fw-proxy add it to rc.local so it will boot at startup. 
Reply 96 oj July 16, 2007 Execellent write-up.Very helpful to me Reply 97 Slavko July 26, 2007 From SquidFaq For Squid-2.6 and Squid-3.0 you simply need to add the keyword transparent on the http_port that your proxy will receive the redirected requests on as the above directives are not necessary and in fact have been removed in those releases: http_port 3128 transparent Reply 98 eq1425 July 29, 2007 hi all, will this shel script work even if i install a redirector program(i.e squidguard)on squid?and how?? thanks Reply 99 John August 5, 2007 I work in a public library and we provide wireless access to our patrons. No configuration is required on their laptops because transparent proxying is in effect, via a rule in SUSE Firewall. I’m using SUSE 10.2, SQUID, Dansguardian, and the SUSE2 Firewall. 118 Is it possible with my existing setup to also forward users to a custom home page that I have set up? This page will have our wireless policy, etc. on it. If so, how exactly would this be done? Thanks! Reply 100 ankush August 7, 2007 how configure best squid server on RHEL 5 i have create in RHEL 4 but i have problem about RHEL 5 Reply 101 Mani August 8, 2007 Hi, when i execute squid -z.the following error is appear. FATAL: Could not determine fully qualified hostname. Please set ‘visible_hostname’ Squid Cache (Version 2.6.STABLE13): Terminated abnormally. CPU Usage: 0.004 seconds = 0.004 user + 0.000 sys Maximum Resident Size: 0 KB Page faults with physical i/o: 0 Aborted but i configure visible_hostname myhostname in my squid.conf file.still the same error comming again.what can i do? Reply 102 IRFAN August 13, 2007 any one have squid configaration than can use any where Reply 103 Mark Ng August 15, 2007 I have a box running public IP on eth0 and private IP on eth1. Everything seems to be working but my sites running apache can’t be accessed via their Public IP anymore. However I can still access them via eth1. Any help is appreciated. Reply 119 104 Abdul Latif August 17, 2007 Sir, is there any solution regarding linux Squid Proxy which responsible to handle two ADSL internet connection. combining bandwidth, Provide loadsharing, feed back if one connection goes down. Reply 105 Elliott August 20, 2007 Thanks for your excellent site. I have followed your guide and set this up successfully. I will recommend this guide to anyone setting up a squid server. Elliott Systems Administrator Reply 106 Chris August 26, 2007 What about setting this up using the latest version of Squid? Fedora 6 comes with squid but the parameters mentioned above are not there. They have been updated. Any help? Reply 107 Chris August 26, 2007 DUH, i see the post explaining it. Disregard my last post Reply 108 vijay August 30, 2007 I like to know how to configure ftp and proxy for my internal use and external( internet) ftp with proxy. Please help Reply 109 king of the internet September 18, 2007 You said allowing port 443 out solves your problems, but in fact it creates more. Now users can simply use SSL-based web proxies to tunnel past 120 your proxy. This means no logging, control, nothing. For example, try https://vtunnel.com/ Reply 110 vivek September 19, 2007 King, You cannot redirect port 443 with a transparent proxy and this the only solution. Other option is disable a transparent proxy and use port such as 3128. HTH Reply 111 Saji Alexander October 22, 2007 Hi, I had gone thru your notes. It is very good and interesting. I have 2 network cards in my squid proxy server on centos. 
I need all the users to access only certain sites during the office hours and after office hours they can access anysites as they wish. This should not be applicable for managers who can access anysite at anytime. This I made it but when I configured squid I had given the port 8080 instead of 3128 the default port. The end users if the remove the proxy (ip of squid server) then they can access any site during the office hours. How to disable this ???? Something to do with firewall. I tried but I failed. I am pasting it can you correct it. $IPTABLES -t nat -A PREROUTING -i $INTIF -p tcp –dport 80 -j DNAT –to $SQUID_SERVER:$SQUID_PORT squid_server has two network card. One is having internal ip and the other external ip. I had give external ip for SQUID_SERVER. SQUID_PORT is 8080 Thanks and Regards, Saji Alexander. Reply 121 112 Wolfox October 25, 2007 Anyone knows how to get this instructions working on SuSe 9 Enterprise Edition…. It looks like some of the syntax doesn’t work. Because in my case I cannot get it to work. Please help, I’m a newbie that is very eager to learn about proxying. Please Help… Thanks in advance Reply 113 hanz October 25, 2007 I have read your instruction but I have the same question as Saji ALexander. I have been trying to figure this out but failed. Is it possible to force all browser on a server running transparent proxy to use its proxy service for its web traffic? The server has dual interface. Thanks hanz Reply 114 vivek October 25, 2007 @Saji, You have to define TIME based ACL for squid to put time based restrictions. @hanz, yup, this config force all http traffic via squid. Reply 115 harish November 24, 2007 Hi Dear, Thanks or very simple steps. Harish Reply 116 fmstereo November 28, 2007 I have configured the transparent proxy but not all users are able to use it. Most of them must have the proxy in their browsers, just a few are able to 122 conect without having to configure. And is very slow with transparent proxy. Any sugestions? Reply 117 Babu Ram Dawadi December 12, 2007 thanks for ur three steps to create transparent proxy but i am not sure it works with squid 2.6 stables 13. because i tried ur step on this squid 2.6. may be this article suit to squid 2.5. :) hi fmstereo>>i think u have to enable one options on ur proxy which is previously off like the following httpd_accel_no_pmtu_disc off change it to httpd_accel_no_pmtu_disc on Reply 118 Atman December 12, 2007 Why not use only one utility to filter out comments and empty lines when going through squid.conf: grep -v ^# /etc/squid/squid.conf | grep -v ^$ or if you prefer sed: sed ‘/ *#/d; /^ *$/d’ < /etc/squid/squid.conf Reply 119 arun December 13, 2007 give me a step of linux centos proxy setting and iptables confige and many more service starting Reply 120 Vijay Godiyal December 20, 2007 Hello Friends, Need help from you… I had configured my squid server, squid+dansguardian with Linux RHCL4 .. its working for a hrs abustaly fine but abt 1 hrs its getting slow and get stoped work .. i m not able to understand the problem. normail proxy is working fine… but when it get started with dansguardian then problenm comes…. 123 can someone help me out on this i have squid version squid2.5.STABLE6-3.4E.11 and dansG is dansguardian-2.8.0.6-1.2.el4.rf following is the conf file … dansguardian…. 
################################################# DansGuardian config file for version 2.8.0 # **NOTE** as of version 2.7.5 most of the list files are now in dansguardianf1.conf # Web Access Denied Reporting (does not affect logging) # # -1 = log, but do not block – Stealth mode # 0 = just say ‘Access Denied’ # 1 = report why but not what denied phrase # 2 = report fully # 3 = use HTML template file (accessdeniedaddress ignored) – recommended # reportinglevel = 3 # Language dir where languages are stored for internationalisation. # The HTML template within this dir is only used when reportinglevel # is set to 3. When used, DansGuardian will display the HTML file instead of # using the perl cgi script. This option is faster, cleaner # and easier to customise the access denied page. # The language file is used no matter what setting however. # languagedir = ‘/etc/dansguardian/languages’ # language to use from languagedir. language = ‘ukenglish’ # Logging Settings # 0 = none 1 = just denied 2 = all text based 3 = all requests loglevel = 2 # Log Exception Hits # Log if an exception (user, ip, URL, phrase) is matched and so # the page gets let through. Can be useful for diagnosing # why a site gets through the filter. on | off logexceptionhits = on 124 # Log File Format # 1 = DansGuardian format 2 = CSV-style format # 3 = Squid Log File Format 4 = Tab delimited logfileformat = 1 # Log file location # # Defines the log directory and filename. #loglocation = ‘/var/log/dansguardian/access.log’ # Network Settings # # the IP that DansGuardian listens on. If left blank DansGuardian will # listen on all IPs. That would include all NICs, loopback, modem, etc. # Normally you would have your firewall protecting this, but if you want # you can limit it to only 1 IP. Yes only one. filterip = # the port that DansGuardian listens to. filterport = 3128 # the ip of the proxy (default is the loopback – i.e. this server) proxyip = 172.16.24.12 # the port DansGuardian connects to proxy on proxyport = 8080 # accessdeniedaddress is the address of your web server to which the cgi # dansguardian reporting script was copied # Do NOT change from the default if you are not using the cgi. # accessdeniedaddress = ‘http://YOURSERVER.YOURDOMAIN/cgibin/dansguardian.pl’ # Non standard delimiter (only used with accessdeniedaddress) # Default is enabled but to go back to the original standard mode dissable it. nonstandarddelimiter = on # Banned image replacement # Images that are banned due to domain/url/etc reasons including those # in the adverts blacklists can be replaced by an image. This will, # for example, hide images from advert sites and remove broken image # icons from banned domains. # 0 = off 125 # 1 = on (default) usecustombannedimage = 1 filtergroupslist = ‘/etc/dansguardian/filtergroupslist’ # Authentication files location bannediplist = ‘/etc/dansguardian/bannediplist’ exceptioniplist = ‘/etc/dansguardian/exceptioniplist’ banneduserlist = ‘/etc/dansguardian/banneduserlist’ exceptionuserlist = ‘/etc/dansguardian/exceptionuserlist’ # Show weighted phrases found # If enabled then the phrases found that made up the total which excedes # the naughtyness limit will be logged and, if the reporting level is # high enough, reported. on | off showweightedfound = on # Weighted phrase mode # There are 3 possible modes of operation: # 0 = off = do not use the weighted phrase feature. # 1 = on, normal = normal weighted phrase operation. # 2 = on, singular = each weighted phrase found only counts once on a page. 
# weightedphrasemode = 2 # Positive result caching for text URLs # Caches good pages so they don’t need to be scanned again # 0 = off (recommended for ISPs with users with disimilar browsing) # 1000 = recommended for most users # 5000 = suggested max upper limit urlcachenumber = 5000 # # Age before they are stale and should be ignored in seconds # 0 = never # 900 = recommended = 15 mins urlcacheage = 9000 # Smart and Raw phrase content filtering options # Smart is where the multiple spaces and HTML are removed before phrase filtering # Raw is where the raw HTML including meta tags are phrase filtered # CPU usage can be effectively halved by using setting 0 or 1 # 0 = raw only # 1 = smart only 126 # 2 = both (default) phrasefiltermode = 2 # Lower casing options # When a document is scanned the uppercase letters are converted to lower case # in order to compare them with the phrases. However this can break Big5 and # other 16-bit texts. If needed preserve the case. As of version 2.7.0 accented # characters are supported. # 0 = force lower case (default) # 1 = do not change case preservecase = 0 # Hex decoding options # When a document is scanned it can optionally convert %XX to chars. # If you find documents are getting past the phrase filtering due to encoding # then enable. However this can break Big5 and other 16-bit texts. # 0 = disabled (default) # 1 = enabled hexdecodecontent = 0 # Force Quick Search rather than DFA search algorithm # The current DFA implementation is not totally 16-bit character compatible # but is used by default as it handles large phrase lists much faster. # If you wish to use a large number of 16-bit character phrases then # enable this option. # 0 = off (default) # 1 = on (Big5 compatible) forcequicksearch = 0 # Reverse lookups for banned site and URLs. # If set to on, DansGuardian will look up the forward DNS for an IP URL # address and search for both in the banned site and URL lists. This would # prevent a user from simply entering the IP for a banned address. # It will reduce searching speed somewhat so unless you have a local caching # DNS server, leave it off and use the Blanket IP Block option in the # bannedsitelist file instead. reverseaddresslookups = off 127 # Reverse lookups for banned and exception IP lists. # If set to on, DansGuardian will look up the forward DNS for the IP # of the connecting computer. This means you can put in hostnames in # the exceptioniplist and bannediplist. # It will reduce searching speed somewhat so unless you have a local DNS server, # leave it off. reverseclientiplookups = off # Build bannedsitelist and bannedurllist cache files. # This will compare the date stamp of the list file with the date stamp of # the cache file and will recreate as needed. # If a bsl or bul .processed file exists, then that will be used instead. # It will increase process start speed by 300%. On slow computers this will # be significant. Fast computers do not need this option. on | off createlistcachefiles = on # POST protection (web upload and forms) # does not block forms without any file upload, i.e. this is just for # blocking or limiting uploads # measured in kibibytes after MIME encoding and header bumph # use 0 for a complete block # use higher (e.g. 512 = 512Kbytes) for limiting # use -1 for no blocking #maxuploadsize = 512 #maxuploadsize = 0 maxuploadsize = -1 # Max content filter page size # Sometimes web servers label binary files as text which can be very # large which causes a huge drain on memory and cpu resources. 
# To counter this, you can limit the size of the document to be # filtered and get it to just pass it straight through. # This setting also applies to content regular expression modification. # The size is in Kibibytes – eg 2048 = 2Mb # use 0 for no limit maxcontentfiltersize = 256 # Username identification methods (used in logging) # You can have as many methods as you want and not just one. The first one # will be used then if no username is found, the next will be used. 128 # * proxyauth is for when basic proxy authentication is used (no good for # transparent proxying). # * ntlm is for when the proxy supports the MS NTLM authentication # protocol. (Only works with IE5.5 sp1 and later). **NOT IMPLEMENTED** # * ident is for when the others don’t work. It will contact the computer # that the connection came from and try to connect to an identd server # and query it for the user owner of the connection. usernameidmethodproxyauth = on usernameidmethodntlm = off # **NOT IMPLEMENTED** usernameidmethodident = off # Preemptive banning – this means that if you have proxy auth enabled and a user accesses # a site banned by URL for example they will be denied straight away without a request # for their user and pass. This has the effect of requiring the user to visit a clean # site first before it knows who they are and thus maybe an admin user. # This is how DansGuardian has always worked but in some situations it is less than # ideal. So you can optionally disable it. Default is on. # As a side effect disabling this makes AD image replacement work better as the mime # type is know. preemptivebanning = on # Misc settings # if on it adds an X-Forwarded-For: to the HTTP request # header. This may help solve some problem sites that need to know the # source ip. on | off forwardedfor = off # if on it uses the X-Forwarded-For: to determine the client # IP. This is for when you have squid between the clients and DansGuardian. # Warning – headers are easily spoofed. on | off usexforwardedfor = off # if on it logs some debug info regarding fork()ing and accept()ing which # can usually be ignored. These are logged by syslog. It is safe to leave 129 # it on or off logconnectionhandlingerrors = on # Fork pool options # sets the maximum number of processes to sporn to handle the incomming # connections. Max value usually 250 depending on OS. # On large sites you might want to try 180. maxchildren = 120 # sets the minimum number of processes to sporn to handle the incomming connections. # On large sites you might want to try 32. minchildren = 8 # sets the minimum number of processes to be kept ready to handle connections. # On large sites you might want to try 8. minsparechildren = 4 # sets the minimum number of processes to sporn when it runs out # On large sites you might want to try 10. preforkchildren = 6 # sets the maximum number of processes to have doing nothing. # When this many are spare it will cull some of them. # On large sites you might want to try 64. maxsparechildren = 32 # sets the maximum age of a child process before it croaks it. # This is the number of connections they handle before exiting. # On large sites you might want to try 10000. maxagechildren = 500 # Process options # (Change these only if you really know what you are doing). # These options allow you to run multiple instances of DansGuardian on a single machine. # Remember to edit the log file path above also if that is your intention. 
# IPC filename # # Defines IPC server directory and filename used to communicate with the log process. ipcfilename = ‘/tmp/.dguardianipc’ 130 # URL list IPC filename # # Defines URL list IPC server directory and filename used to communicate with the URL # cache process. urlipcfilename = ‘/tmp/.dguardianurlipc’ # PID filename # # Defines process id directory and filename. #pidfilename = ‘/var/run/dansguardian.pid’ # Disable daemoning # If enabled the process will not fork into the background. # It is not usually advantageous to do this. # on|off ( defaults to off ) nodaemon = off # Disable logging process # on|off ( defaults to off ) nologger = off # Daemon runas user and group # This is the user that DansGuardian runs as. Normally the user/group nobody. # Uncomment to use. Defaults to the user set at compile time. # daemonuser = ‘nobody’ # daemongroup = ‘nobody’ # Soft restart # When on this disables the forced killing off all processes in the process group. # This is not to be confused with the -g run time option – they are not related. # on|off ( defaults to off ) softrestart = off Reply 121 Robert December 22, 2007 I am building a rather unique Proxy server I need to be able to forward requests by maching the destintaions to 3 lists: - blacklist -> Block, - freelist -> Forward to upstreem Proxy with Spesified username and 131 password same for all, - DirrectAccesslist – Retreve directly, What ever is remaining is forward to the upstreem proxy which will request username and password for charging purposes. The AD and charging Side of this I will work out later, it is the routeing with creds by list lookup that I have no idea where to start.. Site info 300 computers, 1000 users, 40M internet link I have a Dual Xeon 1.6 with 2G ram SCSI HW Raid HDD Server for the task (retired Ms Server) Ideas? Thanks Reply 122 Sai Wunna Aung January 5, 2008 hello all friends, pls help me. now i created squid 2.6 server on windows server 2003. but our ISP is burnned some websites.e.g http://mail.yahoo.com, https://mail.google.com .so, i want to open that web site and other to squid’s redirect setting. i want to know http redirect setting of squid 2.6. best reguards, Sai Wunna Aung Network Technician Reply 123 Ali Bhai January 8, 2008 hey, nice work. I appreciate the way u spread your knowledge just alike a teacher spreads to new bie’s. Thx Again Reply 124 Ambot January 11, 2008 Hey guys, How do i able to open the ports in proxy? i have the problems on my network, in which i can’t able to view webcam and voice in the yahoo messenger… 132 As what i know 5000-5010 used for voice both tcp and udp while 5100 for video as tcp… I put it in Safe_ports but it seems not working… And also i’m not able to upload files but good downloadings…. Reply 125 Sajid January 11, 2008 Hi, Please help me to solve this problem. i have four network cards in linux machine 3 NC for WAN 1 for local LAN my squid is sending all the internet traffic to only on one network card other two are free its is possible that squid bind three wan NC and combine the Internet. thanks Reply 126 Arulkumar January 19, 2008 how to manage users browsing time quotas by squid. Example: Set a limit of 1 hour per day for the user Reply 127 dennyhalim January 24, 2008 dual xeon with 8 gig ram? how many (hundreds?) users this monster serve??? i’m using old refurbished p3 with 384meg ram serving 50+ heavy downloaders users with no problem. and, with ipcop, it only takes TWO clicks to activate transparent proxy from its web gui. 
off course, you learn nothing with ipcop. coz it’s simply usable and minimal learning curve. you’ll learn a lot from getting dirty on cli. :) Reply 128 Mangal January 31, 2008 133 How can we block PC using Mac addresses ? I tried by: – acl block arp 12:23:43:df:32:df but my squid does not know keyword arp for solving this i tried to rebuild it but i failed can u help me to rebuild ? Reply 129 vivek January 31, 2008 Mangal, See our Squid MAC Filtering FAQ Reply 130 Anas January 31, 2008 Dear all Need Help …. I have Squid 2.6 STABLE6 Actually when I add httpd_accel_host virtual httpd_accel_port 80 httpd_accel_with_proxy on httpd_accel_uses_host_header on acl Tiajri src 10.0.0.0/24 http_access allow localhost http_access allow Tijari and when I tried to Stop And Start Squid service it gaves me Faild to start Faild …. please help me Reply 131 Pirkia.lt admin February 2, 2008 Simple script to save your users from badware: #!/bin/bash URL0=http://www.mvps.org/winhelp2002/hosts.txt URL1=http://everythingisnt.com/hosts SQUIDBADWARE=/etc/squid/badware_list BADWARESTATS=/etc/squid/badware_stats 134 wget $URL0 -O /tmp/SQUIDBADWARE0 -o /dev/null wget $URL1 -O /tmp/SQUIDBADWARE1 -o /dev/null BADWARE0=`cat /tmp/SQUIDBADWARE0` echo "$BADWARE0" >> /tmp/SQUIDBADWARE1 cat /tmp/SQUIDBADWARE1 | grep 127.0.0.1 | sed 's/127.0.0.1 //g' > /tmp/SQUIDBADWARE2 cat /tmp/SQUIDBADWARE2 | grep -v localhost | cut -d "#" -f 1 > /tmp/SQUIDBADWARE3 rm $SQUIDBADWARE.backup mv $SQUIDBADWARE $SQUIDBADWARE.backup cp /tmp/SQUIDBADWARE3 $SQUIDBADWARE SUM=`wc -l $SQUIDBADWARE` DATE=`date +%Y-%m-%d` echo "$DATE $SUM" >> $BADWARESTATS rm /tmp/SQUIDBADWARE0 /tmp/SQUIDBADWARE1 /tmp/SQUIDBADWARE2 /tmp/SQUIDBADWARE3 /etc/init.d/squid reload > /dev/null To squid.conf add/update following lines: acl BADWARE_LIST_1 dstdomain url_regex -i "/etc/squid/badware_list" deny_info ERR_BADWARE_ACCESS_DENIED BADWARE_LIST_1 ….. http_access deny BADWARE_LIST_1 http_access deny !Safe_ports BADWARE_LIST_1 http_access deny CONNECT !SSL_ports Don’t forget add this script to your crontab crontab –e 30 23 * * * /data/scripts/squidguard.sh Reply 132 Faisal February 5, 2008 Dear I am using CentOS Linux server here I don’t need to define proxy in squid.conf. 135 kindly guide me how to use without ISP proxy. also i have 3 DSL modems connected in office and i need to configure all together if 1 is not working it switch to other automatically. your quick response will be higly appreciative. Best Regards. Faisal Reply 133 Santosh February 8, 2008 Hi, This site is good with good comments. can you help me. i am using the same config. Pls clear my 2 doubts. 1.after making proxy transparent. the sites which are blocked in squidblock.acl does not works from client pc. (again if we use a proxy server then only it works). 2. how to block a website (such as http://www.youtube.com) using iptables. regards, Santosh Reply 134 Santosh February 8, 2008 hello, pls reply ASAP. regards, santosh Reply 135 nandhakumar February 22, 2008 Hi all I configured squid proxy in our office but problem is outlook express not working please help me out.. regards nandha Reply 136 136 vaibhavraj June 29, 2010 Hi, Just put IP of outlook machine as a acl in squid.conf. It will work. Regards, Vaibhavraj Reply 137 Sulman March 5, 2008 Dear, i have 3 NIC in Squid Proxy, One connect with Lan and other 2 connect with 2 DSL modems. I want to combine more than 1 DSL link speed togetehr. Kindly Helo me regarding this what will be need to configure in Linux. 
Halp me ASAP Thanks Reply 138 Jit March 13, 2008 Hi, I’ve configured my Squid as par your guidence but am nt able to access any website from client nor I’m able to ping. though I’m able to open some of websites from their IP and even able to open control panel of my ADSL Router! I’ve no clue where things are wrong! :( I wud highly be grateful to you help me to fix this issue! here is the complete scenario of my network [LAN] —> e1 [ SQUID ] e0 —-> [ADSL] 192.168.2.0 [LAN] 192.168.2.1 [e1 of squid] 192.168.1.2 [e0 of squid] 192.168.1.1 [adsl router ip] waiting despreatly! Rock on Jit 137 Reply 139 Yusuf March 15, 2008 I have configured SQUID PROXY with TRANSPARENT using this site help Thanks Reply 140 gautam April 8, 2008 I had gone throug your notes. It is very good and interesting. I have 2 network cards in my squid proxy server on RHEL5. I need all the users to access only certain sites during the office hours and after office hours they can access any sites as they wish. This should not be applicable for managers who can access any site at anytime. This I made it but when I configured squid I had given the port 8080 instead of 3128 the default port. The end users if the remove the proxy (ip of squid server) then they can access any site during the office hours. How to disable this ???? Something to do with firewall. I tried but I failed. I am pasting it can you correct it. $IPTABLES -t nat -A PREROUTING -i $INTIF -p tcp –dport 80 -j DNAT –to $SQUID_SERVER:$SQUID_PORT squid_server has two network card. One is having internal ip and the other external ip. I had give external ip for SQUID_SERVER. SQUID_PORT is 8080 Please help me.. It is very urgent. Thanks and Regards, Reply 141 flex April 11, 2008 I have a clarkconnect linux box am not that good in linux but can configure when given the example. My network has layer three switch which does the routing for all Vlans. I have created a specia Vlan where all traffic fron the LAN Vlans is routed, 138 coonected this node to CC box LAN interface. Also i have added the static routes on the CC box and all vlans can access the internet properly. But i want to use proxy. WHEN I START THE SQUID PROCESS it block all outgoing traffic and gives me the ip and port to configure as proxy on brower settings , that i do but still cannt connect. 
here is a file for my routes Adding extra LANs on Clark Connect #/etc/system/network file EXTRALANS=”10.0.2.0/24 10.0.3.0/24 10.0.4.0/24 10.0.5.0/24 10.0.6.0/24 10.0.7.0/24 10.0.8.0/24 10.0.9.0/24 10.0.10.0/24 10.0.11.0/24 10.0.12.0/24 10.0.13.0/24 10.0.14.0/24 10.0.15.0/24 10.0.16.0/24 10.0.17.0/24 10.0.18.0/24 10.0.19.0/24 10.0.20.0/24 10.0.21.0/24 10.0.22.0/24 10.0.23.0/24 10.0.24.0/24 10.0.25.0/24 10.0.26.0/24 10.0.27.0/24 10.0.28.0/24 10.0.29.0/24 10.0.30.0/24 10.0.31.0/24 10.0.32.0/24 10.0.33.0/24 10.0.34.0/24 10.0.35.0/24 10.0.36.0/24 10.0.37.0/24 10.0.38.0/24 10.0.39.0/24″ #Adding Static routes to Clark Connect for Vlans to work with proxy #This should work #/etc/sysconfig/network-scripts/route-eth1 10.0.2.0/24 via 10.2.56.2 10.0.3.0/24 via 10.2.56.2 10.0.4.0/24 via 10.2.56.2 10.0.5.0/24 via 10.2.56.2 10.0.6.0/24 via 10.2.56.2 10.0.7.0/24 via 10.2.56.2 10.0.8.0/24 via 10.2.56.2 10.0.9.0/24 via 10.2.56.2 10.0.10.0/24 via 10.2.56.2 10.0.11.0/24 via 10.2.56.2 10.0.12.0/24 via 10.2.56.2 10.0.13.0/24 via 10.2.56.2 10.0.14.0/24 via 10.2.56.2 10.0.15.0/24 via 10.2.56.2 10.0.16.0/24 via 10.2.56.2 10.0.17.0/24 via 10.2.56.2 10.0.18.0/24 via 10.2.56.2 10.0.19.0/24 via 10.2.56.2 10.0.20.0/24 via 10.2.56.2 139 10.0.21.0/24 via 10.2.56.2 10.0.22.0/24 via 10.2.56.2 10.0.23.0/24 via 10.2.56.2 10.0.24.0/24 via 10.2.56.2 10.0.25.0/24 via 10.2.56.2 10.0.26.0/24 via 10.2.56.2 10.0.27.0/24 via 10.2.56.2 10.0.28.0/24 via 10.2.56.2 10.0.29.0/24 via 10.2.56.2 10.0.30.0/24 via 10.2.56.2 10.0.31.0/24 via 10.2.56.2 10.0.32.0/24 via 10.2.56.2 10.0.33.0/24 via 10.2.56.2 10.0.34.0/24 via 10.2.56.2 10.0.35.0/24 via 10.2.56.2 10.0.36.0/24 via 10.2.56.2 10.0.37.0/24 via 10.2.56.2 10.0.38.0/24 via 10.2.56.2 10.0.39.0/24 via 10.2.56.2 which other file should i configure for web proxy to work IP and port CC is giving for proxy is 10.2.56.2 8080 or 3128 but does not work Reply 142 Sohbet April 27, 2008 hey, nice work. I appreciate the way u spread your knowledge just alike a teacher spreads to new bie’s. Thx Again Reply 143 Ye khaung May 8, 2008 I just test smooth wall express with in built squid. Not only in that squid but all, i can’t find where to put web server chaining i.e forward request to upstream proxy(isp’s proxy). Can any one explain me about following case. 140 My server have 2 NIC card. Eth0 : 10.254.8.1.1 (internet) Eth1 : 192.168.0.1 (Lan) Subnet: 255.255.252.0 D.G : 10.254.8.1 My isp give their proxy ip and port. 203.81.71.148:9090 They prevent direct access. In that case i want a proxy server in my own. I want my clients computers to use proxy of mine but not ISP. (i want them to put my server Eth1 no as a proxy ip and port 9090 in ther IE and fire fox) Can any one give me a sample scripts? Please help me out. Our country is not very familiar with linux. S.O.S Ye Khaung Burma Reply 144 Peyman June 8, 2008 Excellent! Simply it worked. But after running the iptables shell script I could not reach my server via SSH or VNC. I had to comment these 4 lines of the script to get my remote access back. iptables -P INPUT DROP iptables -P OUTPUT ACCEPT iptables -A INPUT -j LOG iptables -A INPUT -j DROP Is it no problem commenting those lines? my squid is working as I want ;) Reply 145 Padani June 28, 2008 When i gave the above config to the squid on a VPS (Debain).The following errors came. 
I didn’t implement that iptable rules 141 root@x:/etc/squid# /etc/init.d/squid restart Restarting Squid HTTP proxy: squid2008/06/28 11:02:10| parseConfigFile: unrecognized: 2008/06/28 11:02:10| parseConfigFile: line 44 unrecognized: ‘httpd_accel_host virtual’ 2008/06/28 11:02:10| parseConfigFile: line 45 unrecognized: ‘httpd_accel_port 80′ 2008/06/28 11:02:10| parseConfigFile: line 46 unrecognized: ‘httpd_accel_with_proxy on’ 2008/06/28 11:02:10| parseConfigFile: line 47 unrecognized: ‘httpd_accel_uses_host_header on’ 2008/06/28 11:02:10| WARNING cache_mem is larger than total disk cache space! FATAL: No port defined Squid Cache (Version 2.6.STABLE5): Terminated abnormally. CPU Usage: 0.005 seconds = 0.000 user + 0.005 sys Maximum Resident Size: 0 KB Page faults with physical i/o: 0 /etc/init.d/squid: line 74: 30103 Aborted start-stop-daemon –quiet –start – pidfile $PIDFILE –chuid $CHUID –exec $DAEMON — $SQUID_ARGS </dev/null Reply 146 ramesh July 25, 2008 Hi, I have a problem I configured Transparent proxy it is working fine. problem with web server wheni tried to access the web page from external network. Error message : ERROR The requested URL could not be retrieved Access Denied. Access control configuration prevents your request from being allowed at this time. Please contact your service provider if you feel this is incorrect Reply 147 nazrin July 29, 2008 dear guys, 142 is there anyway of doing proxy on port 25 and 110. i wanted to test it with spamassassin checking on that port using transparent proxy. thanks, nazrin. Reply 148 Khalid August 2, 2008 I am running FC6, 2.6.STABLE13 and I need help 2 network cards: eth0 on a local LAN address 10.6.9.171 eth1 190.2.168.0.0/24 my server is running DHCP and assigning addresses to local clients But Squid is giving me a headache I did follow the stpes in this tutorial, and my Squid FAILS to start everytime Firt it gave me this error ACL name ‘Safe_ports’ not defined! FATAL: Bungled squid.conf line 19: http_access deny !Safe_ports Squid Cache (Version 2.6.STABLE13): Terminated abnormally. Then when I defiene Safe_ports by adding definitions that I got from another website is does not like the added lines and it asks for a hostname 2008/08/01 16:08:53| parseConfigFile: line 36 unrecognized: ‘http_accel_host virtual’ 2008/08/01 16:08:53| parseConfigFile: line 37 unrecognized: ‘http_accel_port 80′ 2008/08/01 16:08:53| parseConfigFile: line 38 unrecognized: ‘http_accel_with_proxy on’ 2008/08/01 16:08:53| parseConfigFile: line 39 unrecognized: ‘http_accel_uses_host_header on’ FATAL: Could not determine fully qualified hostname. Please set ‘visible_hostname’ Can someone please direct me on what I’m missing here ======================= here is my config file: hierarchy_stoplist cgi-bin ? acl QUERY urlpath_regex cgi-bin \? 143 no_cache deny QUERY hosts_file /etc/hosts refresh_pattern ^ftp: 1440 20% 10080 refresh_pattern ^gopher: 1440 0% 1440 refresh_pattern . 
0 20% 4320 acl all src 0.0.0.0/0.0.0.0 acl manager proto cache_object acl localhost src 127.0.0.1/255.255.255.255 acl to_localhost dst 127.0.0.0/8 acl purge method PURGE acl CONNECT method CONNECT cache_mem 1024 MB http_access allow manager localhost http_access deny manager http_access allow purge localhost http_access deny purge http_access deny !Safe_ports http_access deny CONNECT !SSL_ports acl lan src 10.6.9.177 192.168.0.0/24 http_access allow localhost http_access allow lan http_access deny all http_reply_access allow all icp_access allow all visible_hostname proxytest httpd_accel_host virtual httpd_accel_port 80 httpd_accel_with_proxy on httpd_accel_uses_host_header on coredump_dir /var/spool/squid ================================ – Khalid Reply 149 Jakykong August 7, 2008 I thought I would mention that newer Squid versions (or maybe it’s older ones… I use 2.7) don’t accept the httpd_accel_* entries. Another way to do the same thing, which seems to work the same way, is to use the 144 http_port entry. When you set the port (3128 by default), you can add “transparent” to the end of the line to make the proxy transparent. Reply 150 shantanu August 7, 2008 hiii, i know very less abt squid and linux, m in a college and my isp has blocked many of the sites and downloads , i need to unblock those sites as want to see my favourite football matches, so plz will anyone guide me how to unblock these sites and see streaming videos, my isp uses squid/2.6.STABLE6, plz reply…………….. Reply 151 shantanu August 12, 2008 if any one knows plz tell me e mail id is [email protected] !!! Reply 152 Baku August 27, 2008 Excellent article. The firewall script works fine in my GNU/Linux Debian Etch. However, the squid.conf should be update to squid 2.6 a later versions, which have the specific ‘transparent’ parameter. In addition, should be convenient add a fourth step: configure named daemon on squid host. Best regards Baku Reply 153 we3cares September 2, 2008 Very Good Work… :) But, I can tell a small easier step instead of grep -v “^#” /etc/squid/squid.conf | sed -e ‘/^$/d’ Use: # grep -v “^#” /etc/squid/squid.conf | cat -s Reply 154 Umer August 5, 2010 Gud .. Its working now 145 Reply 155 MikeC September 25, 2008 Good write up…question though. After setting everything up I get the following error when I try to access a site: While trying to retrieve the URL: / The following error was encountered: * Invalid URL Some aspect of the requested URL is incorrect. Possible problems: * Missing or incorrect access protocol (should be `http://” or similar) * Missing hostname * Illegal double-escape in the URL-Path * Illegal character in hostname; underscores are not allowed Any ideas would be appreciated! Reply 156 Nandkishor September 26, 2008 Hi vivek, I have configured the transperant proxy & also Blocked the downloading of movies & songs. But some peoples are downloads by using the torrent or utorrent. Can u tell me how to blocked this torrent downloading by using squid or pear to pear? Reply 157 Rizwan Ahmed October 24, 2008 nice help Reply 158 cpyd October 26, 2008 this is funny. okay first of all, thanks vivek, thanks a ton for your fantabulous article. I setup two servers using your script and it works great. save one freak stuff.. while i see everyone running around saying they cant accept anything except port 80, my problem is exact opposite! ie.. it seems my firewall is allowing every damn traffic through itself, and no, i dint change a thing in the script except, ofcourse the variables in beginning. 
the iptables -L command gives this :- 146 Chain INPUT (policy DROP) target prot opt source destination ACCEPT all -- anywhere anywhere ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED LOG all -- anywhere anywhere LOG level debug prefix `LOG_DROP ' ACCEPT all -- anywhere anywhere LOG all -- anywhere anywhere LOG level warning DROP all -- anywhere anywhere Chain FORWARD (policy ACCEPT) target prot opt source destination ACCEPT all -- anywhere anywhere Chain OUTPUT (policy ACCEPT) target prot opt source destination ACCEPT all -- anywhere anywhere ACCEPT all -- anywhere anywhere i commented out the unlimited LAN access line, and i was completely blocked out, including the webserver running on the same machine. Anyone out there who can point me in the right direction?? I want to allow only ports 25, 465, 110, 995, 443 and 80 through my proxy server.. thanks :) Reply 159 jayarm December 7, 2008 I want to allow two prot which used for VOIP (port 8661 10500) how can enable the same Please tell me with the example , i am using redhat my ip is 172.21.100.10 (eth0) 192.168.103.10 (eth1) Reply 160 Nick December 14, 2008 Is it possible to set a machine with one ethernet adapter on the network as a transparent proxy? 147 So my machine (“machine2″) on 10.0.0.2 becomes my default gateway (in the DHCP config), which in turn either transparently proxies or sends the packet on to the ‘real’ default gateway at 10.0.0.1. Machine2 would need to match incoming packets and if not destined for it, and not destined for port 80, forward them to the router. Incoming packets not destined for the machine2, but are destined for port 80, forward to the squid proxy. This would be neat, as it would simplify network layout, avoid having to have two subnets, and make bypassing the proxy a simple method of adding a static network config with a different default gateway. Reply 161 bashir December 26, 2008 Hi i m using squid 2.6 in Centos 5.1. But i found some errors: 1. arp 2. when i blocked the ip’s but even that allow please helpd bashir pakistan islamabad Reply 162 khzied December 28, 2008 Hi everybody, I have a problem with squid.. In my network internet, i would like to have connection in the same time like this: * some ip address connect to internet with authentification * some ip address connect to internet without authentification How can i do in squid configuration and iptables rules.. Thanks :) Reply 163 khzied December 28, 2008 with ipcop, i use the type “unrestricted user” that access internet without authentification.. Other user without type “unrestricted user” should connect by authentification.. 148 How can i do? Ps: I use squid 3.0 Thanks Reply 164 brijesh January 10, 2009 dear sir Sir i want to installation squitd proxy but not installedd please give the setup and how do you installed Reply 165 Ibru January 19, 2009 Hi, You have done an excellent work. How can I run fw.proxy script every time when my computer starts. Thanks Ibrhaim PP Reply 166 Bjornar January 28, 2009 Hi. When i load the script I get a error message: iptables: No chain/target/match by that name Someone know whats wrong? im a noob (A) Reply 167 needh January 29, 2009 I use your squid on ubuntu 7.04. It complains no httpd_accel, etc. If I remove those lines in squid.conf, that’s no proxy at all. Nothing in access.log. Reply 168 baxbixbux February 20, 2009 good … now i can setup squid 149 Reply 169 col February 23, 2009 Hi – thanks for the really useful information. 
I have now setup my main PC as a transparent proxy so can log and see all the websites that my family lan has been to. Is there a way to also log all MSN chat messages using squid? (we have a policy of open internet access, with the responsibility of where they choose to go being on the child, with them knowing that occasion spot checks of the logs will be carried out). Reply 170 iniabasi February 25, 2009 i have gone through all the comments here and I have done everything – configuring the squid 2.7 stable 13 and iptables in ubuntu 8.10. my problem is that i only browse when i fix the proxy in the explorer, the transparency does not work. when i add this line of code, i have errors: httpd_accel_host virtual httpd_accel_port 80 httpd_accel_with_proxy on httpd_accel_uses_host_header on. I am really at a loss on what to do. This what my squid conf looks like acl all src all acl manager proto cache_object acl localhost src 127.0.0.1/32 acl to_localhost dst 127.0.0.0/8 acl ECONOMICS src 10.0.0.0/24 # RFC1918 possible internal network http_access allow ECONOMICS acl SSL_ports port 443 # https acl SSL_ports port 563 # snews acl SSL_ports port 873 # rsync acl Safe_ports port 80 # http acl Safe_ports port 21 # ftp acl Safe_ports port 443 # https acl Safe_ports port 70 # gopher acl Safe_ports port 210 # wais acl Safe_ports port 1025-65535 # unregistered ports acl Safe_ports port 280 # http-mgmt acl Safe_ports port 488 # gss-http 150 acl Safe_ports port 591 # filemaker acl Safe_ports port 777 # multiling http acl Safe_ports port 631 # cups acl Safe_ports port 873 # rsync acl Safe_ports port 901 # SWAT acl purge method PURGE acl CONNECT method CONNECT http_access allow manager localhost http_access deny manager http_access allow purge localhost http_access deny purge http_access deny !Safe_ports http_access deny CONNECT !SSL_ports http_access allow localhost http_access deny all icp_access allow ECONOMICS icp_access deny all http_port 80 http_port 3128 transparent hierarchy_stoplist cgi-bin ? access_log /var/log/squid/access.log squid refresh_pattern ^ftp: 1440 20% 10080 refresh_pattern ^gopher: 1440 0% 1440 refresh_pattern -i (/cgi-bin/|\?) 0 0% 0 refresh_pattern (Release|Package(.gz)*)$ 0 20% 2880 refresh_pattern . 0 20% 4320 acl shoutcast rep_header X-HTTP09-First-Line ^ICY\s[0-9] upgrade_http0.9 deny shoutcast acl apache rep_header Server ^Apache broken_vary_encoding allow apache extension_methods REPORT MERGE MKACTIVITY CHECKOUT visible_hostname EconnetServer hosts_file /etc/hosts coredump_dir /var/spool/squid Please can someone help me. Thanks. Reply 171 manjunath February 25, 2009 151 Hi, I do have setup internet->router(cisco 2600)->firewall (506 E)->Cisco Switch (6500) no routing captability ->DHCP Server->Lan . Planning to have Squid transparent proxy. Plz help me how to setup I am new to Squid project. Manjunath Reply 172 Xavier February 27, 2009 Hi all, My Squid server works fantastically with the script above if I only have 2 network adapters enabled. I have an eth2 that I wish Apache to listen on as I was getting some oddities with it running on eth0 and eth1 which i am guessing is attributed to SQUID. I can configure Apache to listen on eth2 ok, the problem is as soon as I enable and start eth2 everything dies. eth0 and eth1 are unpingable and squid doesn’t work. All I am doing is an out of the box version of squid with a very basic conf and the script above. Any help? Thanks, Xavier. Reply 173 hana March 5, 2009 is it possible to implament transparent proxy using only one NIC? 
Reply 174 kpm March 14, 2009 We are using two ip numbers for accessing internet and intranet. The IP 172.16.0.0/24 is for accessing our Intranet application from our remote office. The IP 192.168.1.0/24 is local broadband connection used for accessing internet locally. I want to access both the connection in a single 152 IP by configuring linux squid proxy sever. Can u please help me out how to do the settings. Reply 175 Christofer March 17, 2009 Thanks cyberciti for the great tutorial, help me a lot. Reply 176 vijay March 29, 2009 This setup can use in fedora 10 Reply 177 Tricky April 15, 2009 I like how you’ve built this post. The httpd entries don’t seem to work on my server however its not a particularly important function for me. I think perhaps it wasn’t built into the build I have from Arch Linux. On a purely academic note, I often work with grep and sed and I recognised some even shorter ways to strip the squid.conf file. The shortest is still a combination: grep . /etc/squid/squid.conf|sed '/ *#/d' unless you want to actually strip it inline: sed -i '/ *#/d; /^ *$/d' /etc/squid/squid.conf Reply 178 Bruce Smith April 16, 2009 I’m looking for help for a fix. i work at a school. and im looking to run squid to speed up net access i have 2 up stream proxy’s we use 1 for kids 1 for staff, and i want to bind them in to 1 proxy in school with 2 ports. so port 8080 for students caching from upstream proxy student.proxy port 80 so port 8099 for staff caching from upstream proxy staff.proxy port 80 any one any clues ? Reply 179 nichive April 26, 2009 to da point, I need some help with this configuration 153 I’m running my squid on Ubuntu Server 8.10 with the transparent configuration applied, and the iptables script made, without any error on the start/restart part. but my problem is, I can’t open anything through any web-browser that is installed on my Local Area Network but if I try some ping command to any web-address, it works fine pitty, not doing so with the web-browser anyhelp would be appreciated :) Reply 180 nichive April 26, 2009 ignore my last question, I found out what my problem was.. my machine was a fresh installed one, didn’t have the masquerading method… just run the following command and voila $ sudo apt-get install ipmasq Reply 181 dave love May 7, 2009 I am using this setup but I am having trouble connecting to port 443. Any ideas? Do I need to tell it to use 443 and 80 in the squid.conf? Reply 182 Md. Saidur Hasan May 10, 2009 hi boss, it’s working but problem with the email. i can’s download my email in outlook. my configuration is as follows # cat /etc/squid/squid.conf | sed ‘/ *#/d; /^ *$/d’ Output ——————– http_port 3128 transparent hierarchy_stoplist cgi-bin ? acl QUERY urlpath_regex cgi-bin \? cache deny QUERY acl apache rep_header Server ^Apache broken_vary_encoding allow apache 154 cache_mem 32 MB access_log /var/log/squid/access.log squid auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/passwd auth_param basic children 30 auth_param basic realm Squid proxy-caching web server auth_param basic credentialsttl 2 hours auth_param basic casesensitive off refresh_pattern ^ftp: 1440 20% 10080 refresh_pattern ^gopher: 1440 0% 1440 refresh_pattern . 
0 20% 4320 acl all src 0.0.0.0/0.0.0.0 acl manager proto cache_object acl localhost src 127.0.0.1/255.255.255.255 acl to_localhost dst 127.0.0.0/8 acl SSL_ports port 443 acl CONNECT method CONNECT acl bad_sites dstdomain “/etc/squid/squid-block.acl” http_access deny bad_sites acl esl src 172.16.10.0/24 http_access allow manager localhost http_access deny manager http_access deny !Safe_ports http_access deny CONNECT !SSL_ports http_access allow localhost http_access allow esl http_access deny all http_reply_access allow all icp_access allow all cache_mgr [email protected] visible_hostname ESL-NNC coredump_dir /var/spool/squid please help me.. Reply 183 chrkc May 25, 2009 Hi, I have three systems, my apache web server is running on 192.168.0.26 machine, squid/proxy is running on 192.168.0.25 and my firewall/shorewall is 155 running on 192.168.0.20 And there is a local network 192.168.0.X of systems with gateway mentioned as 192.168.0.20. Can anyone tell me how do i manage in a way that all the http requests made are directed to the squid/proxy? As the people in the local network through the browser direct connection are able to open sites that were restricted through the proxy settings. Thanks Reply 184 Wiki June 8, 2009 Where can i find or where should i paste the following commands? in line number? httpd_accel_host virtual httpd_accel_port 80 httpd_accel_with_proxy httpd_accel_uses_host_header on Reply 185 Nand June 17, 2009 I have setup the squid using transperant proxy & in iptables I have chnge the polixy of filter table to DROP. Everything is working fine. But any idea how to block the torrent downloading? what iptables rules are want to setup? Regards, Nandkishor Reply 186 Rashid Iqbal June 27, 2009 hi friends I am new to linux. right now i am using the fedora… I configure the proxy and configure the iptables to forward the traffic Microsoft Outlook . now there is a problem that users are able to browse withoutt the client proxy settings…… although I only add the iptables script that forward the port 80 traffic to port 3128 that users should go through proxy… secondly we are using the citrix server……… how to enable remote users to connect out db server through citrix server… using TCP 1494 and 156 UDP is 1600 to 1699… and tcp is 80.. and how to restrict the wireless users that they should go thorugh proxy…. and finally I want that only some specific users to use the internet through client proxy settings and remaining will be blocked…. please help me in this regard……..I will be highly obliged.. Reply 187 Rashid Iqbal June 27, 2009 Friends I am new to squid I want to configure the proxy server with squid but not with the transparent…. like that every used should put the ipaddress+port 3128….. secondly I want to receive the emails on Microsoft Outlook… for this purpose I use the iptables now mail is working but user can bypass the proxy after putting the proxy address into the clients gateway.. please help me to solve this issue.. Reply 188 Anindya Banerjee July 6, 2009 How can I install and configure squid proxy in my red hat linux system. Reply 189 Mohd Anas July 14, 2009 Hi, Can someone suggest how can I configure my squid http proxy for FTP also. And what are the settings for ftp client like filezilla. Thanks Reply 190 Gregory I Okumoro July 22, 2009 Hi, I am new to Linux but I like what you have to say about port 80 redirection to port 3128. Currently, my website is unavailable online because the Cable Company (ISP) has blocked all the ports that I have to work except port 3128. 
157 !. What is the directory of the firewalls to which I have to copy the “firewall” scripts? 2.What directory do I copy “fw.proxy” to? Thanks, Gregory Omkpokoro Reply 191 Ajit Upadhyay August 4, 2009 Hi! I have a server with eth0 (10.126.2.101) connected to my ISP (proxy 10.31.31.10:3128 with authentication ie. userid/pwd) and eth1 (192.168.1.1) connected to local network through a fast ethernet switch. The server is also a DHCP sever for local network (192.168.1.2 – 192.168.1.254). Now, I have configured squid on this server so that local netwrok PCs can access internet thorugh my server (which is behind ISP’s authenticated proxy). The detail of squid.conf is listed below: ——————– acl manager proto cache_object acl localhost src 127.0.0.1/32 acl to_localhost dst 127.0.0.0/8 acl localnet src 10.0.0.0/8 acl localnet src 172.16.0.0/12 acl localnet src 192.168.1.1 acl SSL_ports port 443 acl Safe_ports port 80 acl Safe_ports port 21 acl Safe_ports port 443 acl Saf_ports port 70 acl Safe_ports port 210 acl Safe_ports port 1025-65535 acl Safe_ports port 280 acl Safe_ports port 488 158 acl Safe_ports port 591 acl Safe_ports port 777 acl purge method PURGE acl CONNECT method CONNECT access_log /var/log/squid/access.log acl plasma_net src 192.168.1.2 acl plasma_net src 192.168.1.3 acl plasma_net src 192.168.1.4 acl plasma_net src 192.168.1.5 http_access allow plasma_net acl lan src 10.126.2.101 192.168.1.1 http_access allow localhost http_access allow lan http_access allow all http_access allow localnet http_access deny all acl ftp proto FTP http_access allow ftp http_access allow manager localhost http_access deny manager http_access allow purge localhost http_access deny purge http_access deny !Safe_ports http_access deny CONNECT !SSL_ports http_reply_access allow all icp_access allow all icp_access allow localnet icp_access deny all 159 htcp_access allow localnet htcp_access deny all http_port 192.168.1.1:3128 transparent hierarchy_stoplist cgi-bin ? cache_mem 8 MB memory_replacement_policy lru cache_replacement_policy lru cache_dir ufs /var/cache/squid 100 16 256 minimum_object_size 0 KB maximum_object_size 4096 KB cache_swap_low 90 cache_log /var/log/squid/cache.log cache_store_log /var/log/squid/store.log emulate_httpd_log off ftp_passive on refresh_pattern ^ftp: 1440 20 10080 refresh_pattern ^gopher: 1440 0 1440 refresh_pattern (cgi-bin|\?) 0 0 0 refresh_pattern . 0 20 4320 always_direct allow all connect_timeout 2 minutes client_lifetime 1 days cache_mgr webmaster visible_hostname plasma1 icp_port 3130 error_directory /usr/share/squid/errors/English coredump_dir /var/cache/squid cache_swap_high 95 160 ——————When any PC on network tries to use internet, I get following error in my access.log and —————————————————— 1249380227.459 10 192.168.1.4 TCP_REFRESH_UNMODIFIED/304 259 GET http://webmail1.cat.ernet.in/newmail/images/dotted_bullet.gif – DIRECT/10.11.100.123 1249380237.766 294 192.168.1.4 TCP_MISS/503 2419 GET http://www.google.com/ – DIRECT/209.85.231.104 text/html 1249380328.894 290 192.168.1.4 TCP_MISS/503 2468 GET http://www.google.com/ – DIRECT/209.85.231.104 text/html 1249380437.333 184 192.168.1.4 TCP_MISS/503 2350 GET http://www.yahoo.com/ – DIRECT/69.147.76.15 text/html ———————————————the user gets following error: while trying to retrieve the URL http://www.yahoo.com/ The following error was encountered: Connection to 69.147.76.15 Failed. 
The system returned: (101) Network is unreachable [whereas, i am able to access above url / ip from server] PLEASE, HELP me resolve this issue.
Reply 193 Ajit Upadhyay August 4, 2009 further info: OS: openSuSE 11.0 Also, I have disabled firewall, as of now (MY ISP is highly secure / protected). Reply 194 Ajit Upadhyay August 4, 2009 I have also set in squid.conf ———————– cache_peer 10.31.31.10 parent 3128 0 no-query prefer_direct off ———————– where my ISP’s proxy is 10.31.31.10:3128 but the error still continues. Reply 195 Javier August 17, 2009 Hello worot exactly the script and got a problem I can not see my etho that connect with my local lan. How I can delete this script javier Reply 196 Javier August 18, 2009 After I complete the script I got a problem I can see the eth0 that is connected to my local network Reply 197 Marc August 18, 2009 164 Hello, I’m using a transparent proxy bridge, and I noticed that a download never completes and it always cuts, as to connection to the server is reset ! I’m using these rules in the firewall : ebtables -t broute -A BROUTING -p IPv4 –ip-protocol 6 –ip-destinationport 80 -j redirect –redirect-target ACCEPT iptables -t nat -A PREROUTING -i eth0 -p tcp –dport 80 -j REDIRECT – to-port 8080 iptables -t nat -A PREROUTING -i eth1 -p tcp –dport 80 -j REDIRECT – to-port 8080 iptables -t nat -A PREROUTING -i br0 -p tcp –dport 80 -j REDIRECT – to-port 8080 Where port 8080 is the dansguardian port for url filtering. Any idea why the connection resets ? It’s like a tcp reset is being done. Thanks. Reply 198 jac August 18, 2009 Ehy, pay attention kotnik’s sed trick delete ALL rows that CONTAIN a #, not just that START with # Reply 199 John September 3, 2009 Hi, I am running a transparent bridge with squid and dansguardian. I noticed that a download can never complete and I get the message “The connection with the server was reset” as soon as the download starts. Very small files ( < 1MB ) are hardly able to finish. Browsing is fine, the problem is only with the downloads and they always cut. Anybody's having a similar problem with a transparent bridge ? Appreciate your help solving this critical matter. Thanks. John Reply 200 theleftfoot September 3, 2009 hey guys, 165 i hope someone can help me out….i’ve got problems withe the following two steps: Save shell script. Execute script so that system will act as a router and forward the ports: # chmod +x /etc/fw.proxy # /etc/fw.proxy # service iptables save # chkconfig iptables on Start or Restart the squid: # /etc/init.d/squid restart # chkconfig squid on it doesn’t work! got these error test:/ # chmod +x /etc/fw.proxy test:/ # /etc/fw.proxy test:/ # service iptables save [b]service: no such service iptables[/b] test:/ # can someone help me out? cheers raffa Reply 201 Anant Patel September 18, 2009 hello!!! my collage server blocked many ports like 3128,8822,3127,8125,8130…so i cant access net..i have to use only collage provided net…what can i do?? they stop also ports in utorrent… plz help me.. thank u.. Reply 202 safdar azam September 24, 2009 hello. i am using Linux redhat version 3 and i have two lan port both are configured so i want to share my internet connection to winbee thin client. tell me how can connect with thinclient. 
plz i am witing Reply 166 203 Stolz October 7, 2009 AFAIK, the rule “iptables -A OUTPUT -o lo -j ACCEPT” is redundant because the default policy rule “iptables -P OUTPUT ACCEPT” already allows all outgoing traffic in all interfaces Reply 204 Baswaraj Ramshette November 13, 2009 Hi, I have followed whatever steps you have given in this article regarding transparent proxy configuration , I did everything according to your article I am getting following error please help me /etc/init.d/squid restart Stopping squid: 2009/11/13 12:42:28| parseConfigFile: line 4519 unrecognized: ‘httpd_accel_host virtual’ 2009/11/13 12:42:28| parseConfigFile: line 4520 unrecognized: ‘httpd_accel_port 80′ 2009/11/13 12:42:28| parseConfigFile: line 4521 unrecognized: ‘httpd_accel_with_proxy on’ 2009/11/13 12:42:28| parseConfigFile: line 4522 unrecognized: ‘httpd_accel_uses_host_header on’ . [ OK ] Starting squid: . [ OK ] On client side The requested url could not be retrive . Reply 205 Jeffry November 25, 2009 I need help, I use Ubuntu Jaunty 9.04, want to configure Squid, and everyting is okey, cause I took a proxy 1.1.1.1:3128 in every browser. but if i want to make the squid being transparent. i still get nothing. all i do is just put transparent next http_port 3128 . and few configuration like above. then put iptables like as usuall.. iptables -t nat -A PREROUTING -p tcp –dport 80 -j REDIRECT –to-port 3128 and in ubuntu, the iptables version is 1.1.4.1 please advice… my hair become “fall season” :`( Reply 167 206 e December 9, 2009 how do i get on myspace from school Reply 207 Live December 15, 2009 Does anybody’s question ever get answered in this tutorial? This tutorial is obsolete in later versions of SQUID! Reply 208 Sye MUshtaq Ahmed December 24, 2009 Hello, Really the guide is wonderful and it worked 100% for me and even the clients using it are amazed with its speed. But there is one problem now !!! When client access Email, like yahoo and hotmail any others in i.e: massege will show after few seconds this page can’t be dis[layed plz solve my problem ASAP REGARDS Reply 209 Sam December 31, 2009 Hello, I facing a problem when setup the server as router. My client can ping to eth 1 and eth 0 succesfully. However the client can’t browse internet through proxy servy (eth 0). For your information, i setup the proxy server follow exactly what was writen hre. May i know what is the problem? Thanks ! Reply 210 Devinka January 16, 2010 HI , Thanks for the howto . it works fine . Reply 211 Lalit Kumar January 16, 2010 Hi All, 168 i have a issue with my transparent squid server it is working transparet for it’s own subnet or vlan systems . Like my sqy=uid server ip is 172.16.110.24 and it;s working fine for a system with ip 172.16.110.22 . but it is not working transparently for other systems like 172.16.119.37 and 172.16.122.43 i add acl mynet src 172.16.110.0 /24 172.16.119.0/24 http_access allow mynet . but it is working only for same vlan systems why ? can anyone help me out in this issue Reply 212 gopi chand January 19, 2010 where can I add the following line in squid.conf . please help me anybody .the line are httpd_accel_host virtual httpd_accel_port 80 httpd_accel_with_proxy on httpd_accel_uses_host_header on acl lan src 192.168.1.1 192.168.2.0/24 http_access allow localhost http_access allow lan Reply 213 Kartik Vashishta February 4, 2010 So I have to enable IP rotuing for this to work, what is the command to do that…tell eth0 to route to eth1? 
Reply 214 bobzi February 12, 2010 Dear LINUXTITLI I configured Squid 2.5 with your configuration. Everything is fine but HTTPS sites don’t accept request. I’ve tried several times to open HTTPS (SSL Port) in iptables by some different commands, however I still have problem. On the other hands, when I set Proxy in Internet Option tab, clients can open Secure sites, when I erase the proxy setting only the secure site has a problem to login. And also I need setup clients without 169 any setting in browser for some reasons. Actually I have a serious problem in this setting. I need some help. Could you please give a solution?! Dear LINUXTITLI or somebody else. I will be grateful. Many thanks Reply 215 Fredl February 12, 2010 Hi, kotnik’s magic filter in posting #4 ignores the greediness of sed. His code will hide any lines containing a ‘#’ (and following comment) somewhere in them. This will reflect an uncomplete setup. Better use this grep-only command: grep -vE ‘^#|^*$’ /etc/squid3/squid.conf To all the help-seekers here: Better try a suitable forum for your questions, a blog like this one is far from being a perfect platform for helping with configuration mistakes. Regards, Fredl. Reply 216 Fredl February 12, 2010 NB: Sorry, forgot to say “thank you” for the fine tutorial, LINUXTITLI! :) @Lalit Kumar: try acl mynet src 172.16.110.0/24 172.16.119.0/24 172.16.122.0/24 or simplier (but less restrictive): acl mynet src 172.16.0.0/16 Most of the others here have some typos, too… Reply 217 Manoj February 15, 2010 I configured RHEL5 squid server as an proxy server in windows envirnoment, it give me an problem for outlook express & for Ms outlook that users on windows side are not able to send & recieve their e-mails. However i have open the safe ports & iptable rule’s. 170 Also, i want to configure an squid server as an proxy server in such way that some of the users are not able to access the specific web sites but some users are able to access same websites. While users get their IP’s from DHCP server. Reply 218 saltio May 12, 2010 outlook express & for Ms outlook that users on windows side are not able to send & recieve their e-mails. What are the commands to open the safe ports & iptable rule’s. Thanks for the setup – this will save alot of time. Reply 219 vikram February 24, 2010 I have always noticed one thing, while going for transparent squid or IP MASQUERADING, i always have to keep by named service on. and specify the DNS ip settings in client. Is dns necessary. because we dont need that in normal squid (non-transparent). Kindly Guide Reply 220 bezt March 4, 2010 can U tell me how i configure my iptables to non-transparen proxy Thx b4 regards Reply 221 Sharon March 9, 2010 Hi i am very bad at Linux and failed many a time, but want to setup a similar system including web content filtering using dansgaurdian package. This system is intented for use in non-profit organisations with which i am associated. If somebody could spare some time to setup this system please mail me back at my email address [email protected] Best Regards, Sharon. Reply 222 Anil March 19, 2010 171 I want to setup squid proxy servers ( three ) with one gateway server. I know it can be done by linux LVS. can somebody give me detailed howto or step by step guide to setup this. 
Thanks in advance Reply 223 Nick April 9, 2010 Please Help, i have installed and configured squid-3.1.1 on open suse 10.2 but and it starts well but for some reason client machines cant access internet through squid, I have one LAN port connected to the switch and i want all computers to use it as a proxy server with port 8080. Do i need to install Apache as well?..Below are the configurations acl manager proto cache_object acl localhost src 127.0.0.1/32 acl localhost src ::1/128 acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 acl to_localhost dst ::1/128 acl mrc src 10.0.1.0/24 acl localnet src 10.0.0.0/24 # RFC1918 possible internal network acl localnet src 172.16.0.0/12 # RFC1918 possible internal network acl localnet src 192.168.0.0/16 # RFC1918 possible internal network acl localnet src fc00::/7 # RFC 4193 local private network range acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines acl SSL_ports port 443 acl Safe_ports port 80 # http acl Safe_ports port 21 # ftp acl Safe_ports port 443 # https acl Safe_ports port 70 # gopher acl Safe_ports port 210 # wais acl Safe_ports port 1025-65535 # unregistered ports acl Safe_ports port 280 # http-mgmt acl Safe_ports port 488 # gss-http acl Safe_ports port 591 # filemaker acl Safe_ports port 777 # multiling http acl CONNECT method CONNECT http_access allow manager localhost http_access deny manager http_access deny !Safe_ports 172 http_access allow safe_ports http_access deny CONNECT !SSL_ports http_access allow localnet http_access allow localhost http_access allow mrc http_access deny all icp_access allow localnet icp_access deny all htcp_access allow localnet http_port 3128 http_port 8080 hierarchy_stoplist cgi-bin ? cache_dir ufs /usr/local/squid/var/cache 1000 16 256 access_log /usr/local/squid/var/logs/cache.log squid cache_access_log /usr/local/squid/var/logs/access.log squid cache_store_log /usr/local/squid/var/logs/store.log squid cache_store_log /usr/local/squid/var/logs/store.log squid coredump_dir /usr/local/squid/var/cache coredump_dir /var/spool/squid refresh_pattern ^ftp: 1440 20% 10080 refresh_pattern ^gopher: 1440 0% 1440 refresh_pattern -i (/cgi-bin/|\?) 0 0% 0 refresh_pattern . 0 20% 4320 cache_mgr root visible_hostname mskproxy.mrcuganda.org icp_port 3130 always_direct allow all cache_effective_user squid cache_effective_group squid htcp_port 4827 cache_mgr [email protected] Reply 224 ammar ali April 13, 2010 i need all proxy seting Reply 225 Sarmed Rahman April 18, 2010 a million thanks ^_^ 173 Reply 226 Prasad May 13, 2010 thanks for the info. i was really in need of this. Reply 227 hmtum01 May 19, 2010 how can i block user according to the mac address filtering in trasparent squid proxy. which is the version of that squid Reply 228 rocky May 31, 2010 thanks Reply 229 Alex Y. Telkov (Russia) June 2, 2010 Thank a lot! I have a problem with Total Commander while users from local net try to access FTP resources. I have classic architecture in local HQ lan “LAN — Linux-router — CISCO 871-k9 — Internet”. I apologize, You approach in solving FTPport-error problem helps me to solve my situation. If my “server-under-construction” be turned on at moment, I start to emplement You solution remotely immideatly! 
:) Reply
230 Pradip Raut Chhetri June 6, 2010 I have done everything, 3 easy steps for transparent proxy, but every time i restart squid i am getting an error regarding the following: httpd_accel_host virtual httpd_accel_port 80 httpd_accel_with_proxy on httpd_accel_uses_host_header on Help me, do i have to set up an httpd server before configuring your "3 easy steps transparent proxy"? Thank YOU Reply
231 gbrane June 14, 2010 Important!!! for Ubuntu users: in /etc/sysctl.d/10-network-security.conf these lines must be commented out: net.ipv4.conf.default.rp_filter=1 net.ipv4.conf.all.rp_filter=1 I lost one month solving this problem! Reply
232 DEEPAK June 30, 2010 can anybody help with the linux firewall configuration? this is my first time using it, please help me configure it or send some link or commands. Reply
233 Vijith P A August 31, 2010 Hai Guyz, I configured the proxy server as transparent in the above mentioned way except this code: httpd_accel_host virtual httpd_accel_port 80 httpd_accel_with_proxy on httpd_accel_uses_host_header on When i try to access the internet on the client side it shows the error message "The following error was encountered while trying to retrieve the URL: / Invalid URL" Actually i typed http://www.google.com The error message in the /var/log/squid3/access.log file is 1283269708.780 0 192.168.1.121 NONE/400 1951 GET /firefox – NONE/- text/html Reply
234 tendy September 9, 2010 Will anyone ever give a solution to this problem??? httpd_accel_host virtual httpd_accel_port 80 httpd_accel_with_proxy on httpd_accel_uses_host_header on Help me, do i have to set up an httpd server before configuring your "3 easy steps transparent proxy"?

z-computer-z
All About Computer and Internet

Make Transparent Proxy With Squid on Linux (Ubuntu 9.10)
AUTHOR: ZULIAN | POSTED AT: MONDAY, JANUARY 18, 2010 | FILED UNDER: CREATING TRANSPARENT PROXY WITH SQUID AND IPTABLES, SETUP SQUID UBUNTU 9.10, UBUNTU SERVER 9.10 SQUID TRANSPARENT

To make a transparent proxy you need to redirect all the ports that you want to the squid port. This article will guide you through making a transparent proxy server on Ubuntu 9.10. The first thing you need to do is install squid on the computer that will become the proxy server. You can install it with the apt-get command, like:

$ sudo apt-get install squid

Then you need to configure your squid.
Open your squid configuration file /etc/squid/squid.conf and add this line at the http_port tag (under "# Squid normally listens to port 3128"):

http_port 3128 transparent

And then make your own rules. In this example I will only use the minimal configuration. Add these lines to define the network (LAN) and permit the network to use the squid proxy:

acl LAN src 192.168.2.0/24
http_access allow LAN
icp_access allow LAN

Save the squid configuration file and restart squid to make the changes take effect. You can restart squid with this command:

$ sudo service squid restart

That minimal configuration will make squid run, but it is not transparent yet. To make it transparent you need to configure your iptables. You need to make an iptables configuration file on your gateway, like this (assume your proxy server is on IP 192.168.2.1):

$ sudo vi /etc/iptables.conf

then write this on that file:

*filter
:INPUT ACCEPT [4:212]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [4:172]
COMMIT
# Completed on Sat May 30 15:59:04 2009
# Generated by iptables-save v1.4.0 on Sat May 30 15:59:04 2009
*nat
:PREROUTING ACCEPT [0:0]
:POSTROUTING ACCEPT [1:52]
:OUTPUT ACCEPT [1:52]
[2:120] -A PREROUTING -s 192.168.2.0/24 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.2.1:3128
[0:0] -A PREROUTING -s 192.168.2.0/24 -p tcp -m tcp --dport 81 -j DNAT --to-destination 192.168.2.1:3128
[0:0] -A PREROUTING -s 192.168.2.0/24 -p tcp -m tcp --dport 8080 -j DNAT --to-destination 192.168.2.1:3128
[0:0] -A PREROUTING -s 192.168.2.0/24 -p tcp -m tcp --dport 3128 -j DNAT --to-destination 192.168.2.1:3128
[0:0] -A POSTROUTING -s 192.168.2.0/24 -j MASQUERADE
COMMIT

Save your iptables configuration file, and then make another file so your iptables rules will always be loaded when your computer boots. Make the file:

$ sudo vi /etc/init.d/iptables

Write on that file:

#! /bin/bash
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables-restore < /etc/iptables.conf

Save the file, then make it executable:

$ sudo chmod +x /etc/init.d/iptables
$ sudo update-rc.d iptables defaults

You may need to reboot your computer to make it work. Well done, now your transparent proxy is ready to use.
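Once the squid and iptables pieces from the write-up above are in place, a quick sanity check can save a lot of guesswork. The commands below are a minimal sketch and assume the standard Ubuntu squid package paths used in this article (adjust /var/log/squid/access.log if your package logs elsewhere):

# on the gateway: confirm the DNAT rules were actually loaded
$ sudo iptables -t nat -L PREROUTING -n -v

# then browse from a client that uses 192.168.2.1 as its default gateway
# (with no proxy configured in the browser) and watch the requests arrive
$ sudo tail -f /var/log/squid/access.log

If the PREROUTING counters stay at zero while you browse, the clients are not routing through the gateway; if the counters increase but nothing shows up in the access log, squid is probably not listening on port 3128 or the transparent option is missing from http_port.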
Transparent Caching/Proxy
From Squid User's Guide
Transparent caching is the art of getting HTTP requests intercepted and processed by the proxy without any form of configuration in the browser (neither manual nor automatic configuration). This involves firewalling and routing rules to have packets with destination port 80 forwarded to the proxy port, and some Squid configuration to tell Squid that it is being called in this manner, which differs slightly from normal proxied requests.
Transparent Cache/Proxy with Squid versions prior to 2.6
Prior to Squid 2.6 there was no quick and direct method of enabling Squid to be a transparent proxy. This has since changed in the latest stable version of Squid, and it is highly recommended that the latest stable version be used in preference to any previous edition, unless there is an overriding reason to use an older release. In older versions of Squid, transparent proxy was almost a "hack", achieved through the use of the httpd_accel options. Transparent proxy can be achieved in these versions of Squid by appending or uncommenting the following four lines in the squid.conf file:
httpd_accel_host virtual
httpd_accel_port 80
httpd_accel_with_proxy on
httpd_accel_uses_host_header on
These four lines inform Squid to run as a transparent proxy. Below is a list of what each individual line achieves:
httpd_accel_host virtual - This tells the accelerator to work for any URL that it is given (the usual usage for the accelerator is to inform it which URL it must accelerate).
httpd_accel_port 80 - Informs the accelerator which port to listen to. The accelerator is a very powerful tool and much of its usage is beyond the scope of this section; the only knowledge required here is that this setting ensures that the transparent proxy accesses the websites we wish to browse via the correct HTTP port, where the standard is port 80.
httpd_accel_with_proxy on - By default, when Squid has its accelerator options enabled it stops being a cache server. To reinstate this (obviously important, as the whole purpose behind this configuration is a cache server), we turn the httpd_accel_with_proxy option on.
httpd_accel_uses_host_header on - In a nutshell, with this option turned on Squid is able to find out which website you are requesting.
Transparent Cache/Proxy with Squid version 2.6 and beyond
In this version of Squid, transparent proxy has been given a dedicated parameter, the transparent parameter, and it is given as an argument to the http_port tag within the squid.conf file, as the following example demonstrates:
http_port 192.168.0.1:3128 transparent
In this example, the IP address that Squid is set to listen on is 192.168.0.1, using port number 3128, and your firewall rules are already set up to transparently intercept port 80 and forward it to this port. The transparent option is then used to inform Squid that this IP and port should be listened to as a transparent proxy. This completes the configuration of Squid as a transparent proxy server (yes, that's right, all done, apart from the ACL rules and generic settings that you should have set by now after reading the sections of this guide prior to this one). Please note that to use this you will need to compile the necessary feature into your Squid binary. Please read the information on transparent proxy in the Installing Squid section for more details. Do not be alarmed by a Squid binary recompile at this stage; Squid should not overwrite your edited squid.conf file, but make sure to back it up just in case!
For a full solution for Squid > 2.6, including iptables, you can see this article: http://www.lesismore.co.za/squid3.html
Retrieved from "http://www.deckle.co.za/squid-users-guide/Transparent_Caching/Proxy"
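Whichever of the two styles above applies to your Squid version (the httpd_accel_* lines before 2.6, or http_port ... transparent from 2.6 on), it can save a restart loop to let Squid check the edited configuration first. This is a general suggestion rather than part of the guide above; the binary may be installed as squid or squid3 depending on your distribution:
squid -k parse           # parse squid.conf and report any syntax errors without restarting
/etc/init.d/squid restart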
Squid transparent proxy
tomclegg.net, posted May 5, 2003
To set up squid as a transparent proxy using FreeBSD...
Install FreeBSD. Set up NAT or whatever, and install this machine as your gateway. Log in as root.
Install squid from ports.
cd /usr/ports/www/squid && make install
Move the cache directory to /tmp.
mkdir /tmp/squid
mv /usr/local/squid/cache /tmp/squid/cache
ln -s /tmp/squid/cache /usr/local/squid/
Set up a "squid" user account.
pw useradd squid -d /nonexistent -s /usr/bin/true
chown -R squid:squid /tmp/squid/cache /usr/local/squid/logs
Edit /usr/local/etc/squid/squid.conf
# diff squid.conf.default squid.conf
474a475
> cache_mem 48 MB
524a526
> maximum_object_size_in_memory 128 KB
670a673
> cache_dir ufs /usr/local/squid/cache 200 64 128
982a986
> #redirect_program /usr/local/libexec/adzap
1311a1316
> request_body_max_size 4 MB
1753c1758,1759
< #http_access allow our_networks
---
> acl our_networks src 10.129.0.0/16 199.60.150.0/24
> http_access allow our_networks
1937a1944
> cache_mgr [email protected]
1953a1961,1962
> cache_effective_user squid
> cache_effective_group squid
1963a1973
> visible_hostname YOUR.HOST.NAME.HERE
2051a2062,2063
> httpd_accel_host virtual
> httpd_accel_port 0
2080a2093
> httpd_accel_with_proxy on
2100a2114
> httpd_accel_uses_host_header on
2540a2555,2556
> header_access Via deny all
> header_access X-Forwarded-For deny all
Create the cache directories.
squid -z
Start squid.
/usr/local/etc/rc.d/squid.sh start
Enable firewall and transparent proxy support in the kernel. Example using FreeBSD 4:
cd /usr/src/sys/i386/conf
cp -i GENERIC MYKERNEL
cat <<EOF >>MYKERNEL
options IPFIREWALL
options IPFIREWALL_FORWARD
options IPFIREWALL_DEFAULT_TO_ACCEPT
options IPDIVERT # if you intend to use NAT
EOF
config MYKERNEL
cd ../../compile/MYKERNEL
make depend && make && make install && reboot
Make sure squid is listening on port 3128.
ps axw | grep squid
netstat -a -n | grep -w 3128
Redirect all HTTP traffic passing through the machine to squid.
echo firewall_type=/etc/firewall.local >>/etc/rc.conf
cat <<EOF >>/etc/firewall.local
fwd YOUR.IP.ADDR.HERE,3128 tcp from not me to any 80
EOF
Watch the log file as you load web pages.
tail -f /usr/local/squid/logs/access.log
Transparent Proxy with Linux and Squid mini-HOWTO
Daniel Kiracofe
v1.15, August 2002
This document provides information on how to set up a transparent caching HTTP proxy server using only Linux and squid.
1. Introduction
1.1 Comments
1.2 Copyrights and Trademarks
1.3 #include <disclaimer.h>
2. Overview of Transparent Proxying
2.1 Motivation
2.2 Scope of this document
2.3 HTTPS
2.4 Proxy Authentication
3. Configuring the Kernel
4. Setting up squid
5. Setting up iptables (Netfilter)
6.
Transparent Proxy to a Remote Box 6.1 First method (simpler, but does not work for some esoteric cases) 6.2 Second method (more complicated, but more general) 6.3 Method One: What if iptables-box is on a dynamic IP? 7. Transparent Proxy With Bridging 8. Put it all together 9. Troubleshooting 10. Further Resources 1. Introduction 1.1 Comments Comments and general feedback on this mini HOWTO are welcome and can be directed to its author, Daniel Kiracofe, at [email protected]. 190 1.2 Copyrights and Trademarks Copyright 2000-2002 by Daniel Kiracofe This manual may be reproduced in whole or in part, without fee, subject to the following restrictions: The copyright notice above and this permission notice must be preserved complete on all complete or partial copies Translation to another language is permitted, provided that the author is notified prior to the translation. Any derived work must be approved by the author in writing before distribution. If you distribute this work in part, instructions for obtaining the complete version of this manual must be included, and a means for obtaining a complete version provided. Small portions may be reproduced as illustrations for reviews or quotes in other works without this permission notice if proper citation is given. Exceptions to these rules may be granted for academic purposes: Write to the author and ask. These restrictions are here to protect us as authors, not to restrict you as learners and educators. Any source code (aside from the SGML this document was written in) in this document is placed under the GNU General Public License, available via anonymous FTP from the GNU archive. 1.3 #include <disclaimer.h> No warranty, expressed or implied, etc, etc, etc... 2. Overview of Transparent Proxying 2.1 Motivation In ``ordinary'' proxying, the client specifies the hostname and port number of a proxy in his web browsing software. The browser then makes requests to the proxy, and the proxy forwards them to the origin servers. This is all fine and good, but sometimes one of several situations arise. Either 191 You want to force clients on your network to use the proxy, whether they want to or not. You want clients to use a proxy, but don't want them to know they're being proxied. You want clients to be proxied, but don't want to go to all the work of updating the settings in hundreds or thousands of web browsers. This is where transparent proxying comes in. A web request can be intercepted by the proxy, transparently. That is, as far as the client software knows, it is talking to the origin server itself, when it is really talking to the proxy server. (Note that the transparency only applies to the client; the server knows that a proxy is involved, and will see the IP address of the proxy, not the IP address of the user. Although, squid may pass an X-Forwarded-For header, so that the server can determine the original user's IP address if it groks that header). Cisco routers support transparent proxying. So do many switches. But, (surprisingly enough) Linux can act as a router, and can perform transparent proxying by redirecting TCP connections to local ports. However, we also need to make our web proxy aware of the affect of the redirection, so that it can make connections to the proper origin servers. There are two general ways this works: The first is when your web proxy is not transparent proxy aware. You can use a nifty little daemon called transproxy that sits in front of your web proxy and takes care of all the messy details for you. 
transproxy was written by John Saunders, and is available from or your local metalab mirror. transproxy will not be discussed further in this document. ftp://ftp.nlc.net.au/pub/linux/www/ A cleaner solution is to get a web proxy that is aware of transparent proxying itself. The one we are going to focus on here is squid. Squid is an Open Source caching proxy server for Unix systems. It is available from www.squid-cache.org Alternatively, instead of redirecting the connections to local ports, we could redirect the connections to remote ports. This is discussed in the Transparent Proxy to a Remote Box section. Readers interested in this approach should skip down to that section. Readers interested on doing everything on one box can safely ignore that section. 2.2 Scope of this document This document will focus on squid version 2.4 and Linux kernel version 2.4, the most current stable releases as of this writing (August 2002). It should also work 192 with most of the later 2.3 kernels. If you need information about earlier releases of squid or Linux, you can find some earlier documents at http://users.gurulink.com/transproxy/. Note that this site has moved from it's previous location. If you are using a development kernel or a development version of squid, you are on your own. This document may help you, but YMMV. Note that this document focuses only on HTTP proxing. I get many emails asking about transparent FTP proxying. Squid can't do it. Now, allegedly a program called Frox can. I have not tried this myself, so I cannot say how well it works. You can find it at http://www.hollo32.fsnet.co.uk/frox/. I only focus on squid here, but Apache can also function as a caching proxy server. (If you are not sure which to use, I recommend squid, since it was built from the ground up to be a caching proxy server, Apache's caching proxy features are more of afterthought additions to an already existing system.) If you want use Apache instead of squid: follow all the instructions in this document that pertain to the kernel and iptables rules. Ignore the squid specific sections, and instead look at http://lupo.campus.uniroma2.it/progetti/mod_tproxy/ for source code and instructions for a transparent proxy module for Apache (thanks to Cristiano Paris ([email protected]) for contributing this). 2.3 HTTPS Finally, as far as transparently proxing HTTPS (e.g. secure web pages using SSL, TSL, etc.), you can't do it. Don't even ask. For the explanation, do a search for 'man-in-the-middle attack'. Note that you probably don't really need to transparently proxy HTTPS anyway, since squid can not cache secure pages. 2.4 Proxy Authentication You cannot use Proxy Authentication transparently. See the Squid FAQ for (slightly) more details. 3. Configuring the Kernel First, we need to make sure all the proper options are set in your kernel. If you are using a stock kernel from your distribution, transparent proxying may or may not 193 be enabled. If you are unsure, the best way to tell is to simply skip this section, and if the commands in the next section give you weird errors, it's probably because the kernel wasn't configured properly. If your kernel is not configured for transparent proxying, you will need to recompile. Recompiling a kernel is a complex process (at least at first), and it is beyond the scope of this document. 
If you need help compiling a kernel, please see The Kernel HOWTO The options you need to set in your configuration are as follows (Note: if you prefer modules, some (but not all) of these can be built as modules. Luckily, everything that is not modularizable is probably got in your kernel anyway.) Under General Setup o Networking support o Sysctl support Under Networking Options o Network packet filtering o TCP/IP networking Under Networking Options -> IP: Netfilter Configuration o Connection tracking o IP tables support o Full NAT o REDIRECT target support Under File Systems o /proc filesystem support You must say NO to ``Fast switching'' under Networking Options. Once you have your new kernel up and running, you may need to enable IP forwarding. IP forwarding allows your computer to act as a router. Since this is not what the average user wants to do, it is off by default and must be explicitly enabled at run-time. However, your distribution might do this for you already. To check, do ``cat /proc/sys/net/ipv4/ip_forward''. If you see ``1'' you're good. Otherwise, do ``echo '1' > /proc/sys/net/ipv4/ip_forward''. You will then want to add that command to your appropriate bootup scripts (depending on your 194 distribution, these may live in /etc/rc.d, /etc/init.d, or maybe somewhere else entirely). 4. Setting up squid Now, we need to get squid up and running. Download the latest source tarball from www.squid-cache.org. Make sure you get a STABLE version, not a DEVEL version. The latest as of this writing was squid-2.4.STABLE4.tar.gz. Note that AFAIK, you must have squid-2.4 for linux kernel 2.4. The reason is that the mechanism by which the process determines the original destination address has changed from linux 2.2, and only squid-2.4 has this new code in it. (For those of you who are interested, previously the getsockname() call was hacked to provide the original destination address, but now the call is getsockopt() with a level of SOL_IP and an option of SO_ORIGINAL_DST). Now, untar and gunzip the archive (use ``tar -xzf <filename>''). Run the autoconfiguration script and tell it to include netfilter code (``./configure --enablelinux-netfilter''), compile (``make'') and then install (``make install''). Now, we need to edit the default squid.conf file (installed to /usr/local/squid/etc/squid.conf, unless you changed the defaults). The squid.conf file is heavily commented. In fact, some of the best documentation available for squid is in the squid.conf file. After you get it all up and running, you should go back and reread the whole thing. But for now, let's just get the minimum required. Find the following directives, uncomment them, and change them to the appropriate values: httpd_accel_host virtual httpd_accel_port 80 httpd_accel_with_proxy on httpd_accel_uses_host_header on Next, look at the cache_effective_user and cache_effective_group directives. Unless the default nobody/nogroup has been created on your system (AFAIK, it is not created out of the box on many popular distributions, including RH7.1), you'll either need to create those, or create another username/group for squid to run under. I strongly recommend that you create a username/group of squid/squid and run under that, but you could use any existing user/group if you want. 195 Finally, look at the http_access directive. The default is usually ``http_access deny all''. This will prevent anyone from accessing squid. 
For now, you can change this to ``http_access allow all'', but once it is working, you will probably want to read the directions on ACLs (Access Control Lists), and setup the cache such that only people on your local network (or whatever) can access the cache. This may seem silly, but you should put some kind of restrictions on access to your cache. People behind filtering firewalls (such as porn filters, or filters in nations where speech is not very free) often ``hijack'' onto wide open proxies and eat up your bandwidth. Initialize the cache directories with ``squid -z'' (if this is a not a new installation of squid, you should skip this step). Now, run squid using the RunCache script in the /usr/local/squid/bin/ directory. If it works, you should be able to set your web browser's proxy settings to the IP of the box and port 3128 (unless you changed the default port number) and access squid as a normal proxy. For additional help configuring squid, see the squid FAQ at 5. Setting up iptables (Netfilter) iptables is a new thing for Linux kernel 2.4 that replaces ipchains. If your distribution came with a 2.4 kernel, it probably has iptables already installed. If not, you'll have to download it (and possibly compile it). The homepage is netfilter.samba.org. You make be able to find binary RPMs elsewhere, I haven't looked. For the curious, there is plenty of documentation on the netfilter site. To set up the rules, you will need to know two things, the interface that the to-beproxied requests are coming in on (I'll use eth0 as an example) and the port squid is running on (I'll use the default of 3128 as an example). Now, the magic words for transparent proxying: iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128 You will want to add the above commands to your appropriate bootup script under /etc/rc.d/. Readers upgrading from 2.2 kernels should note that this is the only command needed. 2.2 kernels required two extra commands in order to prevent forwarding loops. The infastructure of netfilter is much nicer, and only this command is needed. 196 6. Transparent Proxy to a Remote Box Now, the question naturally arises, if we can do all this nifty stuff redirecting HTTP connections to local ports, could we do the same thing but to a remote box (e.g., the machine with squid running is not the same machine as iptables is running on). The answer is yes, but it takes a little different magic words. If you only want to redirect to the local box (the normal case), skip this section. For the purposes of example commands, let's assume we have two boxes called squid-box and iptables-box, and that they are on the network local-network. In the commands below, replace these strings with the actual IP addresses or name of your machines and network. I will present two different approaches here. 6.1 First method (simpler, but does not work for some esoteric cases) First, we need to machine that squid will be running on, squid-box. You do not need iptables or any special kernel options on this machine, just squid. You *will*, however, need the 'http_accel' options as described above. (Previous version of this HOWTO suggested that you did not need those options. That was a mistake. Sorry to have confused people...) Now, the machine that iptables will be running on, iptables-box You will need to configure the kernel as described in section 3 above, except that you don't need the REDIRECT target support). Now, for the iptables commands. 
You need three:
iptables -t nat -A PREROUTING -i eth0 -s ! squid-box -p tcp --dport 80 -j DNAT --to squid-box:3128
iptables -t nat -A POSTROUTING -o eth0 -s local-network -d squid-box -j SNAT --to iptables-box
iptables -A FORWARD -s local-network -d squid-box -i eth0 -o eth0 -p tcp --dport 3128 -j ACCEPT
The first one sends the packets to squid-box from iptables-box. The second makes sure that the reply gets sent back through iptables-box, instead of directly to the client (this is very important!). The last one makes sure the iptables-box will forward the appropriate packets to squid-box. It may not be needed. YMMV. Note that we specified '-i eth0' and then '-o eth0', which stands for input interface eth0 and output interface eth0. If your packets are entering and leaving on different interfaces, you will need to adjust the commands accordingly. Add these commands to your appropriate startup scripts under /etc/rc.d/ (Thanks to Giles Coochey for help writing this section).
6.2 Second method (more complicated, but more general)
Our first shot at this works well, but there is a minor drawback in that HTTP/1.0 connections without the Host header do not get handled properly. Connections that are fully or partially HTTP/1.1 compliant work fine. As most modern web browsers send the Host header, this is not a problem for most people. However, some small programs or embedded devices may send only very simple HTTP/1.0 requests. If you want to support these, we'll need to do a little more work. Namely, on iptables-box we'll need the following options enabled in the kernel in addition to what was specified above:
IP: advanced router
IP: policy routing
IP: use netfilter MARK value as routing key
IP: Netfilter Configuration -> Packet mangling
IP: Netfilter Configuration -> MARK target support
You'll also need the iproute2 tools. Your distribution probably already has them installed, but if not, look at ftp://ftp.inr.ac.ru/ip-routing/
You'll want to use the following set of commands on iptables-box:
iptables -t mangle -A PREROUTING -j ACCEPT -p tcp --dport 80 -s squid-box
iptables -t mangle -A PREROUTING -j MARK --set-mark 3 -p tcp --dport 80
ip rule add fwmark 3 table 2
ip route add default via squid-box dev eth1 table 2
Note that the choice of firewall mark (3) and routing table (2) was fairly arbitrary. If you are already using policy routing or firewall marking for some other purpose, make sure you choose unique numbers here. Otherwise, don't worry about it.
Next, squid-box. Use this command, which should look remarkably similar to a command we've seen previously:
iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128
As before, add all of these commands to the appropriate startup scripts. Here is a brief explanation of how this works: in method one, we used Network Address Translation to get the packets to the other box. The result of this is that the packet gets altered. This alteration is what causes some kinds of clients mentioned above to fail. In method two, we use a magic thing called policy routing. The first thing we do is to select the packets we want. Thus, all packets on port 80, except those coming from squid-box itself, are MARKed. Then, when the kernel goes to make a routing decision, the MARKed packets aren't routed using the normal routing table that you access with the ``route'' command, but with a special table. This special table has only one entry, a default gateway to squid-box.
Thus, the packet is sent merrily on it's way without every having been altered. So, even HTTP/1.0 connections can be handled perfectly. (Thanks to Michal Svoboda for suggesting and helping to write this section) 6.3 Method One: What if iptables-box is on a dynamic IP? If the iptables-box is on a dynamic IP address (e.g. a dialup PPP connection, or a DHCP assigned IP address from a cable modem, etc.), then you will want to make a slight change to the above commands. Replace the second command with this one: iptables -t nat -A POSTROUTING -o eth0 -s local-network -d squid-box -j MASQUERADE This change avoids having to specify the IP address of iptables-box in the command. Since it will change often, you'd have to change your commands to reflect it. This will save you a lot of hassle. 7. Transparent Proxy With Bridging Warning, this is really esoteric stuff. If you need it, you'll know. If not, skip this section. Thanks to Lewis Shobbrook ([email protected]) for contributing to this section. 199 If you are trying to setup a transparent proxy on a Linux machine that has been configured as a bridge, you will need to add one additional iptables command to what we had in section 5. Specifically, you need to explicitly allow connections to the machine on port 3128 (or any other port squid is listening on), otherwise the machine will just forward them over to the other interface like a good little bridge. Here's the magic words: iptables -A INPUT -i interface -p tcp -d your_bridge_ip -s local-network --dport 3128 -m state --state NEW,ESTABLISHED -j ACCEPT Replacing interface with the interface that corresponds to your_bridge_ip (typically eth0 or eth1). First time bridge users should also note that you'll probably want to repeat the same command with ``3128'' replaced by ``telnet'' if you want to administer your bridge remotely. 8. Put it all together If everything has gone well so far, go to another machine, change it's gateway to the IP of the box with iptables running on it, and surf away. To make sure that requests are really being forwarded through your proxy instead of straight to the origin server, check the log file /usr/local/squid/logs/access.log 9. Troubleshooting There is one problem that occurs often enough to mention here. If you get the following error: /lib/modules/2.4.2-2/kernel/net/ipv4/netfilter/ip_tables.o init_modules: Device or resource busy Hints: insmod errors can be caused by incorrect module parameters; including invalid IO or IRQ parameters. perhaps iptables or your kernel needs to be upgraded... then you are probably running Red Hat 7.x. The folks at Red Hat, in all their wisdom, decided to load the ipchains module by default on startup. I guess this was for backwards compatibility for those who haven't learned iptables yet. However, the problem is that ipchains and iptables are mutually incompatible. Since ipchains has been secretly loaded by RH, you cannot use iptables commands. To see if this is your problem, do the command ``lsmod'' and look for the module named ``ipchains''. If you see it, that is your 200 problem. The quick fix is to execute the command ``rmmod ipchains'' before you issue any iptables commands. To permanently remove these commands from your startup scripts, the following command should work: ``/sbin/chkconfig --level 2345 ipchains off''. (Thanks to Rasmus Glud for pointing this command out to me). 10. Further Resources Should you still need assistance, you may wish to check the squid FAQ or the squid mailing list at www.squid-cache.org. 
You may also e-mail me at [email protected], and I'll try to answer your questions if time permits (sometimes it does, but sometimes it doesn't). Please, please, please, send the output of ``iptables -t nat -L'' and relevant portions of any configuration files in your e-mail, or else I will probably not be able to help you out much. And please make sure you've read the whole HOWTO before asking a question. Regrettably, even though this document has been translated into many different languages, I can only answer questions asked in English.
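To recap the single-box case described in sections 3 through 5 of the HOWTO above, the three pieces fit together as follows. This is only a condensed restatement of the HOWTO's own examples (eth0 and port 3128 are its example values), not an addition to it:
# 1. allow the box to route (section 3)
echo '1' > /proc/sys/net/ipv4/ip_forward
# 2. squid.conf directives for squid 2.4 (section 4):
#      httpd_accel_host virtual
#      httpd_accel_port 80
#      httpd_accel_with_proxy on
#      httpd_accel_uses_host_header on
# 3. intercept port 80 and hand it to squid (section 5)
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128
A quick test is then to point another machine's default gateway at this box and watch /usr/local/squid/logs/access.log while browsing, as section 8 describes.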
Red Hat Linux 9: Red Hat Linux x86 Installation Guide
Chapter 3. Installing Red Hat Linux
3.22. Firewall Configuration
Red Hat Linux offers firewall protection for enhanced system security. A firewall exists between your computer and the network, and determines which resources on your computer remote users on the network can access. A properly configured firewall can greatly increase the security of your system.
Figure 3-20. Firewall Configuration
Choose the appropriate security level for your system.
High
If you choose High, your system will not accept connections (other than the default settings) that are not explicitly defined by you.
By default, only the following connections are allowed: DNS replies DHCP — so any network interfaces that use DHCP can be properly configured If you choose High, your firewall will not allow the following: Active mode FTP (passive mode FTP, used by default in most clients, should still work) IRC DCC file transfers RealAudio™ Remote X Window System clients If you are connecting your system to the Internet, but do not plan to run a server, this is the safest choice. If additional services are needed, you can choose Customize to allow specific services through the firewall. Note If you select a medium or high firewall to be setup during this installation, network authentication methods (NIS and LDAP) will not work. Medium If you choose Medium, your firewall will not allow remote machines to have access to certain resources on your system. By default, access to the following resources are not allowed: Ports lower than 1023 — the standard reserved ports, used by most system services, such as FTP, SSH, telnet, HTTP, and NIS. The NFS server port (2049) — NFS is disabled for both remote severs and local clients. 205 The local X Window System display for remote X clients. The X Font server port (by default, xfs does not listen on the network; it is disabled in the font server). If you want to allow resources such as RealAudio™ while still blocking access to normal system services, choose Medium. Select Customize to allow specific services through the firewall. Note If you select a medium or high firewall to be setup during this installation, network authentication methods (NIS and LDAP) will not work. No Firewall No firewall provides complete access to your system and does no security checking. Security checking is the disabling of access to certain services. This should only be selected if you are running on a trusted network (not the Internet) or plan to do more firewall configuration later. Choose Customize to add trusted devices or to allow additional incoming services. Trusted Devices Selecting any of the Trusted Devices allows access to your system for all traffic from that device; it is excluded from the firewall rules. For example, if you are running a local network, but are connected to the Internet via a PPP dialup, you can check eth0 and any traffic coming from your local network will be allowed. Selecting eth0 as trusted means all traffic over the Ethernet is allowed, put the ppp0 interface is still firewalled. If you want to restrict traffic on an interface, leave it unchecked. It is not recommended that you make any device that is connected to public networks, such as the Internet, a Trusted Device. Allow Incoming Enabling these options allow the specified services to pass through the firewall. Note, during a workstation installation, the majority of these services are not installed on the system. DHCP 206 If you allow incoming DHCP queries and replies, you allow any network interface that uses DHCP to determine its IP address. DHCP is normally enabled. If DHCP is not enabled, your computer can no longer get an IP address. SSH Secure SHell (SSH) is a suite of tools for logging into and executing commands on a remote machine. If you plan to use SSH tools to access your machine through a firewall, enable this option. You need to have the openssh-server package installed in order to access your machine remotely, using SSH tools. Telnet Telnet is a protocol for logging into remote machines. Telnet communications are unencrypted and provide no security from network snooping. 
Allowing incoming Telnet access is not recommended. If you do want to allow inbound Telnet access, you will need to install the telnetserver package. WWW (HTTP) The HTTP protocol is used by Apache (and by other Web servers) to serve webpages. If you plan on making your Web server publicly available, enable this option. This option is not required for viewing pages locally or for developing webpages. You will need to install the httpd package if you want to serve webpages. Enabling WWW (HTTP) will not open a port for HTTPS. To enable HTTPS, specify it in the Other ports field. Mail (SMTP) If you want to allow incoming mail delivery through your firewall, so that remote hosts can connect directly to your machine to deliver mail, enable this option. You do not need to enable this if you collect your mail from your ISP's server using POP3 or IMAP, or if you use a tool such as fetchmail. Note that an improperly configured SMTP server can allow remote machines to use your server to send spam. FTP The FTP protocol is used to transfer files between machines on a network. If you plan on making your FTP server publicly available, enable this option. You must install the vsftpd package for this option to be useful. 207 Other ports You can allow access to ports which are not listed here, by listing them in the Other ports field. Use the following format: port:protocol. For example, if you want to allow IMAP access through your firewall, you can specify imap:tcp. You can also explicitly specify numeric ports; to allow UDP packets on port 1234 through the firewall, enter 1234:udp. To specify multiple ports, separate them with commas. Tip To change your security level configuration after you have completed the installation, use the Security Level Configuration Tool. Type the redhat-config-securitylevel command in a shell prompt to launch the Security Level Configuration Tool. If you are not root, it will prompt you for the root password to continue. Prev Home Network Configuration More Linux Server Topics Up Next Language Support Selection - Network Diagram - About This Site Chapter 13 Linux FTP Server Setup =========================================== In This Chapter Chapter 13 Linux FTP Server Setup FTP Overview Problems With FTP And Firewalls 208 How To Download And Install The VSFTP Package How To Get VSFTP Started Testing To See If VSFTP Is Running What Is Anonymous FTP? The /etc/vsftpd.conf File FTP Security Issues Example #1: © Peter Harrison, www.linuxhomenetworking.com =========================================== This chapter will show you how to convert your Linux box into an FTP server using the VSFTP package. The RedHat software download site runs on VSFTP. FTP Overview File Transfer Protocol (FTP) is a common method of copying files between computer systems. Two TCP ports are used to do this: 209 FTP Control Channel - TCP Port 21 All commands you send and the ftp server's responses to those commands will go over the control connection, but any data sent back (such as "ls" directory lists or actual file data in either direction) will go over the data connection. FTP Data Channel - TCP Port 20 Used for all data sent between the client and server. Active FTP Active FTP works as follows: o Your client connects to the FTP server by establishing an FTP control connection to port 21 of the server. Your commands such as 'ls' and 'get' are sent over this connection. o Whenever the client requests data over the control connection, the server initiates data transfer connections back to the client. 
The source port of these data transfer connections is always port 20 on the server, and the destination port is a high port on the client.
o Thus the 'ls' listing that you asked for comes back over the "port 20 to high port" connection, not the port 21 control connection.
o FTP active mode data transfer therefore works in a counterintuitive way relative to the TCP standard, as it selects port 20 as its source port (not a random high port > 1024) and connects back to the client on a random high port that has been pre-negotiated on the port 21 control connection.
o Active FTP may fail in cases where the client is protected from the Internet via many to one NAT (masquerading). This is because the firewall will not know which of the many servers behind it should receive the return connection.
Passive FTP
Passive FTP works as follows:
o Your client connects to the FTP server by establishing an FTP control connection to port 21 of the server. Your commands such as 'ls' and 'get' are sent over that connection.
o Whenever the client requests data over the control connection, the client initiates the data transfer connections to the server. The source port of these data transfer connections is always a high port on the client with a destination port of a high port on the server.
o Passive FTP should be viewed as the server never making an active attempt to connect to the client for FTP data transfers.
o Passive FTP works better for clients protected by a firewall as the client always initiates the required connections.
Problems With FTP And Firewalls
FTP frequently fails when the data has to pass through a firewall, as FTP uses a wide range of unpredictable TCP ports and firewalls are designed to limit data flows to predictable TCP ports. There are ways to overcome this, as explained in the following sections. The Appendix has examples of how to configure the iptables Linux firewall to function with both active and passive FTP.
Client Protected By A Firewall Problem
Typically firewalls don't let in any incoming connections at all, and this will frequently cause active FTP not to function. This type of FTP failure has the following symptoms:
o The active ftp connection appears to work when the client initiates an outbound connection to the server on port 21. The connection appears to hang as soon as you do an "ls" or a "dir" or a "get". This is because the firewall is blocking the return connection from the server to the client (from port 20 on the server to a high port on the client).
Solutions
Here are the general firewall rules you'll need to allow FTP clients through a firewall:
Client Protected by Firewall - Required Rules for FTP
Method | Source Address | Source Port | Destination Address | Destination Port | Connection Type
Allow outgoing control connections to server:
Control channel | FTP client/network | High | FTP server** | 21 | New
                | FTP server** | 21 | FTP client/network | High | Established*
Allow the client to establish data channels to remote server:
Active FTP  | FTP server** | 20 | FTP client/network | High | New
            | FTP client/network | High | FTP server** | 20 | Established*
Passive FTP | FTP client/network | High | FTP server** | High | New
            | FTP server** | High | FTP client/network | High | Established*
* Many home based firewall/routers automatically allow traffic for already established connections. This rule may not be necessary in all cases.
** In some cases, you may want to allow all Internet users to have access, not just a specific client, server or network.
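The table above can be translated into iptables terms on a Linux gateway protecting the clients. The following is only an illustrative sketch, not the Appendix examples the chapter refers to; the interface name and client network (eth1, 192.168.1.0/24) are assumptions:
# control channel out to any FTP server, and its replies
iptables -A FORWARD -i eth1 -s 192.168.1.0/24 -p tcp --dport 21 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A FORWARD -o eth1 -d 192.168.1.0/24 -p tcp --sport 21 -m state --state ESTABLISHED -j ACCEPT
# active FTP: the server connects back from port 20 to a high port on the client
iptables -A FORWARD -o eth1 -d 192.168.1.0/24 -p tcp --sport 20 --dport 1024: -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -i eth1 -s 192.168.1.0/24 -p tcp --sport 1024: --dport 20 -m state --state ESTABLISHED -j ACCEPT
# passive FTP: the client opens high-port connections to the server
iptables -A FORWARD -i eth1 -s 192.168.1.0/24 -p tcp --dport 1024: -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A FORWARD -o eth1 -d 192.168.1.0/24 -p tcp --sport 1024: -m state --state ESTABLISHED -j ACCEPT
Where the ip_conntrack_ftp module is available, letting the RELATED state match the data connections is a tidier alternative to opening the high-port ranges this broadly.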
Server Protected By A Firewall Problem
o Typically firewalls don't let any connections come in at all, so an FTP server behind a firewall fails in a way where the active ftp connection from the client doesn't appear to work at all.
Solutions
Here are the general firewall rules you'll need to allow FTP servers through a firewall:
Server Protected by Firewall - Required Rules for FTP
Method | Source Address | Source Port | Destination Address | Destination Port | Connection Type
Allow incoming control connections to server:
Control channel | FTP client/network** | High | FTP server | 21 | New
                | FTP server | 21 | FTP client/network** | High | Established*
Allow server to establish data channel to remote client:
Active FTP  | FTP server | 20 | FTP client/network** | High | New
            | FTP client/network** | High | FTP server | 20 | Established*
Passive FTP | FTP client/network** | High | FTP server | High | New
            | FTP server | High | FTP client/network** | High | Established*
* Many home based firewall/routers automatically allow traffic for already established connections. This rule may not be necessary in all cases.
** In some cases, you may want to allow all Internet users to have access, not just a specific client, server or network.
How To Download And Install The VSFTP Package
As explained previously, RedHat software is installed using RPM packages. In version 8.0 of the operating system, the VSFTP RPM file is named:
vsftpd-1.1.0-1.i386.rpm
Downloading and installing RPMs isn't hard. If you need a refresher, the RPM chapter covers how to do this in detail. Now download the file to a directory such as /tmp and install it using the "rpm" command:
[root@bigboy tmp]# rpm -Uvh vsftpd-1.1.0-1.i386.rpm
Preparing...                ########################################### [100%]
   1:vsftpd                 ########################################### [100%]
[root@bigboy tmp]#
How To Get VSFTP Started
The starting and stopping of VSFTP is controlled by xinetd via the /etc/xinetd.d/vsftpd file. VSFTP is deactivated by default, so you'll have to edit this file to start the program. Make sure the contents look like this; the disable feature must be set to "no" to accept connections.
service ftp
{
        disable         = no
        socket_type     = stream
        wait            = no
        user            = root
        server          = /usr/sbin/vsftpd
        nice            = 10
}
You will then have to restart xinetd for these changes to take effect, using the startup script in the /etc/init.d directory.
[root@aqua tmp]# /etc/init.d/xinetd restart
Stopping xinetd: [ OK ]
Starting xinetd: [ OK ]
[root@aqua tmp]#
Naturally, to disable VSFTP once again, you'll have to edit /etc/xinetd.d/vsftpd, set "disable" to "yes" and restart xinetd.
Testing To See If VSFTP Is Running
You can always test whether the VSFTP process is running by using the netstat -a command, which lists all the TCP and UDP ports on which the server is listening for traffic. The example below shows the expected output; there would be no output at all if VSFTP wasn't running.
[root@bigboy root]# netstat -a | grep ftp
tcp        0      0 *:ftp        *:*        LISTEN
[root@bigboy root]#
What Is Anonymous FTP?
Anonymous FTP is used by web sites that need to exchange files with numerous unknown remote users. Common uses range from downloading software updates and MP3s to uploading diagnostic information for a technical support engineer's attention. Unlike regular FTP, where you log in with a user-specific username, anonymous FTP only requires a username of "anonymous" and your email address for the password. Once logged in to a VSFTP server, you'll automatically have access to only the default anonymous FTP directory /var/ftp and all its subdirectories.
As seen in the chapter on RPMs, using anonymous FTP as a remote user is fairly straight forward. VSFTP can be configured to support user based and or anonymous FTP in its configuration file. The /etc/vsftpd.conf File VSFTP only reads the contents of its /etc/vsftpd.conf configuration file when it starts, so you’ll have to restart xinetd each time you edit the file in order for the changes to take effect. This file uses a number of default settings you need to know. By default, VSFTP runs as an anonymous FTP server. Unless you want any remote user to log into to your default FTP directory using a username of “ananoymous” and a password that’s the same as their email address, I would suggest turning this off. The configuration file’s anonymous_enable instruction can be commented out by using a “#” to disable this feature. You’ll also want to simultaneously enable local users to be able to log in by uncommenting the local_enable instruction. By default VSFTP only allows anonymous FTP downloads to remote users, not uploads from them. Also by default, VSFTP doesn't allow remote users to create directories on your FTP server and it logs FTP access to the /var/log/vsftpd.log log file. The configuration file is fairly straight forward as you can see in the snippet below. Remove/add the "#" at the beginning of the line to "activate/deactivate" the feature on each line. # Allow anonymous FTP? anonymous_enable=YES ... ... 218 # Uncomment this to allow local users to log in. local_enable=YES ... ... # Uncomment this to enable any form of FTP write command. # (Needed even if you want local users to be able to upload files) write_enable=YES ... ... # Uncomment to allow the anonymous FTP user to upload files. This only # has an effect if global write enable is activated. Also, you will # obviously need to create a directory writable by the FTP user. #anon_upload_enable=YES ... ... # Uncomment this if you want the anonymous FTP user to be able to create # new directories. #anon_mkdir_write_enable=YES ... ... # Activate logging of uploads/downloads. xferlog_enable=YES ... ... # You may override where the log file goes if you like. # The default is shown# below. #xferlog_file=/var/log/vsftpd.log FTP Security Issues The /etc/vsftpd.ftpusers File For added security you may restrict FTP access to certain users by adding them to the list of users in this file. Do not delete entries from the default list, it is best to add. 219 Anonymous Upload If you want remote users to write data to your FTP server then it is recommended you create a write-only directory within /var/ftp/pub. This will allow your users to upload, but not access other files uploaded by other users. Here are the commands to do this: [root@bigboy tmp]# mkdir /var/ftp/pub/upload [root@bigboy tmp]# chmod 733 /var/ftp/pub/upload FTP Greeting Banner Change the default greeting banner in /etc/vsftpd.conf to make it harder for malicious users to determine the type of system you have. ftpd_banner= New Banner Here Using SCP As Secure Alternative To FTP One of the disadvantages of FTP is that it does not encrypt your username and password. This could make your user account vulnerable to an unauthorized attack from a person eavesdropping on the network connection. Secure Copy (SCP) provides encryption and could be considered as an alternative to FTP for trusted users. SCP however does not support anonymous services, a feature that FTP does. 
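As a concrete illustration of the SCP alternative just mentioned, copying a file to and from the server looks like the commands below. The host bigboy, the user1 account and the /home/ftp-docs directory are borrowed from Example #1 that follows, somefile is just a placeholder, and this assumes an SSH server is running on bigboy:
[user1@smallfry tmp]$ scp testfile user1@bigboy:/tmp/
[user1@smallfry tmp]$ scp user1@bigboy:/home/ftp-docs/somefile .
Both transfers are encrypted, including the password exchange, which is the advantage over FTP described above.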
Example #1: FTP Users With Only Read Access To A Shared Directory
In this example, anonymous FTP is not desired, but a group of trusted users need to have read only access to a directory for downloading files. Here are the steps:
o Enable FTP. Edit /etc/xinetd.d/vsftpd and set the disable value to "no".
o Disable anonymous FTP. Comment out the anonymous_enable line in the /etc/vsftpd.conf file like this:
# Allow anonymous FTP?
# anonymous_enable=YES
o Enable individual logins by making sure you have the local_enable line uncommented in the /etc/vsftpd.conf file like this:
# Uncomment this to allow local users to log in.
local_enable=YES
o Create a user group and shared directory. In this case we'll use "/home/ftp-docs" and a user group name of "ftp-users" for the remote users.
[root@bigboy tmp]# groupadd ftp-users
[root@bigboy tmp]# mkdir /home/ftp-docs
o Make the directory accessible to the ftp-users group.
[root@bigboy tmp]# chmod 750 /home/ftp-docs
[root@bigboy tmp]# chown root:ftp-users /home/ftp-docs
o Add users, and make their default directory /home/ftp-docs
[root@bigboy tmp]# useradd -g ftp-users -d /home/ftp-docs user1
[root@bigboy tmp]# useradd -g ftp-users -d /home/ftp-docs user2
[root@bigboy tmp]# useradd -g ftp-users -d /home/ftp-docs user3
[root@bigboy tmp]# useradd -g ftp-users -d /home/ftp-docs user4
[root@bigboy tmp]# passwd user1
[root@bigboy tmp]# passwd user2
[root@bigboy tmp]# passwd user3
[root@bigboy tmp]# passwd user4
o Copy files to be downloaded by your users into the /home/ftp-docs directory.
o Change the permissions of the files in the /home/ftp-docs directory for read only access by the group.
[root@bigboy tmp]# chown root:ftp-users /home/ftp-docs/*
[root@bigboy tmp]# chmod 740 /home/ftp-docs/*
Users should now be able to log in via ftp to the server using their new user names and passwords. If you absolutely don't want any FTP users to be able to write to any directory, then you should comment out the write_enable line in your /etc/vsftpd.conf file like this:
#write_enable=YES
o Restart vsftp for the configuration file changes to take effect.
[root@bigboy tmp]# /etc/init.d/xinetd restart
Stopping xinetd: [ OK ]
Starting xinetd: [ OK ]
[root@bigboy tmp]#
Sample Login Session To Test Functionality
o Check for the presence of a test file on the ftp client server.
[root@smallfry tmp]# ll
total 1
-rw-r--r-- 1 root root 0 Jan 4 09:08 testfile
[root@smallfry tmp]#
o Connect to bigboy via FTP.
[root@smallfry tmp]# ftp 192.168.1.100
Connected to 192.168.1.100 (192.168.1.100).
220 ready, dude (vsFTPd 1.1.0: beat me, break me)
Name (192.168.1.100:root): user1
331 Please specify the password.
Password:
230 Login successful. Have fun.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp>
o As expected, we can't do an upload transfer of "testfile" to bigboy.
ftp> put testfile
local: testfile remote: testfile
227 Entering Passive Mode (192,168,1,100,181,210)
553 Could not create file.
ftp>
o We can view and download a copy of the VSFTP RPM.
ftp> ls
227 Entering Passive Mode (192,168,1,100,35,173)
150 Here comes the directory listing.
-rwxr----- 1 0 502 76288 Jan 04 17:06 vsftpd-1.1.0-1.i386.rpm
226 Directory send OK.
ftp> get vsftpd-1.1.0-1.i386.rpm vsftpd-1.1.0-1.i386.rpm.tmp
local: vsftpd-1.1.0-1.i386.rpm.tmp remote: vsftpd-1.1.0-1.i386.rpm
227 Entering Passive Mode (192,168,1,100,44,156)
150 Opening BINARY mode data connection for vsftpd-1.1.0-1.i386.rpm (76288 bytes).
226 File send OK.
76288 bytes received in 0.499 secs (1.5e+02 Kbytes/sec) ftp> exit 221 Goodbye. [root@smallfry tmp]# o As expected, we can't do anonymous ftp. 223 [root@smallfry tmp]# ftp 192.168.1.100 Connected to 192.168.1.100 (192.168.1.100). 220 ready, dude (vsFTPd 1.1.0: beat me, break me) Name (192.168.1.100:root): anonymous 331 Please specify the password. Password: 530 Login incorrect. Login failed. ftp> quit 221 Goodbye. [root@smallfry tmp]# 224 Return to Linux Home Page. Configuring Telnet/FTP to login as root (Linux) by Jeff Hunter, Sr. Database Administrator Contents 1. Red Hat Enterprise Linux: RHEL3 / RHEL4 2. Red Hat (Fedora Core 1 / Core 2) 3. Red Hat (Release 7.x - 8.x) Red Hat Enterprise Linux: RHEL3 / RHEL4 Enabling Telnet and FTP Services Linux is configured to run the Telnet and FTP server, but by default, these services are not enabled. To enable the telnet service, login to the server as the root user account and run the following commands: # chkconfig telnet on # service xinetd reload Reloading configuration: [ OK ] Starting with the Red Hat Enterprise Linux 3.0 release (and in CentOS Enterprise Linux), the FTP server (wu-ftpd) is no longer available with xinetd. It has been replaced with vsftp and can be started from /etc/init.d/vsftpd as in the following: # /etc/init.d/vsftpd start Starting vsftpd for vsftpd: [ OK ] If you want the vsftpd service to start and stop when recycling (rebooting) the machine, you can create the following symbolic links: # ln -s /etc/init.d/vsftpd /etc/rc3.d/S56vsftpd # ln -s /etc/init.d/vsftpd /etc/rc4.d/S56vsftpd 225 # ln -s /etc/init.d/vsftpd /etc/rc5.d/S56vsftpd Allowing Root Logins to Telnet and FTP Services Now before getting into the details of how to configure Red Hat Linux for root logins, keep in mind that this is VERY BAD security. Make sure that you NEVER configure your production servers for this type of login. Configure Telnet for root logins Simply edit the file /etc/securetty and add the following to the end of the file: pts/0 pts/1 pts/2 pts/3 pts/4 pts/5 pts/6 pts/7 pts/8 pts/9 This will allow up to 10 telnet sessions to the server as root. Configure FTP for root logins Edit the files /etc/vsftpd.ftpusers and /etc/vsftpd.user_list and remove the 'root' line from each file. Red Hat (Fedora Core 1 / Core 2) Enabling Telnet and FTP Services Linux is configured to run the Telnet and FTP server, but by default, these services are not enabled. To enable the telnet these service, login to the server as the root userid and edit the files: /etc/xinetd.d/telnet 226 In this file, find the line for disable and change it from the value "yes" to "no". After changing the above value(s), you will need to restart the xinetd deamon. As the root userid, type the following command: % /etc/init.d/xinetd reload Starting with the Fedora Core 1 release, the FTP server (wu-ftpd) is no longer available with xinetd. It has been replaced with vsftp and can be started from /etc/init.d/vsftpd as in the following: # /etc/init.d/vsftpd start If you want the vsftpd service to start and stop when recycling the machine, you can create the following symbolic links: # ln -s /etc/init.d/vsftpd /etc/rc3.d/S56vsftpd # ln -s /etc/init.d/vsftpd /etc/rc4.d/S56vsftpd # ln -s /etc/init.d/vsftpd /etc/rc5.d/S56vsftpd Allowing Root Logins to Telnet and FTP Services Now before getting into the details of how to configure Red Hat Linux for root logins, keep in mind that this is VERY BAD security. Make sure that you NEVER configure your production servers for this type of login. 
Configure Telnet for root logins
Simply edit the file /etc/securetty and add the following to the end of the file:
pts/0
pts/1
pts/2
pts/3
pts/4
pts/5
pts/6
pts/7
pts/8
pts/9
This will allow up to 10 telnet sessions to the server as root.
Configure FTP for root logins
Edit the files /etc/vsftpd.ftpusers and /etc/vsftpd.user_list and remove the 'root' line from each file.
Red Hat (Release 7.x - 8.x)
Enabling Telnet and FTP Services
Linux is configured to run the Telnet and FTP server, but by default, these services are not enabled. To enable these services, login to the server as the root userid and edit the files:
/etc/xinetd.d/telnet
/etc/xinetd.d/wu-ftpd
In both files, find the line for disable and change it from the value "yes" to "no". After changing the above values, you will need to restart the xinetd daemon. As the root userid, type the following command:
# /etc/init.d/xinetd reload
Allowing Root Logins to Telnet and FTP Services
Before getting into the details of how to configure Red Hat Linux for root logins, keep in mind that this is VERY BAD security. Make sure that you NEVER configure your production servers for this type of login.
Configure Telnet for root logins
Simply edit the file /etc/securetty and add the following to the end of the file:
pts/0
pts/1
pts/2
pts/3
pts/4
pts/5
pts/6
pts/7
pts/8
pts/9
This will allow up to 10 telnet sessions to the server as root.
Configure FTP for root logins
First edit the file /etc/ftpaccess and comment out the 'deny-uid' and 'deny-gid' lines. Also, don't forget to remove the 'root' line from /etc/ftpusers.
Quick HOWTO: Ch15: Linux FTP Server Setup
From Linux Home Networking
Contents
1 Sponsors
2 Introduction
3 FTP Overview
  3.1 Types of FTP
    3.1.1 Figure 15-1 Active And Passive FTP Illustrated
    3.1.2 Active FTP
    3.1.3 Passive FTP
    3.1.4 Regular FTP
    3.1.5 Anonymous FTP
4 Problems With FTP And Firewalls
  4.1 Client Protected By A Firewall Problem
    4.1.1 Table 15-1 Client Protected by Firewall - Required Rules for FTP
  4.2 Server Protected By A Firewall Problem
    4.2.1 Table 15-2 Rules needed to allow FTP servers through a firewall.
5 How To Download And Install VSFTPD
6 How To Get VSFTPD Started
7 Testing the Status of VSFTPD
8 The vsftpd.conf File
  8.1 Other vsftpd.conf Options
9 FTP Security Issues
  9.1 The /etc/vsftpd.ftpusers File
  9.2 Anonymous Upload
  9.3 FTP Greeting Banner
  9.4 Using SCP As Secure Alternative To FTP
10 Troubleshooting FTP
11 Tutorial
  11.1 FTP Users with Only Read Access to a Shared Directory
  11.2 Sample Login Session To Test Functionality
12 Conclusion
Sponsors
Introduction
The File Transfer Protocol (FTP) is one of the most common means of copying files between servers over the Internet. Most web-based download sites use the built-in FTP capabilities of web browsers, and therefore most server-oriented operating systems usually include an FTP server application as part of the software suite. Linux is no exception.
This chapter will show you how to convert your Linux box into an FTP server using the default Very Secure FTP Daemon (VSFTPD) package included in Fedora.
FTP Overview
FTP relies on a pair of TCP ports to get the job done. It operates in two connection channels, as I'll explain:
FTP Control Channel, TCP Port 21: All commands you send and the FTP server's responses to those commands go over the control connection, but any data sent back (such as "ls" directory lists or actual file data in either direction) goes over the data connection.
FTP Data Channel, TCP Port 20: This port is used for all subsequent data transfers between the client and server.
In addition to these channels, there are several varieties of FTP.
Types of FTP
From a networking perspective, the two main types of FTP are active and passive. In active FTP, the FTP server initiates a data transfer connection back to the client. For passive FTP, the connection is initiated from the FTP client. These are illustrated in Figure 15-1.
Figure 15-1 Active And Passive FTP Illustrated
From a user management perspective there are also two types of FTP: regular FTP, in which files are transferred using the username and password of a regular user of the FTP server, and anonymous FTP, in which general access is provided to the FTP server using a well-known universal login method. Take a closer look at each type.
Active FTP
The sequence of events for active FTP is:
1. Your client connects to the FTP server by establishing an FTP control connection to port 21 of the server. Your commands such as 'ls' and 'get' are sent over this connection.
2. Whenever the client requests data over the control connection, the server initiates data transfer connections back to the client. The source port of these data transfer connections is always port 20 on the server, and the destination port is a high port (greater than 1024) on the client.
3. Thus the ls listing that you asked for comes back over the port 20 to high port connection, not the port 21 control connection.
FTP active mode therefore transfers data in a way that is counterintuitive relative to the TCP standard, as it selects port 20 as its source port (not a random high port greater than 1024) and connects back to the client on a random high port that has been pre-negotiated on the port 21 control connection.
Active FTP may fail in cases where the client is protected from the Internet via many-to-one NAT (masquerading). This is because the firewall will not know which of the many computers behind it should receive the return connection.
Passive FTP
Passive FTP works differently:
1.
Your client connects to the FTP server by establishing an FTP control connection to port 21 of the server. Your commands such as ls and get are sent over that connection. 2. Whenever the client requests data over the control connection, the client initiates the data transfer connections to the server. The source port of these data transfer connections is always a high port on the client with a destination port of a high port on the server. Passive FTP should be viewed as the server never making an active attempt to connect to the client for FTP data transfers. Because client always initiates the required connections, passive FTP works better for clients protected by a firewall. As Windows defaults to active FTP, and Linux defaults to passive, you'll probably have to accommodate both forms when deciding upon a security policy for your FTP server. Regular FTP By default, the VSFTPD package allows regular Linux users to copy files to and from their home directories with an FTP client using their Linux usernames and passwords as their login credentials. VSFTPD also has the option of allowing this type of access to only a group of Linux users, enabling you to restrict the addition of new files to your system to authorized personnel. The disadvantage of regular FTP is that it isn't suitable for general download distribution of software as everyone either has to get a unique Linux user account or has to use a shared username and password. Anonymous FTP allows you to avoid this difficulty. Anonymous FTP Anonymous FTP is the choice of Web sites that need to exchange files with numerous unknown remote users. Common uses include downloading software updates and MP3s and uploading diagnostic information for a technical support engineers' attention. Unlike regular FTP where you login with a preconfigured Linux username and password, anonymous FTP requires only a username of anonymous and your email address for the password. Once logged in to a 234 VSFTPD server, you automatically have access to only the default anonymous FTP directory (/var/ftp in the case of VSFTPD) and all its subdirectories. As seen in Chapter 6, "Installing Linux Software", using anonymous FTP as a remote user is fairly straight forward. VSFTPD can be configured to support user-based and or anonymous FTP in its configuration file which you'll see later. Problems With FTP And Firewalls FTP frequently fails when the data has to pass through a firewall, because firewalls are designed to limit data flows to predictable TCP ports and FTP uses a wide range of unpredictable TCP ports. You have a choice of methods to overcome this. Note: The Appendix II, "Codes, Scripts, and Configurations", contains examples of how to configure the VSFTPD Linux firewall to function with both active and passive FTP. Client Protected By A Firewall Problem Typically firewalls don't allow any incoming connections at all, which frequently blocks active FTP from functioning. With this type of FTP failure, the active FTP connection appears to work when the client initiates an outbound connection to the server on port 21. The connection then appears to hang, however, as soon as you use the ls, dir, or get commands. The reason is that the firewall is blocking the return connection from the server to the client (from port 20 on the server to a high port on the client). If a firewall allows all outbound connections to the Internet, then passive FTP clients behind a firewall will usually work correctly as the clients initiate all the FTP connections. 
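Before adjusting any firewall rules (the Solution table follows below), it is often worth confirming the diagnosis from the client side by forcing passive mode. With the stock command-line ftp client this looks roughly like the following; the -p option and the passive toggle are assumptions about your particular client:
[root@smallfry tmp]# ftp -p 192.168.1.100
or, from within an existing session:
ftp> passive
If directory listings and downloads start working once the session is passive, it is the firewall's handling of active FTP's return connection that is blocking you.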
Solution
Table 15-1 shows the general rules you'll need to allow FTP clients through a firewall:
Table 15-1 Client Protected by Firewall - Required Rules for FTP
Method | Source Address | Source Port | Destination Address | Destination Port | Connection Type
Allow outgoing control connections to server:
Control channel | FTP client/network | High (1) | FTP server (2) | 21 | New
Control channel | FTP server (2) | 21 | FTP client/network | High | Established (3)
Allow the client to establish data channels to remote server:
Active FTP | FTP server (2) | 20 | FTP client/network | High | New
Active FTP | FTP client/network | High | FTP server (2) | 20 | Established (3)
Passive FTP | FTP client/network | High | FTP server (2) | High | New
Passive FTP | FTP server (2) | High | FTP client/network | High | Established (3)
(1) Greater than 1024.
(2) In some cases, you may want to allow all Internet users to have access, not just a specific client server or network.
(3) Many home-based firewall/routers automatically allow traffic for already established connections. This rule may not be necessary in all cases.
Server Protected By A Firewall Problem
Typically firewalls don't let any connections come in at all. When an incorrectly configured firewall protects an FTP server, the FTP connection from the client doesn't appear to work at all, for either active or passive FTP.
Solution
Table 15-2 Rules needed to allow FTP servers through a firewall
Method | Source Address | Source Port | Destination Address | Destination Port | Connection Type
Allow incoming control connections to server:
Control channel | FTP client/network (2) | High (1) | FTP server | 21 | New
Control channel | FTP server | 21 | FTP client/network (2) | High | Established (3)
Allow server to establish data channel to remote client:
Active FTP | FTP server | 20 | FTP client/network (2) | High | New
Active FTP | FTP client/network (2) | High | FTP server | 20 | Established (3)
Passive FTP | FTP client/network (2) | High | FTP server | High | New
Passive FTP | FTP server | High | FTP client/network (2) | High | Established (3)
(1) Greater than 1024.
(2) In some cases, you may want to allow all Internet users to have access, not just a specific client server or network.
(3) Many home-based firewall/routers automatically allow traffic for already established connections. This rule may not be necessary in all cases.
How To Download And Install VSFTPD
Most Linux software products are available in a precompiled package format. Downloading and installing packages isn't hard. If you need a refresher, Chapter 6, "Installing Linux Software", covers how to do this in detail.
It is best to use the latest version of VSFTPD. When searching for the file, remember that the VSFTPD package's filename usually starts with the word vsftpd followed by a version number, as in vsftpd-1.2.1-5.i386.rpm for Redhat/Fedora or vsftpd_2.0.4-0ubuntu4_i386.deb for Ubuntu.
How To Get VSFTPD Started
With Fedora, Redhat, Ubuntu and Debian you can start, stop, or restart VSFTPD after booting by using these commands:
[root@bigboy tmp]# /etc/init.d/vsftpd start
[root@bigboy tmp]# /etc/init.d/vsftpd stop
[root@bigboy tmp]# /etc/init.d/vsftpd restart
With Redhat / Fedora, to configure VSFTPD to start at boot you can use the chkconfig command.
[root@bigboy tmp]# chkconfig vsftpd on
With Ubuntu / Debian the sysv-rc-conf command can be used like this:
root@u-bigboy:/tmp# sysv-rc-conf vsftpd on
Note: In RedHat Linux version 8.0 and earlier, VSFTPD operation is controlled by the xinetd process, which is covered in Chapter 16, "Telnet, TFTP, and xinetd". You can find a full description of how to configure these versions of Linux for VSFTPD in Appendix III, "Fedora Version Differences."
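The rules in Tables 15-1 and 15-2 can be expressed as iptables statements on the FTP server itself. This is only a minimal sketch, assuming a 2.4/2.6-era kernel with the ip_conntrack_ftp helper available and a restrictive default policy; the chapter's appendix note above points to the author's own firewall examples, so adapt this to your environment.
# Load the FTP connection-tracking helper so data channels are matched as RELATED
modprobe ip_conntrack_ftp
# Control channel: allow new and established connections to TCP port 21
iptables -A INPUT  -p tcp --dport 21 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp --sport 21 -m state --state ESTABLISHED -j ACCEPT
# Active FTP: data connections the server opens from port 20
iptables -A OUTPUT -p tcp --sport 20 -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT  -p tcp --dport 20 -m state --state ESTABLISHED -j ACCEPT
# Passive FTP: high-port data connections, recognized as RELATED by the helper
iptables -A INPUT  -p tcp --dport 1024: -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -p tcp --sport 1024: -m state --state ESTABLISHED -j ACCEPT
With the helper loaded, the RELATED state matches the data connections that FTP negotiates over the control channel, so the high ports do not have to be opened unconditionally.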
238 Testing the Status of VSFTPD You can always test whether the VSFTPD process is running by using the netstat a command which lists all the TCP and UDP ports on which the server is listening for traffic. This example shows the expected output. [root@bigboy root]# netstat -a | grep ftp tcp 0 0 *:ftp *:* LISTEN [root@bigboy root]# If VSFTPD wasn't running, there would be no output at all. The vsftpd.conf File VSFTPD only reads the contents of its vsftpd.conf configuration file only when it starts, so you'll have to restart VSFTPD each time you edit the file in order for the changes to take effect. The file may be located in either the /etc or the /etc/vsftpd directories depending on your Linux distribution. This file uses a number of default settings you need to know about. VSFTPD runs as an anonymous FTP server. Unless you want any remote user to log into to your default FTP directory using a username of anonymous and a password that's the same as their email address, I would suggest turning this off. The configuration file's anonymous_enable directive can be set to no to disable this feature. You'll also need to simultaneously enable local users to be able to log in by removing the comment symbol (#) before the local_enable instruction. If you enable anonymous FTP with VSFTPD, remember to define the root directory that visitors will visit. This is done with the anon_root directive. anon_root=/data/directory VSFTPD allows only anonymous FTP downloads to remote users, not uploads from them. This can be changed by modifying the anon_upload_enable directive shown later. VSFTPD doesn't allow anonymous users to create directories on your FTP server. You can change this by modifying the anon_mkdir_write_enable directive. VSFTPD logs FTP access to the /var/log/vsftpd.log log file. You can change this by modifying the xferlog_file directive. 239 By default VSFTPD expects files for anonymous FTP to be placed in the /var/ftp directory. You can change this by modifying the anon_root directive. There is always the risk with anonymous FTP that users will discover a way to write files to your anonymous FTP directory. You run the risk of filling up your /var partition if you use the default setting. It is best to make the anonymous FTP directory reside in its own dedicated partition. The configuration file is fairly straight forward as you can see in the snippet below where we enable anonymous FTP and individual accounts simultaneously. # Allow anonymous FTP? anonymous_enable=YES ... # The directory which vsftpd will try to change # into after an anonymous login. (Default = /var/ftp) anon_root=/data/directory ... # Uncomment this to allow local users to log in. local_enable=YES ... # Uncomment this to enable any form of FTP write command. # (Needed even if you want local users to be able to upload files) write_enable=YES ... # Uncomment to allow the anonymous FTP user to upload files. This only # has an effect if global write enable is activated. Also, you will # obviously need to create a directory writable by the FTP user. #anon_upload_enable=YES ... # Uncomment this if you want the anonymous FTP user to be able to create 240 # new directories. #anon_mkdir_write_enable=YES ... # Activate logging of uploads/downloads. xferlog_enable=YES ... # You may override where the log file goes if you like. # The default is shown below. xferlog_file=/var/log/vsftpd.log ... To activate or deactivate a feature, remove or add the # at the beginning of the appropriate line. 
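Pulling these defaults together, here is a minimal sketch of the vsftpd.conf settings for a download-only anonymous server. The anon_root path is an assumption; as noted above, it is best placed on its own partition.
# Anonymous, download-only server (sketch)
anonymous_enable=YES
local_enable=NO
write_enable=NO
anon_upload_enable=NO
anon_mkdir_write_enable=NO
anon_root=/ftp-data/pub
xferlog_enable=YES
xferlog_file=/var/log/vsftpd.log
Remember to restart VSFTPD after editing the file so that the changes are read.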
Other vsftpd.conf Options There are many other options you can add to this file: Limiting the maximum number of client connections (max_clients) Limiting the number of connections by source IP address (max_per_ip) The maximum rate of data transfer per anonymous login. (anon_max_rate) The maximum rate of data transfer per non-anonymous login. (local_max_rate) Descriptions on this and more can be found in the vsftpd.conf man pages. FTP Security Issues FTP has a number of security drawbacks, but you can overcome them in some cases. You can restrict an individual Linux user's access to non-anonymous FTP, and you can change the configuration to not display the FTP server's software version information, but unfortunately, though very convenient, FTP logins and data transfers are not encrypted. 241 The /etc/vsftpd.ftpusers File For added security, you may restrict FTP access to certain users by adding them to the list of users in the /etc/vsftpd.ftpusers file. The VSFTPD package creates this file with a number of entries for privileged users that normally shouldn't have FTP access. As FTP doesn't encrypt passwords, thereby increasing the risk of data or passwords being compromised, it is a good idea to let these entries remain and add new entries for additional security. Anonymous Upload If you want remote users to write data to your FTP server, then you should create a write-only directory within /var/ftp/pub. This will allow your users to upload but not access other files uploaded by other users. The commands you need are: [root@bigboy tmp]# mkdir /var/ftp/pub/upload [root@bigboy tmp]# chmod 722 /var/ftp/pub/upload FTP Greeting Banner Change the default greeting banner in the vsftpd.conf file to make it harder for malicious users to determine the type of system you have. The directive in this file is. ftpd_banner= New Banner Here Using SCP As Secure Alternative To FTP One of the disadvantages of FTP is that it does not encrypt your username and password. This could make your user account vulnerable to an unauthorized attack from a person eavesdropping on the network connection. Secure Copy (SCP) and Secure FTP (SFTP) provide encryption and could be considered as an alternative to FTP for trusted users. SCP does not support anonymous services, however, a feature that FTP does support. 242 Troubleshooting FTP You should always test your FTP installation by attempting to use an FTP client to log in to your FTP server to transfer sample files. The most common sources of day-to-day failures are incorrect usernames and passwords. Initial setup failures could be caused by firewalls along the path between the client and server blocking some or all types of FTP traffic. Typical symptoms of this are either connection timeouts or the ability to use the ls command to view the contents of a directory without the ability to either upload or download files. Follow the firewall rule guidelines to help overcome this problem. Connection problems could also be the result of typical network issues outlined in Chapter 4, "Simple Network Troubleshooting". Tutorial FTP has many uses, one of which is allowing numerous unknown users to download files. You have to be careful, because you run the risk of accidentally allowing unknown persons to upload files to your server. This sort of unintended activity can quickly fill up your hard drive with illegal software, images, and music for the world to download, which in turn can clog your server's Internet access and drive up your bandwidth charges. 
FTP Users with Only Read Access to a Shared Directory In this example, anonymous FTP is not desired, but a group of trusted users need to have read only access to a directory for downloading files. Here are the steps: 1) Disable anonymous FTP. Comment out the anonymous_enable line in the vsftpd.conf file like this: # Allow anonymous FTP? anonymous_enable=NO 2) Enable individual logins by making sure you have the local_enable line uncommented in the vsftpd.conf file like this: 243 # Uncomment this to allow local users to log in. local_enable=YES 3) Start VSFTP. [root@bigboy tmp]# service vsftpd start 4) Create a user group and shared directory. In this case, use /home/ftp-users and a user group name of ftp-users for the remote users [root@bigboy tmp]# groupadd ftp-users [root@bigboy tmp]# mkdir /home/ftp-docs 5) Make the directory accessible to the ftp-users group. [root@bigboy tmp]# chmod 750 /home/ftp-docs [root@bigboy tmp]# chown root:ftp-users /home/ftp-docs 6) Add users, and make their default directory /home/ftp-docs [root@bigboy tmp]# useradd -g ftp-users -d /home/ftp-docs user1 [root@bigboy tmp]# useradd -g ftp-users -d /home/ftp-docs user2 [root@bigboy tmp]# useradd -g ftp-users -d /home/ftp-docs user3 [root@bigboy tmp]# useradd -g ftp-users -d /home/ftp-docs user4 [root@bigboy tmp]# passwd user1 [root@bigboy tmp]# passwd user2 [root@bigboy tmp]# passwd user3 [root@bigboy tmp]# passwd user4 7) Copy files to be downloaded by your users into the /home/ftp-docs directory 8) Change the permissions of the files in the /home/ftp-docs directory for read only access by the group [root@bigboy tmp]# chown root:ftp-users /home/ftp-docs/* [root@bigboy tmp]# chmod 740 /home/ftp-docs/* Users should now be able to log in via FTP to the server using their new usernames and passwords. If you absolutely don't want any FTP users to be able to write to any directory, then you should set the write_enable line in your vsftpd.conf file to no: write_enable = NO 244 Remember, you must restart VSFTPD for the configuration file changes to take effect. Sample Login Session To Test Functionality Here is a simple test procedure you can use to make sure everything is working correctly: 1) Check for the presence of a test file on the ftp client server. [root@smallfry tmp]# ll total 1 -rw-r--r-- 1 root root 0 Jan 4 09:08 testfile [root@smallfry tmp]# 2) Connect to bigboy via FTP [root@smallfry tmp]# ftp 192.168.1.100 Connected to 192.168.1.100 (192.168.1.100) 220 ready, dude (vsFTPd 1.1.0: beat me, break me) Name (192.168.1.100:root): user1 331 Please specify the password. Password: 230 Login successful. Have fun. Remote system type is UNIX. Using binary mode to transfer files. ftp> As expected, we can't do an upload transfer of testfile to bigboy. ftp> put testfile local: testfile remote: testfile 227 Entering Passive Mode (192,168,1,100,181,210) 553 Could not create file. ftp> 245 But we can view and download a copy of the VSFTPD RPM located on the FTP server bigboy. ftp> ls 227 Entering Passive Mode (192,168,1,100,35,173) 150 Here comes the directory listing. -rwxr----- 1 0 502 76288 Jan 04 17:06 vsftpd-1.1.0-1.i386.rpm 226 Directory send OK. ftp> get vsftpd-1.1.0-1.i386.rpm vsftpd-1.1.0-1.i386.rpm.tmp local: vsftpd-1.1.0-1.i386.rpm.tmp remote: vsftpd-1.1.01.i386.rpm 227 Entering Passive Mode (192,168,1,100,44,156) 150 Opening BINARY mode data connection for vsftpd-1.1.01.i386.rpm (76288 bytes). 226 File send OK. 76288 bytes received in 0.499 secs (1.5e+02 Kbytes/sec) ftp> exit 221 Goodbye. 
[root@smallfry tmp]#
As expected, anonymous FTP fails.
[root@smallfry tmp]# ftp 192.168.1.100
Connected to 192.168.1.100 (192.168.1.100)
220 ready, dude (vsFTPd 1.1.0: beat me, break me)
Name (192.168.1.100:root): anonymous
331 Please specify the password.
Password:
530 Login incorrect.
Login failed.
ftp> quit
221 Goodbye.
[root@smallfry tmp]#
Now that testing is complete, you can make this a regular part of your FTP server's operation.
Conclusion
FTP is a very useful software application that can be of enormous benefit to a Web site or to collaborative computing in which files need to be shared between business partners. Although insecure, it is universally accessible, because FTP clients are a part of all operating systems and Web browsers. If data encryption security is of great importance to you, then you should probably consider SCP as a possible alternative. You can find more information on it in Chapter 17, "Secure Remote Logins and File Copying".
Retrieved from "http://www.linuxhomenetworking.com/wiki/index.php/Quick_HOWTO_:_Ch15_:_Linux_FTP_Server_Setup"
Red Hat Enterprise Linux 3: Reference Guide, Chapter 11. FTP
11.5. vsftpd Configuration Options
Although vsftpd may not offer the level of customization other widely available FTP servers have, it offers enough options to fill most administrators' needs. The fact that it is not overly feature-laden limits configuration and programmatic errors.
All configuration of vsftpd is handled by its configuration file, /etc/vsftpd/vsftpd.conf. Each directive is on its own line within the file and follows this format:
<directive>=<value>
For each directive, replace <directive> with a valid directive and <value> with a valid value.
Important: There must not be any spaces between the <directive>, equal symbol, and the <value> in a directive. Comment lines must be preceded by a hash mark (#) and are ignored by the daemon.
For a complete list of all directives available, refer to the man page for vsftpd.conf.
Important: For an overview of ways to secure vsftpd, refer to the chapter titled Server Security in the Red Hat Enterprise Linux Security Guide.
The following is a list of some of the more important directives within /etc/vsftpd/vsftpd.conf. All directives not explicitly found within vsftpd's configuration file are set to their default value.
11.5.1. Daemon Options
The following is a list of directives which control the overall behavior of the vsftpd daemon.
listen — When enabled, vsftpd runs in standalone mode. Red Hat Enterprise Linux sets this value to YES. This directive cannot be used in conjunction with the listen_ipv6 directive. The default value is NO.
listen_ipv6 — When enabled, vsftpd runs in standalone mode, but listens only to IPv6 sockets. This directive cannot be used in conjunction with the listen directive. The default value is NO.
session_support — When enabled, vsftpd attempts to maintain login sessions for each user through Pluggable Authentication Modules (PAM). Refer to Chapter 15 Pluggable Authentication Modules (PAM) for more information about PAM. If session logging is not necessary, disabling this option allows vsftpd to run with fewer processes and lower privileges. The default value is YES.
11.5.2.
Log In Options and Access Controls The following is a list of directives which control the login behavior and access control mechanisms. — When enabled, anonymous users are allowed to log in. The usernames anonymous and ftp are accepted. anonymous_enable The default value is YES. Refer to Section 11.5.3 Anonymous User Options for a list of directives affecting anonymous users. — If the deny_email_enable directive is set to YES, this directive specifies the file containing a list of anonymous email passwords which are not permitted access to the server. banned_email_file The default value is /etc/vsftpd.banned_emails. — Specifies the file containing text displayed when a connection is established to the server. This option overrides any text specified in the ftpd_banner directive. banner_file There is no default value for this directive. 249 — Specifies a comma-delimited list of FTP commands allowed by the server. All other commands are rejected. cmds_allowed There is no default value for this directive. — When enabled, any anonymous user using email passwords specified in the /etc/vsftpd.banned_emails are denied access to the server. The name of the file referenced by this directive can be specified using the banned_email_file directive. deny_email_enable The default value is NO. — When enabled, the string specified within this directive is displayed when a connection is established to the server. This option can be overridden by the banner_file directive. ftpd_banner By default vsftpd displays its standard banner. local_enable — When enabled, local users are allowed to log into the system. The default value is NO. Refer to Section 11.5.4 Local User Options for a list of directives affecting local users. pam_service_name — Specifies the PAM service name for vsftpd. The default value is ftp, however under Red Hat Enterprise Linux the value is set to vsftpd. — When enabled, TCP wrappers are used to grant access to the server. Also, if the FTP server is configured on multiple IP addresses, the VSFTPD_LOAD_CONF option can be used to load different configuration files based on the IP address being requested by the client. For more information about TCP Wrappers, refer to Chapter 16 TCP Wrappers and xinetd. tcp_wrappers The default value is NO, however under Red Hat Enterprise Linux the value is set to YES. — When used in conjunction with the userlist_enable directive and set to NO, all local users are denied access unless the username is listed in the file specified by the userlist_file directive. Because access is denied before the client is asked for a password, setting userlist_deny 250 this directive to NO prevents local users from submitting unencrypted passwords over the network. The default value is YES. userlist_enable — When enabled, the users listed in the file specified by the userlist_file directive are denied access. Because access is denied before the client is asked for a password, users are prevented from submitting unencrypted passwords over the network. The default value is NO, however under Red Hat Enterprise Linux the value is set to YES. userlist_file — Specifies the file referenced userlist_enable directive is enabled. by vsftpd when the The default value is /etc/vsftpd.user_list. — Specifies a comma separated list of FTP commands that the server allows. Any other commands are rejected. cmds_allowed There is no default value for this directive. 11.5.3. Anonymous User Options The following is a list of directives which control anonymous user access to the server. 
To use these options, the anonymous_enable directive must be set to YES. anon_mkdir_write_enable — When enabled in conjunction with the write_enable directive, anonymous users are allowed to create new directories within a parent directory which has write permissions. The default value is NO. — Specifies the directory vsftpd changes to after an anonymous user logs in. anon_root There is no default value for this directive. anon_upload_enable — When enabled in conjunction with the write_enable directive, anonymous users are allowed to upload within a parent directory which has write permissions. The default value is NO. files 251 — When enabled, anonymous users are only allowed to download world-readable files. anon_world_readable_only The default value is YES. — Specifies the local user account (listed in /etc/passwd) used for the anonymous FTP user. The home directory specified in /etc/passwd for the user is the root directory of the anonymous FTP user. ftp_username The default value is ftp. no_anon_password — When enabled, the anonymous user is not asked for a password. The default value is NO. 11.5.4. Local User Options The following is a list of directives which characterize the way local users access the server. To use these options, the local_enable directive must be set to YES. — When enabled, the FTP command SITE CHMOD is allowed for local users. This command allows the users to change the permissions on files. chmod_enable The default value is YES. chroot_list_enable — When enabled, the local users listed in the file specified in the chroot_list_file directive are placed in a chroot jail upon log in. If enabled in conjunction with the chroot_local_user directive, the local users listed in the file specified in the chroot_list_file directive are not placed in a chroot jail upon log in. The default value is NO. — Specifies the file containing a list of local users referenced when the chroot_list_enable directive is set to YES. chroot_list_file The default value is /etc/vsftpd.chroot_list. — When enabled, local users are change-rooted to their home directories after logging in. chroot_local_user The default value is NO. 252 Warning Using this configuration opens up a number of security issues, especially for users with upload privileges. For this reason, it is not recommended. guest_enable — When enabled, all non-anonymous users as the user guest, which is the local user specified in the guest_username directive. are logged in The default value is NO. guest_username — Specifies the username the guest user is mapped to. The default value is ftp. local_root — Specifies the directory vsftpd changes to after a local user logs in. There is no default value for this directive. passwd_chroot_enable — When enabled in conjunction with the chroot_local_user directive, vsftpd change-roots local users based on the occurrence of the /./ in the home directory field within /etc/passwd. The default value is NO. — Specifies the path to a directory containing configuration files bearing the name of local system users that contain specific setting for that user. Any directive in the user's configuration file overrides those found in /etc/vsftpd/vsftpd.conf. user_config_dir There is no default value for this directive. 11.5.5. Directory Options The following is a list of directives which affect directories. dirlist_enable — When enabled, users are allowed to view directory lists. The default value is YES. — When enabled, a message is displayed whenever a user enters a directory with a message file. 
This message is found within dirmessage_enable 253 the directory being entered. The name of this file is specified in the message_file directive and is .message by default. The default value is NO, however under Red Hat Enterprise Linux the value is set to YES. — When enabled, files beginning with a dot (.) are listed in directory listings, with the exception of the . and .. files. force_dot_files The default value is NO. — When enabled, all directory listings show ftp as the user and group for each file. hide_ids The default value is NO. message_file — Specifies the name dirmessage_enable directive. of the message file when using the The default value is .message. — When enabled, test usernames and group names are used in place of UID and GID entries. Enabling this option may slow performance of the server. text_userdb_names The default value is NO. — When enabled, directory listings reveal the local time for the computer instead of GMT. use_localtime The default value is NO. 11.5.6. File Transfer Options The following is a list of directives which affect directories. download_enable — When enabled, file downloads are permitted. The default value is YES. — When enabled, all files uploaded by anonymous users are owned by the user specified in the chown_username directive. chown_uploads The default value is NO. chown_username — Specifies the ownership of files if the chown_uploads directive is enabled. anonymously uploaded 254 The default value is root. — When enabled, FTP commands which can change the file system are allowed, such as DELE, RNFR, and STOR. write_enable The default value is NO. 11.5.7. Logging Options The following is a list of directives which affect vsftpd's logging behavior. dual_log_enable — When enabled in conjunction with xferlog_enable, vsftpd writes two files simultaneously: a wu-ftpdcompatible log to the file specified in the xferlog_file directive (/var/log/xferlog by default) and a standard vsftpd log file specified in the vsftpd_log_file directive (/var/log/vsftpd.log by default). The default value is NO. log_ftp_protocol — When enabled in conjunction with xferlog_enable and with xferlog_std_format set to NO, all FTP commands and responses are logged. This directive is useful for debugging. The default value is NO. — When enabled in conjunction with xferlog_enable, all logging normally written to the standard vsftpd log file specified in the vsftpd_log_file directive (/var/log/vsftpd.log by default) is sent to the system logger instead under the FTPD facility. syslog_enable The default value is NO. vsftpd_log_file — Specifies the vsftpd log file. For this file to be used, xferlog_enable must be enabled and xferlog_std_format must either be set to NO or, if xferlog_std_format is set to YES, dual_log_enable must be enabled. It is important to note that if syslog_enable is set to YES, the system log is used instead of the file specified in this directive. The default value is /var/log/vsftpd.log. — When enabled, vsftpd logs connections (vsftpd format only) and file transfer information to the log file specified in the xferlog_enable 255 vsftpd_log_file directive (/var/log/vsftpd.log by default). If xferlog_std_format is set to YES, file transfer information is logged but connections are not, and the log file specified in xferlog_file (/var/log/xferlog by default) is used instead. It is important to note that both log files and log formats are used if dual_log_enable is set to YES. 
The default value is NO, however under Red Hat Enterprise Linux the value is set to YES. xferlog_file — Specifies the wu-ftpd-compatible log file. For this file to be used, xferlog_enable must be enabled and xferlog_std_format must be set to YES. It is also used if dual_log_enable is set to YES. The default value is /var/log/xferlog. xferlog_std_format — When enabled in conjunction with xferlog_enable, only a wu-ftpd-compatible file transfer log is written to the file specified in the xferlog_file directive (/var/log/xferlog by default). It is important to note that this file only logs file transfers and does not log connections to the server. The default value is NO, however under Red Hat Enterprise Linux the value is set to YES. Important To maintain compatibility with log files written by the older wu-ftpd FTP server, the xferlog_std_format directive is set to YES under Red Hat Enterprise Linux. However, this setting means that connections to the server are not logged. To both log connections in vsftpd format and maintain a wu-ftpdcompatible file transfer log, set dual_log_enable to YES. If maintaining a wu-ftpd-compatible file transfer log is not important, either set xferlog_std_format to NO, comment the line with a hash mark (#), or delete the line entirely. 11.5.8. Network Options The following is a list of directives which affect how vsftpd interacts with the network. 256 — Specifies the amount of time for a client using passive mode to establish a connection. accept_timeout The default value is 60. — Specifies the maximum data transfer rate for anonymous users in bytes per second. anon_max_rate The default value is 0, which does not limit the transfer rate. connect_from_port_20 When enabled, vsftpd runs with enough privileges to open port 20 on the server during active mode data transfers. Disabling this option allows vsftpd to run with less privileges, but may be incompatible with some FTP clients. The default value is NO, however under Red Hat Enterprise Linux the value is set to YES. — Specifies the maximum amount of time a client using active mode has to respond to a data connection, in seconds. connect_timeout The default value is 60. — Specifies maximum amount of time data transfers are allowed to stall, in seconds. Once triggered, the connection to the remote client is closed. data_connection_timeout The default value is 300. ftp_data_port — Specifies the port used for when connect_from_port_20 is set to YES. active data connections The default value is 20. — Specifies the maximum amount of time between commands from a remote client. Once triggered, the connection to the remote client is closed. idle_session_timeout The default value is 300. — Specifies the IP address on which vsftpd listens for network connections. listen_address There is no default value for this directive. Tip 257 If running multiple copies of vsftpd serving different IP addresses, the configuration file for each copy of the vsftpd daemon must have a different value for this directive. Refer to Section 11.4.1 Starting Multiple Copies of vsftpd for more information about multihomed FTP servers. — Specifies the IPv6 address on which vsftpd listens for network connections when listen_ipv6 is set to YES. listen_address6 There is no default value for this directive. Tip If running multiple copies of vsftpd serving different IP addresses, the configuration file for each copy of the vsftpd daemon must have a different value for this directive. 
Refer to Section 11.4.1 Starting Multiple Copies of vsftpd for more information about multihomed FTP servers. listen_port — Specifies the port on which vsftpd listens for network connections. The default value is 21. — Specifies the maximum rate data is transfered for local users logged into the server in bytes per second. local_max_rate The default value is 0, which does not limit the transfer rate. — Specifies the maximum number of simultaneous clients allowed to connect to the server when it is running in standalone mode, in bytes per second. max_clients The default value is 0, which does not limit connections. — Specifies the maximum of clients allowed to connected from the same source IP address. max_per_ip The default value is 0, which does not limit connections. — Specifies the IP address for the public facing IP address of the server for servers behind Network Address Translation (NAT) firewalls. This enables vsftpd to hand out the correct return address for passive mode connections. pasv_address There is no default value for this directive. 258 pasv_enable — When enabled, passive mode connects are allowed. The default value is YES. — Specifies the highest possible port sent to the FTP clients for passive mode connections. This setting is used to limit the port range so that firewall rules are easier to create. pasv_max_port The default value is 0, which does not limit the highest passive port range. The value must not exceed 65535. — Specifies the lowest possible port sent to the FTP clients for passive mode connections. This setting is used to limit the port range so that firewall rules are easier to create. pasv_min_port The default value is 0, which does not limit the lowest passive port range. The value must not be lower 1024. — When enabled, data connections are not checked to make sure they are originating from the same IP address. This setting is only useful for certain types of tunneling. pasv_promiscuous Caution Do not enable this option unless absolutely necessary as it disables an important security feature which verifies that passive mode connections originate from the same IP address as the control connection that initiates the data transfer. The default value is NO. port_enable — When enabled, active mode connects are allowed. The default value is NO. Prev Home Starting and Stopping vsftpd docs Up Next Additional Resources google:redhat Search Docs: utf-8 Go 259 Red Hat Docs > Manuals > Red Hat Linux Manuals > Red Hat Linux 9 > Red Hat Linux 9: Red Hat Linux Security Guide Prev Chapter 5. Server Security Next 5.6. Securing FTP The File Transport Protocol (FTP) is an older TCP protocol designed to transfer files over a network. Because all transactions with the server, including user authentication, are unencrypted, it is considered an insecure protocol and should be carefully configured. Note Red Hat Linux 9 does not ship with the xinetd-based wu-ftpd service. However, instructions for securing it remain in this section for legacy systems. Red Hat Linux provides three FTP servers. — A kerberized xinetd-based FTP daemon which does not pass authentication information over the network. gssftpd Red Hat Content Accelerator (tux) — A kernel-space Web server with FTP capabilities. vsftpd — A standalone, security oriented implementation of the FTP service. The following security guidelines are for setting up the wu-ftpd and vsftpd services. Warning If you activate both the wu-ftpd and vsftpd services, the xinetd-based wu-ftpd service will handle FTP connections. 260 5.6.1. 
FTP Greeting Banner Before submitting a user name and password, all users are presented with a greeting banner. By default, this banner includes version information useful to crackers trying to identify weaknesses in a system. To change the greeting banner for vsftpd, add the following directive to /etc/vsftpd/vsftpd.conf: ftpd_banner=<insert_greeting_here> Replace <insert_greeting_here> in the above directive with the text of your greeting message. To change the greeting banner for wu-ftpd, add the following directives to /etc/ftpusers: greeting text <insert_greeting_here> Replace <insert_greeting_here> in the above directive with the text of your greeting message. For mutli-line banners, it is best to use a banner file. To simplify management of multiple banners, we will place all banners in a new directory called /etc/banners/. The banner file for FTP connections in this example will be /etc/banners/ftp.msg. Below is an example of what such a file may look like: #################################################### # Hello, all activity on ftp.example.com is logged.# #################################################### Note It is not necessary to begin each line of the file with 220 as specified in Section 5.1.1.1 TCP Wrappers and Connection Banners. To reference this greeting banner file for vsftpd, add the following directive to /etc/vsftpd/vsftpd.conf: banner_file=/etc/banners/ftp.msg To reference this greeting banner file for wu-ftpd, add the following directives to /etc/ftpusers: greeting terse 261 banner /etc/banners/ftp.msg It also is possible to send additional banners to incoming connections using TCP wrappers as described in Section 5.1.1.1 TCP Wrappers and Connection Banners. 5.6.2. Anonymous Access For both wu-ftpd and vsftpd, the presence of the /var/ftp/ directory activates the anonymous account. The easiest way to create this directory is to install the vsftpd package. This package sets a directory tree up for anonymous users and configures the permissions on directories to read-only for anonymous users. Note For releases before Red Hat Linux 9, you must install the anonftp package to create the /var/ftp/ directory. By default the anonymous user cannot write to any directories. Caution If enabling anonymous access to an FTP server, be careful where you store sensitive data. 5.6.2.1. Anonymous Upload If you want to allow anonymous users to upload, it is recommended you create a write-only directory within /var/ftp/pub/. To do this type: mkdir /var/ftp/pub/upload Next change the permissions so that anonymous users cannot see what is within the directory by typing: chmod 730 /var/ftp/pub/upload A long format listing of the directory should look like this: drwx-wx--- 2 root ftp 4096 Feb 13 20:05 upload 262 Warning Administrators who allow anonymous users to read and write in directories often find that their server become a repository of stolen software. Additionally, under vsftpd, add the following line to /etc/vsftpd/vsftpd.conf: anon_upload_enable=YES 5.6.3. User Accounts Because FTP passes unencrypted usernames and passwords over insecure networks for authentication, it is a good idea to deny system users access to the server from their user accounts. To disable user accounts in wu-ftpd, add the following directive to /etc/ftpusers: deny-uid * To disable user accounts in vsftpd, add the following directive to /etc/vsftpd/vsftpd.conf: local_enable=NO 5.6.3.1. 
Restricting User Accounts The easiest way to disable a specific group of accounts, such as the root user and those with sudo privileges from accessing an FTP server is to use a PAM list file as described in Section 4.4.2.4 Disabling Root Using PAM. The PAM configuration file for wu-ftpd is /etc/pam.d/ftp. The PAM configuration file for vsftpd is /etc/pam.d/vsftpd. It is also possible to perform this test within each service directly. To disable specific user accounts in wu-ftpd, add the username to /etc/ftpusers: To disable specific user accounts in vsftpd, add the username to /etc/vsftpd.ftpusers: 263 5.6.4. Use TCP Wrappers To Control Access You can use TCP wrappers to control access to either FTP daemon as outlined in Section 5.1.1 Enhancing Security With TCP Wrappers . 5.6.5. Use xinetd To Control the Load If using wu-ftpd, you can use xinetd to control the amount of resources the FTP server consumes and to limit the effects of denial of service attacks. See Section 5.1.2 Enhancing Security With xinetd for more on how to do this. Prev Home Securing Apache HTTP Server docs Next Up Securing Sendmail google:redhat Search Docs: Go utf-8 Red Hat Docs > Manuals > Red Hat Linux Manuals > Red Hat Linux 7.2 > Red Hat Linux 7.2: The Official Red Hat Linux Customization Guide Prev Chapter 14. Apache Configuration Next Saving Your Settings If you do not want to save your Apache configuration settings, click the Cancel button in the bottom right corner of the Apache Configuration Tool window. You will be prompted to confirm this decision. If you click Yes to confirm this choice, your settings will not be saved. If you want to save your Apache configuration settings, click the OK button in the bottom right corner of the Apache Configuration Tool window. The dialog window shown in Figure 14-16 will appear. If you answer Yes, your settings will 264 be saved in /etc/httpd/conf/httpd.conf. Remember that your original configuration file will be overwritten. Figure 14-16. Save and Exit If this is the first time that you have used Apache Configuration Tool, you will see the dialog window shown in Figure 14-17, warning you that the configuration file has been manually modified. If Apache Configuration Tool detects that the httpd.conf configuration file has been manually modified, it will save the manually modified file as /etc/httpd/conf/httpd.conf.bak. Figure 14-17. Configuration File Manually Modified Restart Daemon After saving your settings, you must restart the Apache daemon with the command service httpd restart. You must be logged in as root to execute this command. Prev Home Performance Tuning docs Next Up Additional Resources google:redhat Search Docs: Go utf-8 Red Hat Docs > Manuals > Red Hat Linux Manuals > Red Hat Linux 7.2 > Red Hat Linux 7.2: The Official Red Hat Linux Customization Guide Prev Chapter 14. Apache Configuration Next 265 Server Settings The Server tab allows you to configure basic server settings. The default settings for these options are appropriate for most situations. Figure 14-14. Server Configuration The Lock File value corresponds to the LockFile directive. This directive sets the path to the lockfile used when Apache is compiled with either USE_FCNTL_SERIALIZED_ACCEPT or USE_FLOCK_SERIALIZED_ACCEPT. It must be stored on the local disk. IT should be left to the default value unless the logs directory is located on an NFS share. If this is the case, the default value should be changed to a location on the local disk and to a directory that is readable only by root. 
The PID File value corresponds to the PidFile directive. This directive sets the file in which the server records its process ID (pid). This file should only be readable by root. In most cases, it should be left to the default value. The Core Dump Directory value corresponds to the CoreDumpDirectory directive. Apache tries to switch to this directory before dumping core. The default value is the ServerRoot. However, if the user that the server runs as can not write to this directory, the core dump can not be written. Change this value to a directory writable by the user the server runs as, if you want to write the core dumps to disk for debugging purposes. The User value corresponds to the User directive. It sets the userid used by the server to answer requests. This user's settings determine the server's access. Any files inaccessible to this user will also be inaccessible to your website's visitors. The default for User is apache. The User should only have privileges so that it can access files which are supposed to be visible to the outside world. The User is also the owner of any CGI processes spawned by the server. The User should not be allowed to execute any code which is not intended to be in response to HTTP requests. Warning Unless you know exactly what you are doing, do not set the User to root. Using root as the User will create large security holes for your Web server. The parent httpd process first runs as root during normal operations, but is then immediately handed off to the apache user. The server must start as root because it 266 needs to bind to a port below 1024. Ports below 1024 are reserved for system use, so they can not be used by anyone but root. Once the server has attached itself to its port, however, it hands the process off to the apache user before it accepts any connection requests. The Group value corresponds to the Group directive. The Group directive is similar to the User. The Group sets the group under which the server will answer requests. The default Group is also apache. Prev Home Virtual Hosts Settings docs Next Up Performance Tuning google:redhat Search Docs: Go utf-8 Red Hat Docs > Manuals > Red Hat Linux Manuals > Red Hat Linux 7.2 > Red Hat Linux 7.2: The Official Red Hat Linux Customization Guide Prev Next Chapter 14. Apache Configuration Apache Configuration Tool requires the X Window System and root access. To start Apache Configuration Tool, use one of the following methods: On the GNOME desktop, go to the Main Menu Button (on the Panel) => Programs => System => Apache Configuration. On the KDE desktop, go to the Main Menu Button (on the Panel) => Red Hat => System => Apache Configuration. Type the command apacheconf at a shell prompt (for example, in an XTerm or GNOME-terminal). Do Not Edit httpd.conf 267 Do not edit the /etc/httpd/conf/httpd.conf Apache configuration file if you wish to use this tool. Apache Configuration Tool generates this file after you save your changes and exit the program. If you want to add additional modules or configuration options that are not available in Apache Configuration Tool, you cannot use this tool. Apache Configuration Tool allows you to configure the /etc/httpd/conf/httpd.conf configuration file for your Apache Web server. It does not use the old srm.conf or access.conf configuration files; leave them empty. Through the graphical interface, you can configure Apache directives such as virtual hosts, logging attributes, and maximum number of connections. 
Only modules that are shipped with Red Hat Linux can be configured with Apache Configuration Tool. If additional modules are installed, they can not be configured using this tool. The general steps for configuring the Apache Web Server using the Apache Configuration Tool are as following: 1. Configure the basic settings under the Main tab. 2. Click on the Virtual Hosts tab and configure the default settings. 3. Under the Virtual Hosts tab, configure the Default Virtual Host. 4. If you want to serve more than one URL or virtual host, add the additional virtual hosts. 5. Configure the server settings under the Server tab. 6. Configure the connections settings under the Performance Tuning tab. 7. Copy all necessary files to the DocumentRoot and cgi-bin directories, and save your settings in the Apache Configuration Tool. Basic Settings Use the Main tab to configure the basic server settings. Figure 14-1. Basic Settings 268 Enter a fully qualified domain name that you have the right to use in the Server Name text area. This option corresponds to the ServerName directive in httpd.conf. The ServerName directive sets the hostname of the Web server. It is used when creating redirection URLs. If you do not define a Server Name, Apache attempts to resolve it from the IP address of the system. The Server Name does not have to be the domain name resolved from the IP address of the server. For example, you might want to set the Server Name to www.your_domain.com when your server's real DNS name is actually foo.your_domain.com. Enter the email address of the person who maintains the Web server in the Webmaster email address text area. This option corresponds to the ServerAdmin directive in httpd.conf. If you configure the server's error pages to contain an email address, this email address will be used so that users can report a problem by sending email to the server's administrator. The default value is root@localhost. Use the Available Addresses area to define the ports on which Apache will accept incoming requests. This option corresponds to the Listen directive in httpd.conf. By default, Red Hat configures Apache to listen to ports 80 and 8080 for nonsecure Web communications. Click the Add button to define additional ports on which to accept requests. A window as shown in Figure 14-2 will appear. Either choose the Listen to all addresses option to listen to all IP addresses on the defined port or specify a particular IP address over which the server will accept connections in the Address field. Only specify one IP address per port number. If you want to specify more than one IP address with the same port number, create an entry for each IP address. If at all possible, use an IP address instead of a domain name to prevent a DNS lookup failure. Refer to http://httpd.apache.org/docs/dns-caveats.html for more information about Issues Regarding DNS and Apache. Entering an asterisk (*) in the Address field is the same as choosing Listen to all addresses. Clicking the Edit button shows the same window as the Add button except with the fields populated for the selected entry. To delete an entry, select it and click the Delete button. Figure 14-2. Available Addresses Tip If you set Apache to listen to a port under 1024, you must be root to start it. For port 1024 and above, httpd can be started as a regular user. 
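Behind the scenes, the settings on the Main tab become ordinary httpd.conf directives. A minimal sketch using the example names from this section follows; the second address and port are assumptions for illustration.
ServerName www.your_domain.com
ServerAdmin webmaster@your_domain.com
Listen 80
Listen 192.168.1.5:8080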
Performance Tuning

Click the Performance Tuning tab to configure the maximum number of child server processes you want and the Apache options for client connections. The default settings for these options are appropriate for most situations. Altering these settings may affect the overall performance of your Web server.

Figure 14-15. Performance Tuning

Set Max Number of Connections to the maximum number of simultaneous client requests that the server will handle. For each connection, a child httpd process is created. After this maximum number of processes is reached, no one else can connect to the Web server until a child server process is freed. You cannot set this value higher than 256 without recompiling Apache. This option corresponds to the MaxClients directive.

Connection Timeout defines, in seconds, the amount of time that your server will wait for receipts and transmissions during communications. Specifically, Connection Timeout defines how long your server will wait to receive a GET request, how long it will wait to receive TCP packets on a POST or PUT request, and how long it will wait between ACKs responding to TCP packets. By default, Connection Timeout is set to 300 seconds, which is appropriate for most situations. This option corresponds to the TimeOut directive.

Set Max requests per connection to the maximum number of requests allowed per persistent connection. The default value is 100, which should be appropriate for most situations. This option corresponds to the MaxRequestsPerChild directive.

If you check the Allow unlimited requests per connection option, the MaxKeepAliveRequests directive is set to 0 and unlimited requests are allowed. If you uncheck the Allow Persistent Connections option, the KeepAlive directive is set to false. If you check it, the KeepAlive directive is set to true, and the KeepAliveTimeout directive is set to the number selected as the Timeout for next Connection value. This directive sets the number of seconds your server will wait for a subsequent request, after a request has been served, before it closes the connection. Once a request has been received, the Connection Timeout value applies instead.

Setting Persistent Connections to a high value may cause a server to slow down, depending on how many users are trying to connect to it: the higher the number, the more server processes sit waiting for another connection from the last client that connected.
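As a rough sketch of how the Performance Tuning fields land in httpd.conf: 300 and 100 are the defaults quoted above, while the remaining numbers are purely illustrative values, not tuning advice or Red Hat defaults:

# Max Number of Connections (example value)
MaxClients 150
# Connection Timeout
Timeout 300
# Max requests per connection
MaxRequestsPerChild 100
# Allow Persistent Connections, with an example Timeout for next Connection
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 15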
Virtual Hosts Settings

You can use Apache Configuration Tool to configure virtual hosts. Virtual hosts allow you to run different servers for different IP addresses, different host names, or different ports on the same machine. For example, you can run the websites for http://www.your_domain.com and http://www.your_second_domain.com on the same Apache server using virtual hosts. This option corresponds to the <VirtualHost> directive for the default virtual host and for IP-based virtual hosts. It corresponds to the NameVirtualHost directive for a name-based virtual host.

The Apache directives set for a virtual host apply only to that particular virtual host. If a directive is set server-wide using the Edit Default Settings button and not defined within the virtual host settings, the default setting is used. For example, you can define a Webmaster email address in the Main tab and not define individual email addresses for each virtual host.

Apache Configuration Tool includes a default virtual host, as shown in Figure 14-8. Refer to the section called Default Virtual Host for details about the default virtual host.

Figure 14-8. Virtual Hosts

The Apache documentation on your machine, or on the Web at http://www.apache.org/docs/vhosts/, provides more information about virtual hosts.

Adding and Editing a Virtual Host

To add a virtual host, click the Virtual Hosts tab and then click the Add button. The window shown in Figure 14-9 appears. You can also edit a virtual host by selecting it in the list and clicking the Edit button.

Figure 14-9. Virtual Hosts Configuration

General Options

The General Options settings apply only to the virtual host that you are configuring. Set the name of the virtual host in the Virtual Host Name text area. This name is used by Apache Configuration Tool to distinguish between virtual hosts.

Set the Document Root Directory value to the directory that contains the root document (such as index.html) for the virtual host. This option corresponds to the DocumentRoot directive within the VirtualHost directive. Before Red Hat Linux 7.0, the Apache package provided with Red Hat Linux used /home/httpd/html as the DocumentRoot. In Red Hat Linux 7.2, however, the default DocumentRoot is /var/www/html.

The Webmaster email address corresponds to the ServerAdmin directive within the VirtualHost directive. This email address is used in the footer of error pages if you choose to show a footer with an email address on the error pages.

In the Host Information section, choose Default Virtual Host, IP based Virtual Host, or Name based Virtual Host.

Default Virtual Host

If you choose Default Virtual Host, Figure 14-10 appears. You should configure only one default virtual host. The default virtual host settings are used when the requested IP address is not explicitly listed in another virtual host. If there is no default virtual host defined, the main server settings are used.

Figure 14-10. Default Virtual Hosts

IP based Virtual Host

If you choose IP based Virtual Host, Figure 14-11 appears so that you can configure the <VirtualHost> directive based on the IP address of the server. Specify this IP address in the IP address field. To specify more than one IP address, separate the addresses with spaces. To specify a port, use the syntax IP Address:Port. Use :* to configure all ports for the IP address. Specify the host name for the virtual host in the Server Host Name field.

Figure 14-11. IP Based Virtual Hosts

Name based Virtual Host

If you choose Name based Virtual Host, Figure 14-12 appears so that you can configure the NameVirtualHost directive based on the host name of the server. Specify the IP address in the IP address field. To specify more than one IP address, separate the addresses with spaces. To specify a port, use the syntax IP Address:Port. Use :* to configure all ports for the IP address. Specify the host name for the virtual host in the Server Host Name field. In the Aliases section, click Add to add a host name alias. Adding an alias here adds a ServerAlias directive within the NameVirtualHost directive.

Figure 14-12. Name Based Virtual Hosts
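For orientation, a name-based virtual host configured this way produces httpd.conf content roughly like the following sketch. The IP address 192.168.1.10 and the webmaster address are placeholders, and the names reuse the examples from this section:

NameVirtualHost 192.168.1.10:80

<VirtualHost 192.168.1.10:80>
    ServerName www.your_domain.com
    ServerAlias your_domain.com
    DocumentRoot /var/www/html
    ServerAdmin webmaster@your_domain.com
</VirtualHost>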
SSL

Note: You cannot use name-based virtual hosts with SSL, because the SSL handshake (when the browser accepts the secure Web server's certificate) occurs before the HTTP request that identifies the appropriate name-based virtual host. If you want to use name-based virtual hosts, they will only work with your non-secure Web server.

If an Apache server is not configured with SSL support, communications between the server and its clients are not encrypted. This is appropriate for websites without personal or confidential information. For example, an open source website that distributes open source software and documentation has no need for secure communications. However, an e-commerce website that requires credit card information should use Apache SSL support to encrypt its communications. Enabling Apache SSL support enables the use of the mod_ssl security module. To enable it through Apache Configuration Tool, you must allow access through port 443 under the Main tab => Available Addresses. Refer to the section called Basic Settings for details. Then, select the virtual host name in the Virtual Hosts tab, click the Edit button, choose SSL from the left-hand menu, and check the Enable SSL Support option as shown in Figure 14-13. The SSL Configuration section is pre-configured with a dummy digital certificate. The digital certificate provides authentication for your secure Web server and identifies the secure server to client Web browsers. You must purchase your own digital certificate; do not use the dummy one provided in Red Hat Linux for your website. For details on purchasing a CA-approved digital certificate, refer to Chapter 15.

Figure 14-13. SSL Support

Additional Virtual Host Options

The Site Configuration, Environment Variables, and Directories options for virtual hosts are the same directives that you set when you clicked the Edit Default Settings button, except that the options set here apply to the individual virtual host that you are configuring. Refer to the section called Default Settings for details on these options.

Default Settings

After defining the Server Name, Webmaster email address, and Available Addresses, click the Virtual Hosts tab and then the Edit Default Settings button. The window shown in Figure 14-3 appears. Configure the default settings for your Web server in this window. If you add a virtual host, the settings you configure for that virtual host take precedence for it. For a directive not defined within the virtual host settings, the default value is used.

Site Configuration

The default values for the Directory Page Search List and Error Pages will work for most servers. If you are unsure of these settings, do not modify them.

Figure 14-3. Site Configuration

The entries listed in the Directory Page Search List define the DirectoryIndex directive. The DirectoryIndex is the default page served by the server when a user requests an index of a directory by specifying a forward slash (/) at the end of the directory name. For example, when a user requests the page http://your_domain/this_directory/, they get either the DirectoryIndex page, if it exists, or a server-generated directory list. The server tries to find one of the files listed in the DirectoryIndex directive and returns the first one it finds. If it does not find any of these files and if Options Indexes is set for that directory, the server generates and returns a list, in HTML format, of the subdirectories and files in the directory.
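A typical Directory Page Search List translates into a DirectoryIndex line such as the following; the exact file list is whatever you enter in the tool, so this one is only an illustration:

DirectoryIndex index.html index.htm index.shtml index.cgi

With this line, a request for http://your_domain/this_directory/ is answered with the first of those files found in this_directory, and only falls back to a generated listing if none exist and Options Indexes is set.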
Use the Error Code section to configure Apache to redirect the client to a local or external URL in the event of a problem or error. This option corresponds to the ErrorDocument directive. If a problem or error occurs when a client tries to connect to the Apache Web server, the default action is to display the short error message shown in the Error Code column. To override this default configuration, select the error code and click the Edit button. Choose Default to display the default short error message. Choose URL to redirect the client to an external URL, and enter a complete URL, including the http://, in the Location field. Choose File to redirect the client to an internal URL, and enter a file location under the Document Root for the Web server. The location must begin with a slash (/) and be relative to the Document Root.

For example, to redirect a 404 Not Found error code to a Web page that you created in a file called 404.html, copy 404.html to DocumentRoot/errors/404.html. In this case, DocumentRoot is the Document Root directory that you have defined (the default is /var/www/html). Then choose File as the Behavior for the 404 - Not Found error code and enter /errors/404.html as the Location.

From the Default Error Page Footer menu, you can choose one of the following options:

Show footer with email address — Display the default Apache footer at the bottom of all error pages along with the email address of the website maintainer specified by the ServerAdmin directive. Refer to the section called General Options for information about configuring the ServerAdmin directive.
Show footer — Display just the default Apache footer at the bottom of error pages.
No footer — Do not display a footer at the bottom of error pages.

Logging

By default, Apache writes the transfer log to the file /var/log/httpd/access_log and the error log to the file /var/log/httpd/error_log.

Figure 14-4. Logging

The transfer log contains a list of all attempts to access the Web server. It records the IP address of the client that is attempting to connect, the date and time of the attempt, and the file on the Web server that the client is trying to retrieve. Enter the path and file name in which to store this information. If the path and file name do not start with a slash (/), the path is relative to the server root directory as configured. This option corresponds to the TransferLog directive.

You can configure a custom log format by checking Use custom logging facilities and entering a custom log string in the Custom Log String field. This configures the LogFormat directive. Refer to http://httpd.apache.org/docs/mod/mod_log_config.html#formats for details on the format of this directive.

The error log contains a list of any server errors that occur. Enter the path and file name in which to store this information. If the path and file name do not start with a slash (/), the path is relative to the server root directory as configured. This option corresponds to the ErrorLog directive.
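Put together, the error-page and logging choices above correspond to httpd.conf lines along these lines; the LogFormat string shown is simply the widely used "common" format and is not something the tool forces on you:

# Serve DocumentRoot/errors/404.html for 404 errors
ErrorDocument 404 /errors/404.html

# Default Red Hat log locations
TransferLog /var/log/httpd/access_log
ErrorLog /var/log/httpd/error_log

# Optional custom format ("common" log format shown as an example)
LogFormat "%h %l %u %t \"%r\" %>s %b" common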
Use the Log Level menu to set how verbose the error messages in the error logs will be. It can be set (from least verbose to most verbose) to emerg, alert, crit, error, warn, notice, info, or debug. This option corresponds to the LogLevel directive.

The value chosen with the Reverse DNS Lookup menu defines the HostnameLookups directive. Choosing No Reverse Lookup sets the value to off. Choosing Reverse Lookup sets the value to on. Choosing Double Reverse Lookup sets the value to double.

If you choose Reverse Lookup, your server automatically resolves the IP address for each connection that requests a document from your Web server. Resolving the IP address means that your server makes one or more connections to the DNS in order to find out the hostname that corresponds to a particular IP address. If you choose Double Reverse Lookup, your server performs a double-reverse DNS lookup. In other words, after a reverse lookup is performed, a forward lookup is performed on the result. At least one of the IP addresses in the forward lookup must match the address from the first reverse lookup.

Generally, you should leave this option set to No Reverse Lookup, because the DNS requests add load to your server and may slow it down. If your server is busy, the effects of trying to perform these reverse lookups or double reverse lookups may be quite noticeable. Reverse lookups and double reverse lookups are also an issue for the Internet as a whole, because all of the individual connections made to look up each hostname add up. Therefore, for your own Web server's benefit, as well as for the Internet's benefit, you should leave this option set to No Reverse Lookup.

Environment Variables

Apache can use the mod_env module to configure the environment variables that are passed to CGI scripts and SSI pages. Use the Environment Variables page to configure the directives for this Apache module.

Figure 14-5. Environment Variables

Use the Set for CGI Scripts section to set an environment variable that is passed to CGI scripts and SSI pages. For example, to set the environment variable MAXNUM to 50, click the Add button inside the Set for CGI Scripts section, type MAXNUM in the Environment Variable text field and 50 in the Value to set text field, and click OK. The Set for CGI Scripts section configures the SetEnv directive.

Use the Pass to CGI Scripts section to pass the value of an environment variable, as it was set when Apache was first started, to CGI scripts. To see such an environment variable, type the command env at a shell prompt. Click the Add button inside the Pass to CGI Scripts section, enter the name of the environment variable in the resulting dialog box, and click OK. The Pass to CGI Scripts section configures the PassEnv directive.

If you want to remove an environment variable so that its value is not passed to CGI scripts and SSI pages, use the Unset for CGI Scripts section. Click Add in the Unset for CGI Scripts section and enter the name of the environment variable to unset. This corresponds to the UnsetEnv directive.

Directories

Use the Directories page to configure options for specific directories. This corresponds to the <Directory> directive.

Figure 14-6. Directories

Click the Edit button in the top right-hand corner to configure the Default Directory Options for all directories that are not specified in the Directory list below it. The options that you choose are listed as the Options directive within the <Directory> directive. You can configure the following options:

ExecCGI — Allow execution of CGI scripts. CGI scripts are not executed if this option is not chosen.
FollowSymLinks — Allow symbolic links to be followed.
Includes — Allow server-side includes.
IncludesNOEXEC — Allow server-side includes, but disable the #exec and #include commands in CGI scripts.
Indexes — Display a formatted list of the directory's contents if no DirectoryIndex file (such as index.html) exists in the requested directory.
Multiview — Support content-negotiated multiviews; this option is disabled by default.
SymLinksIfOwnerMatch — Only follow symbolic links if the target file or directory has the same owner as the link.

To specify options for specific directories, click the Add button beside the Directory list box. The window shown in Figure 14-7 appears. Enter the directory to configure in the Directory text field at the bottom of the window. Select the options in the right-hand list, and configure the Order directive with the left-hand options. The Order directive controls the order in which allow and deny directives are evaluated. In the Allow hosts from and Deny hosts from text fields, you can specify one of the following:

Allow all hosts — Type all to allow access to all hosts.
Partial domain name — Allow all hosts whose names match or end with the specified string.
Full IP address — Allow access to a specific IP address.
A subnet — Such as 192.168.1.0/255.255.255.0
A network CIDR specification — Such as 10.3.0.0/16

Figure 14-7. Directory Settings

If you check the Let .htaccess files override directory options checkbox, the configuration directives in the .htaccess file take precedence.
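As a sketch, the Environment Variables and Directories settings described above come out in httpd.conf roughly like this; MAXNUM is the illustrative variable from this section, while the other variable names and the /var/www/html/downloads directory are arbitrary examples, not defaults:

SetEnv MAXNUM 50
PassEnv HOSTNAME
UnsetEnv LD_LIBRARY_PATH

<Directory /var/www/html/downloads>
    Options Indexes FollowSymLinks
    # Deny everyone, then allow only the 10.3.0.0/16 network
    Order deny,allow
    Deny from all
    Allow from 10.3.0.0/16
</Directory>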
Squid 2.4 Configuration Manual

Disclaimer: This manual is NOT a Squid tutorial. It is only reference material that provides a detailed explanation of all configuration parameters available in Squid 2.4. The reader is expected to have prior knowledge of basic Squid installation and configuration. For a complete tutorial on Squid, please visit http://www.squid-cache.org

ACCESS CONTROLS

Tag Name: acl
Usage: acl aclname acltype string1 ... | "file"
Description: This tag is used for defining an access list. When using "file", the file should contain one item per line. By default, regular expressions are case-sensitive. To make them case-insensitive, use the -i option.

Acl Type: src
Description: This matches the client IP address.
Usage: acl aclname src ip-address/netmask
Examples:
1. This refers to the whole network with address 172.16.1.0 - acl aclname src 172.16.1.0/24
2. This refers to a specific single IP address - acl aclname src 172.16.1.25/32
3. This refers to the range of IP addresses from 172.16.1.25 to 172.16.1.35 - acl aclname src 172.16.1.25-172.16.1.35/32
Note: Exercise caution when choosing the netmask value.

Acl Type: dst
Description: This is the same as src, with the only difference that it refers to the server IP address. Squid first does a DNS lookup for the IP address from the domain name in the request header, and then this acl is interpreted.
Usage: acl aclname dst ip-address/netmask

Acl Type: srcdomain
Description: Since Squid needs to do a reverse DNS lookup (from the client IP address to the client domain name) before this acl is interpreted, it can cause processing delays. This lookup adds some delay to the request.
Usage: acl aclname srcdomain domain-name
Example: acl aclname srcdomain .kovaiteam.com
Note: Here the leading "." is important.
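Since an acl line only takes effect once it is referenced by an access rule, a minimal pairing looks like the following sketch; it uses http_access, which is documented later in this chapter, and mynetwork is an arbitrary acl name chosen for the example:

acl mynetwork src 172.16.1.0/24
http_access allow mynetwork
http_access deny all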
Acl Type: dstdomain
Description: This is an effective method to control access to a specific domain.
Usage: acl aclname dstdomain domain-name
Example: acl aclname dstdomain .kovaiteam.com
Hence this looks for *.kovaiteam.com in the URL.
Note: Here the leading "." is important.

Acl Type: srcdom_regex
Description: Since Squid needs to do a reverse DNS lookup (from the client IP address to the client domain name) before this acl is interpreted, it can cause processing delays. This lookup adds some delay to the request.
Usage: acl aclname srcdom_regex pattern
Example: acl aclname srcdom_regex kovai
Hence this looks for the word kovai in the client domain name.
Note: It is better to avoid this acl type in order to avoid the added latency.

Acl Type: dstdom_regex
Description: This is as effective a method as dstdomain.
Usage: acl aclname dstdom_regex pattern
Example: acl aclname dstdom_regex kovai
Hence this looks for the word kovai in the destination domain name.

Acl Type: time
Description: Time of day, and day of week.
Usage: acl aclname time [day-abbreviations] [h1:m1-h2:m2]
Day abbreviations: S - Sunday, M - Monday, T - Tuesday, W - Wednesday, H - Thursday, F - Friday, A - Saturday. h1:m1 must be less than h2:m2.
Example: acl ACLTIME time M 9:00-17:00
ACLTIME refers to Monday from 9:00 to 17:00.

Acl Type: url_regex
Description: The url_regex type searches the entire URL for the regular expression you specify. Note that these regular expressions are case-sensitive. To make them case-insensitive, use the -i option.
Usage: acl aclname url_regex pattern
Example: acl ACLREG url_regex cooking
ACLREG refers to URLs containing "cooking", not "Cooking".

Acl Type: urlpath_regex
Description: The urlpath_regex type does regular expression pattern matching on the URL, but without the protocol and hostname. Note that these regular expressions are case-sensitive.
Usage: acl aclname urlpath_regex pattern
Example: acl ACLPATHREG urlpath_regex cooking
ACLPATHREG refers only to paths containing "cooking", not "Cooking", and without considering the protocol and hostname. In other words, if the URL is http://www.visolve.com/folder/subdir/cooking/first.html, then this acl type's regex must match /folder/subdir/cooking/first.html.

Acl Type: port
Description: Access can be controlled by the destination (server) port address.
Usage: acl aclname port port-no
Example: This example allows http_access only to the destination 172.16.1.115:80 from the network 172.16.1.0:
acl acceleratedhost dst 172.16.1.115/255.255.255.255
acl acceleratedport port 80
acl mynet src 172.16.1.0/255.255.255.0
http_access allow acceleratedhost acceleratedport mynet
http_access deny all

Acl Type: proto
Description: This specifies the transfer protocol.
Usage: acl aclname proto protocol
Example: acl aclname proto HTTP FTP
This refers to the protocols HTTP and FTP.

Acl Type: method
Description: This specifies the method of the request.
Usage: acl aclname method method-type
Example: acl aclname method GET POST
This refers to GET and POST methods only.

Acl Type: browser
Description: Regular expression pattern matching on the request's User-Agent header.
Usage: acl aclname browser pattern
Example: acl aclname browser MOZILLA
This refers to requests coming from browsers that have the "MOZILLA" keyword in the User-Agent header.

Acl Type: ident
Description: String matching on the user's name.
Usage: acl aclname ident username ...
Example: You can use ident to allow specific users access to your cache. This requires that an ident server process runs on the user's machine(s). In your squid.conf configuration file you would write something like this:
ident_lookup on
acl friends ident kim lisa frank joe
http_access allow friends
http_access deny all
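As a hedged illustration of how these acl types combine on a single http_access line (the rule semantics are worked out in the examples at the end of this chapter), the following denies URLs matching "cooking", case-insensitively, during Monday working hours; the acl names are arbitrary:

acl worktime time M 9:00-17:00
acl cookingsites url_regex -i cooking
http_access deny cookingsites worktime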
Acl Type: ident_regex
Description: Regular expression pattern matching on the user's name. String match on ident output. Use REQUIRED to accept any non-null ident.
Usage: acl aclname ident_regex pattern
Example: You can use ident to allow specific users access to your cache. This requires that an ident server process runs on the user's machine(s). In your squid.conf configuration file you would write something like this:
ident_lookup on
acl friends ident_regex joe
This looks for the pattern "joe" in the username.

Acl Type: src_as
Description: Source (client) Autonomous System number.

Acl Type: dst_as
Description: Destination (server) Autonomous System number.

Acl Type: proxy_auth
Description: User authentication via external processes. proxy_auth requires an EXTERNAL authentication program to check username/password combinations (see authenticate_program).
Usage: acl aclname proxy_auth username...
Use REQUIRED instead of a username to accept any valid username.
Example: acl ACLAUTH proxy_auth usha venkatesh balu deepa
This acl authenticates the users usha, venkatesh, balu, and deepa through the external program.
Warning: proxy_auth can't be used in a transparent proxy. It collides with any authentication done by origin servers. It may seem like it works at first, but it doesn't. When a Proxy-Authentication header is sent but is not needed during ACL checking, the username is NOT logged in access.log.

Acl Type: proxy_auth_regex
Description: This is the same as proxy_auth, with one difference: it matches the pattern against the usernames given by authenticate_program.
Usage: acl aclname proxy_auth_regex [-i] pattern...

Acl Type: snmp_community
Description: SNMP community string matching.
Example:
acl aclname snmp_community public
snmp_access aclname

Acl Type: maxconn
Description: A limit on the maximum number of connections from a single client IP address. This acl is true if the client has more than maxconn connections open. It is used in http_access to allow or deny the request just like all the other acl types.
Example:
acl someuser src 1.2.3.4
acl twoconn maxconn 5
http_access deny someuser twoconn
http_access allow !twoconn
Note: The maxconn acl requires the client_db feature, so if you have disabled it (client_db off), maxconn won't work.

Acl Type: req_mime_type
Usage: acl aclname req_mime_type pattern
Description: Regular expression pattern matching on the request Content-Type header.
Example: acl aclname req_mime_type text
This acl looks for the pattern "text" in the request MIME header.

Acl Type: arp
Usage: acl aclname arp ARP-ADDRESS
Description: Ethernet (MAC) address matching. This acl is supported on Linux, Solaris, and probably BSD variants. To use ARP (MAC) access controls, you first need to compile in the optional code. Do this with the --enable-arp-acl configure option:
% ./configure --enable-arp-acl ...
% make clean
% make
If everything compiles, then you can add ARP ACL lines to your squid.conf.
Default:
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl SSL_ports port 443 563
acl Safe_ports port 80 21 443 563 70 210 1025-65535
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
Example: acl ACLARP arp 11:12:13:14:15:16
ACLARP refers to the MAC address 11:12:13:14:15:16 of the Ethernet interface.
Note: Squid can only determine the MAC address for clients that are on the same subnet. If the client is on a different subnet, then Squid cannot find out its MAC address.

Tag Name: http_access
Usage: http_access allow|deny [!]aclname ...
Description: Allows or denies HTTP access based on the defined access lists. If none of the "access" lines cause a match, the default is the opposite of the last line in the list: if the last line was a deny, then the default is allow, and conversely, if the last line was an allow, the default is deny. For these reasons, it is a good idea to have a "deny all" or "allow all" entry at the end of your access lists to avoid potential confusion.
Default:
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny all
If there are no "access" lines present, the default is to allow the request.
Examples (worked out at the end of this section):
1. To allow http_access for only one machine with MAC address 00:08:c7:9f:34:41.
2. To restrict access to work hours (9am - 5pm, Monday to Friday) from the network 192.168.2.0/24.
3. Can I use multiple time-based access control lists for different users and different timings?
4. Rules are read from top to bottom.
Caution: The deny all line is very important. After all the http_access rules, if access isn't denied, it's ALLOWED. So specifying a lot of http_access allow rules and forgetting the deny all after them achieves nothing: if access isn't allowed by one of your rules, the default action (allow) will be triggered. So don't forget the deny all rule AFTER all the other rules. And, finally, don't forget that rules are read from top to bottom; the first rule matched will be used and other rules won't be applied.

Tag Name: icp_access
Usage: icp_access allow|deny [!]aclname ...
Description: Allows or denies ICP queries based on the defined access lists.
Default: icp_access deny all
Example: icp_access allow all - allow ICP queries from everyone.

Tag Name: miss_access
Usage: miss_access allow|deny [!]aclname...
Description: Used to force your neighbors to use you as a sibling instead of a parent. For example:
acl localclients src 172.16.0.0/16
miss_access allow localclients
miss_access deny !localclients
This means that only your local clients are allowed to fetch MISSES and all other clients can only fetch HITS.
Default: By default, all clients who pass the http_access rules are allowed to fetch MISSES from us.
miss_access allow all

Tag Name: cache_peer_access
Usage: cache_peer_access cache-host allow|deny [!]aclname ...
Description: Similar to 'cache_peer_domain' but provides more flexibility by using ACL elements. The syntax is identical to 'http_access' and the other lists of ACL elements. See 'http_access' for further reference.
Default: none
Example: The following example could be used if we want all requests from a specific IP address range to go to a specific cache server (for accounting purposes, for example).
Here, all the requests from the 10.0.1.* range are passed to proxy.visolve.com, but all other requests are handled directly. Using acls to select peers:
acl myNet src 10.0.0.0/255.255.255.0
acl cusNet src 10.0.1.0/255.255.255.0
acl all src 0.0.0.0/0.0.0.0
cache_peer proxy.visolve.com parent 3128 3130
cache_peer_access proxy.visolve.com allow cusNet
cache_peer_access proxy.visolve.com deny all

Tag Name: proxy_auth_realm
Usage: proxy_auth_realm string
Description: Specifies the realm name that is reported to the client for proxy authentication (part of the text the user will see when prompted for a username and password).
Default: proxy_auth_realm Squid proxy-caching web server
Example: proxy_auth_realm My Caching Server

Tag Name: ident_lookup_access
Usage: ident_lookup_access allow|deny aclname...
Description: A list of ACL elements which, if matched, cause an ident (RFC 931) lookup to be performed for this request. For example, you might choose to always perform ident lookups for your main multi-user Unix boxes, but not for your Macs and PCs.
Default: By default, ident lookups are not performed for any requests.
ident_lookup_access deny all
Example: To enable ident lookups for specific client addresses, you can follow this example:
acl ident_aware_hosts src 198.168.1.0/255.255.255.0
ident_lookup_access allow ident_aware_hosts
ident_lookup_access deny all
Caution: This option may be disabled by using --disable-ident with the configure script.

Examples:

(1) To allow http_access for only one machine with MAC address 00:08:c7:9f:34:41, use the MAC address in the ACL rules. Configure with the --enable-arp-acl option.
acl all src 0.0.0.0/0.0.0.0
acl pl800_arp arp 00:08:c7:9f:34:41
http_access allow pl800_arp
http_access deny all

(2) To restrict access to work hours (9am - 5pm, Monday to Friday) from the network 192.168.2.0/24:
acl ip_acl src 192.168.2.0/24
acl time_acl time M T W H F 9:00-17:00
http_access allow ip_acl time_acl
http_access deny all

(3) Can I use multiple time-based access control lists for different users and different timings?
ACL definitions:
acl abc src 172.161.163.85
acl xyz src 172.161.163.86
acl asd src 172.161.163.87
acl morning time 06:00-11:00
acl lunch time 14:00-14:30
acl evening time 16:25-23:59
Access controls:
http_access allow abc morning
http_access allow xyz morning lunch
http_access allow asd lunch
This is wrong. The explanation follows: the access line "http_access allow xyz morning lunch" will not work, because ACLs are interpreted like this:
http_access ACTION statement1 AND statement2 AND statement3
OR
http_access ACTION statement1 AND statement2 AND statement3
OR
...
So the rule "http_access allow xyz morning lunch" will never match: at any given time, morning AND lunch will always be false, because morning and lunch can never be true at the same time. Since the statements on one line are ANDed together, the whole line evaluates to false. That is why this line must be split in two:
http_access allow xyz morning
http_access allow xyz lunch
If a request comes from xyz during one of the allowed time ranges, one of the two rules matches TRUE and the other matches FALSE. TRUE OR FALSE is TRUE, and access is permitted. The final access controls therefore look like this:
http_access allow abc morning
http_access allow xyz morning
http_access allow xyz lunch
http_access allow asd lunch
http_access deny all

(4) Rules are read from top to bottom. The first rule matched will be used. Other rules won't be applied.
Example:
http_access allow xyz morning
http_access deny xyz
http_access allow xyz lunch
If xyz tries to access something in the morning, access is granted. But if xyz tries to access something at lunchtime, access is denied. It is denied by the "deny xyz" rule, which is matched before the "xyz lunch" rule.

© ViSolve.com 2006. All trademarks used in this document are owned by their respective companies; this document makes no ownership claim of any trademark.

Block MSN with Squid (Computing.Net Linux forum)

Jeronimox, February 14, 2006 at 05:19:56 Pacific
OS: Debian sarge 3.1, CPU/RAM: 800 MHz, Product: clone
Dear all, is it possible to block MSN sessions from my LAN's users using only a Squid proxy connected to the Internet? Any idea? Thanks a lot, Jeronimox

Linux: Setup a transparent proxy with Squid in three easy steps
by LinuxTitli (nixcraft)

Yesterday I got a chance to play with Squid and iptables. My job was simple: set up a Squid proxy as a transparent server. The main benefit of setting up a transparent proxy is that you do not have to set up individual browsers to work with the proxy.

My Setup:
i) System: HP dual Xeon CPU system with 8 GB RAM (good for Squid).
ii) Eth0: IP 192.168.1.1
iii) Eth1: IP 192.168.2.1 (192.168.2.0/24 network, around 150 Windows XP systems)
iv) OS: Red Hat Enterprise Linux 4.0 (the following instructions should work with Debian and all other Linux distros)
Eth0 is connected to the Internet and eth1 is connected to the local LAN, i.e. the system acts as a router.
Server Configuration

Step #1: Squid configuration so that it will act as a transparent proxy
Step #2: Iptables configuration
  a) Configure the system as a router
  b) Forward all http requests to port 3128 (DNAT)
Step #3: Run the script and start the squid service

First, the Squid server is installed (use up2date squid) and configured by adding the following directives to the file:
# vi /etc/squid/squid.conf
Modify or add the following squid directives:
httpd_accel_host virtual
httpd_accel_port 80
httpd_accel_with_proxy on
httpd_accel_uses_host_header on
acl lan src 192.168.1.1 192.168.2.0/24
http_access allow localhost
http_access allow lan

Where,
httpd_accel_host virtual: run Squid as an httpd accelerator
httpd_accel_port 80: 80 is the port you want to act as a proxy on
httpd_accel_with_proxy on: Squid acts as both a local httpd accelerator and as a proxy
httpd_accel_uses_host_header on: the Host header from the URL is used
acl lan src 192.168.1.1 192.168.2.0/24: access control list, only allow LAN computers to use squid
http_access allow localhost: allow Squid access for the localhost ACL
http_access allow lan: same as above, for the LAN ACL

Here is the complete listing of squid.conf for your reference (grep will remove all comments and sed will remove all empty lines, thanks to David Klein for the quick hint):
# grep -v "^#" /etc/squid/squid.conf | sed -e '/^$/d'
Or try sed alone (thanks to kotnik for the small sed trick):
# cat /etc/squid/squid.conf | sed '/ *#/d; /^ *$/d'
Output:
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY
hosts_file /etc/hosts
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern . 0 20% 4320
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl purge method PURGE
acl CONNECT method CONNECT
cache_mem 1024 MB
http_access allow manager localhost
http_access deny manager
http_access allow purge localhost
http_access deny purge
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
acl lan src 192.168.1.1 192.168.2.0/24
http_access allow localhost
http_access allow lan
http_access deny all
http_reply_access allow all
icp_access allow all
visible_hostname myclient.hostname.com
httpd_accel_host virtual
httpd_accel_port 80
httpd_accel_with_proxy on
httpd_accel_uses_host_header on
coredump_dir /var/spool/squid

Iptables configuration

Next, I added the following rules to forward all http requests (coming to port 80) to the Squid server port 3128:
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j DNAT --to 192.168.1.1:3128
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128
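Before moving on, it can help to confirm that the two PREROUTING rules actually landed in the NAT table; this listing command is standard iptables and is only a check, not part of the original recipe:

# iptables -t nat -L PREROUTING -n -v

You should see one DNAT rule for traffic arriving on eth1 and one REDIRECT rule for traffic arriving on eth0, both matching tcp dpt:80.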
Here is the complete shell script. The script first configures the Linux system as a router and then forwards all http requests to port 3128 (download the fw.proxy shell script):

#!/bin/sh
# squid server IP
SQUID_SERVER="192.168.1.1"
# Interface connected to Internet
INTERNET="eth0"
# Interface connected to LAN
LAN_IN="eth1"
# Squid port
SQUID_PORT="3128"
# DO NOT MODIFY BELOW
# Clean old firewall
iptables -F
iptables -X
iptables -t nat -F
iptables -t nat -X
iptables -t mangle -F
iptables -t mangle -X
# Load IPTABLES modules for NAT and IP conntrack support
modprobe ip_conntrack
modprobe ip_conntrack_ftp
# For win xp ftp client
#modprobe ip_nat_ftp
echo 1 > /proc/sys/net/ipv4/ip_forward
# Setting default filter policy
iptables -P INPUT DROP
iptables -P OUTPUT ACCEPT
# Unlimited access to loop back
iptables -A INPUT -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT
# Allow UDP, DNS and Passive FTP
iptables -A INPUT -i $INTERNET -m state --state ESTABLISHED,RELATED -j ACCEPT
# Set this system as a router for the rest of the LAN
iptables --table nat --append POSTROUTING --out-interface $INTERNET -j MASQUERADE
iptables --append FORWARD --in-interface $LAN_IN -j ACCEPT
# Unlimited access to LAN
iptables -A INPUT -i $LAN_IN -j ACCEPT
iptables -A OUTPUT -o $LAN_IN -j ACCEPT
# DNAT port 80 requests coming from LAN systems to squid 3128 ($SQUID_PORT) aka transparent proxy
iptables -t nat -A PREROUTING -i $LAN_IN -p tcp --dport 80 -j DNAT --to $SQUID_SERVER:$SQUID_PORT
# If it is the same system
iptables -t nat -A PREROUTING -i $INTERNET -p tcp --dport 80 -j REDIRECT --to-port $SQUID_PORT
# DROP everything else and log it
iptables -A INPUT -j LOG
iptables -A INPUT -j DROP

Save the shell script. Execute the script so that the system acts as a router and forwards the ports:
# chmod +x /etc/fw.proxy
# /etc/fw.proxy
# service iptables save
# chkconfig iptables on

Start or restart squid:
# /etc/init.d/squid restart
# chkconfig squid on

Desktop / Client computer configuration

Point all desktop clients to your eth1 IP address (192.168.2.1) as the router/gateway (use DHCP to distribute this information). You do not have to set up individual browsers to work with proxies.

How do I test that my squid proxy is working correctly? Watch the access log file /var/log/squid/access.log:
# tail -f /var/log/squid/access.log
The above command monitors all incoming requests as they are logged to the /var/log/squid/access_log file. Now if somebody accesses a website through a browser, squid will log the request.

Problems and solutions

(a) Windows XP FTP Client
All desktop client FTP session requests ended with the error: Illegal PORT command. I loaded the ip_nat_ftp kernel module; just type the following command, press Enter, and voila:
# modprobe ip_nat_ftp
Please note that this modprobe command is already included in the shell script above (commented out by default).

(b) Port 443 redirection
I had blocked all connection requests in our router settings except those from our proxy (192.168.1.1) server, so requests on all ports, including 443 (https/ssl), were denied. You cannot redirect port 443. From the Debian mailing list: "Long answer: SSL is specifically designed to prevent 'man in the middle' attacks, and setting up squid in such a way would be the same as such a 'man in the middle' attack. You might be able to successfully achieve this, but not without breaking the encryption and certification that is the point behind SSL."
Therefore, I quickly reopened port 443 (router firewall) for all my LAN computers and the problem was solved.
(c) Squid proxy authentication in transparent mode
You cannot use Squid authentication with a transparently intercepting proxy.

Further reading:
How do I use the iptables connection tracking feature?
How do I build a simple Linux firewall for a DSL/dial-up connection?
Update: Forum topic discussion: Setting up a transparent proxy with Squid peering to an ISP squid server
Squid, a user's guide
Squid FAQ
Transparent Proxy with Linux and Squid mini-HOWTO
Updated for accuracy.

Comments

1 Jay of Today May 27, 2006
you gotta be kidding, only 150 desktops and 8 gigs of RAM??????? I use to have p133 with 64megs with that setup way back then!!! bah, newschoolers SUCKS
Reply

2 LinuxTitli May 27, 2006
LOL :D 8GB gives you the best performance. Squid performance = more ram + fast SCSI disk. Cost of RAM: yet another reason or factor to have more ram. Even people started to use desktop systems with 1GiB :P
Reply

3 kotnik May 27, 2006
Use the following sed magic to remove both comments and empty lines at the same time: sed '/ *#/d; /^ *$/d'
Reply

4 LinuxTitli May 27, 2006
kotnik, Nice sed trick, no need to use grep :) Appreciate your post.
Reply

5 Aaron May 28, 2006
Hi, I have a similar setup, only one question: How do I block Yahoo and MSN messengers (block at the router or at the transparent proxy+iptables level)? Cheers, Aaron
Reply

6 LinuxTitli May 28, 2006
Aaron, My firewall policy @ router: Default firewall policy: close all doors and open only required windows. Block all incoming and outgoing requests. Open only required ports, i.e. 80 (from proxy only), 443, 21, 22, 25 etc. as per requirement. This configuration automatically blocks the rest of the stuff. You can implement a similar policy using Squid ACLs or iptables.
Reply

7 Scott May 29, 2006
Nice, quick, down and dirty article. :-) Aaron: http://www.mail-archive.com/[email protected]/msg38193.html will explain how to block Yahoo, MSN and other IM's. For anyone interested, I have thrown together a HOWTO on getting Squid to work properly in conjunction with Active Directory authentication. It can be found here: http://cryptoresync.com/2006/05/18/installing-squid-withactive-directory-authentication/ Enjoy!
Reply

8 Bill May 29, 2006
Aaron, My findings with chat networks like AIM is that, even if you block the specific ports used by the network (ie, 5190), the login server will accept connections on other ports that are common, such as 80, 25, 443, 23, etc. Your best bet for blocking chat traffic is to block the ports used by the network, as well as the IP addresses associated with the login servers, like login.oscar.aol.com.
Additionally, write your internal routing rules such that only traffic passing through your proxy can reach the Internet. Otherwise, users will be able to circumvent your proxy and use a public proxy. Reply 9 Desert Zarzamora May 29, 2006 Sometime ago, i wrote another how-to, but this time for a COMPLETELY transparent proxy. That is, a bridged proxy. That a bit more esoteric stuff, but very useful if you really can’t mess with your network topology. Have a look at: http://freshmeat.net/articles/view/1433/ Reply 10 Hans May 29, 2006 I would love to run into your office, replace your server with a Pentium 200 with 128mb of RAM… you probably wouldn’t notice the difference, if all you are using it is for squid. then I would actually make some good use of the machine. I’ve got a pentium 200 doing far more (proper proxy, apache server, svn, samba, etc etc) and handles it perfectly well ??? Reply 11 LinuxTitli May 29, 2006 @Desert Zarzamora and Scott, nice tutorial (thanks for links) @Hans, heh Well to be frank I am just admin and decision regarding h/w or infrastructure made by someone else … this is how things work in an enterprise IT division (they don’t care about money as they also make 310 more money from core business so they want world class stuff). However, I agree with you about h/w requirement can be low to run other services. @Bill, Good advice there. Appreciate all of yours post and feedback :) Reply 12 Steve May 30, 2006 just wondering do wew really need quid acting as an accelerator here? nice article, and what a beast of a proxy server i think everyone else is just jealous cos they only have p1’s Reply 13 ADHDPHP June 1, 2006 Thanks LinuxTitli!!! I really appreciate you sharing your knoledge with others! Keep up the great work! KMC Reply 14 ADHDPHP June 1, 2006 Also, LinuxTitli do you have any need to use dansguardian in conjuntion with squid for conent filtering? That would probably make good use of that RAM too! Thanks again! Reply 15 massage therapy products June 1, 2006 Well, I’ll be needing to set one of these up eventually, so you’re bookmarked. I wonder how performance would be if I set up a RAID system on USB drives… Reply 16 avanish June 1, 2006 how we can config the ftp service in squid proxy 311 reply avanish gupta india Reply 17 Vivek June 1, 2006 Avanish, Add following line to config file acl ftp proto FTP http_access allow ftp If clients compters are using IE browser then Goto > Tools > Advance > and Uncheck option that reads Enable folder view for FTP-Sites. FTP proxy only work through browser and it will not work at command line. Remember squid is not a real ftp proxy. Reply 18 nesargha June 2, 2006 thank you, i had little bit problems in running the script on redhat 9 , i had remove the $lan_in etc.. and type the actual values but at last i worked fine with me nesargha india Reply 19 Aaron P June 4, 2006 Using squid transparently, you lose the ability to authenticate users (bummer). While I can understand why (to a certain degree), is there a way to just get the username for logging purposes? It’s like I’m up a (little river) without a (rowing device). I need squid for logging user hits, but I can’t do it without transparent routing. And I can’t authenticate in transparent mode due to the accelerator. Any ideas? Awesome article. Thanks! AP Reply 312 20 Vivek June 4, 2006 @Aaron, Simple answer is you cannot do both things (transparent proxy + auth). The browser has no way of knowing it is using a proxy. So, what you can do is use automatic URL configuration (i.e. 
no transparent proxy) with WPAD. The information for WAPD and automatic URL configuration available at official Squid FAQ: http://www.squid-cache.org/Doc/FAQ/FAQ-5.html If you find any other way then let us know… Hope this helps. @nesargha, May be because of html formatting… I will upload script as a text file so that others can use it directly (but you still need to make changes to script) Reply 21 Martin Wallace June 17, 2006 I am just a newbie, but I think there’s an error in your configuration of iptables. The lines should read : iptables -t nat -A PREROUTING -i eth1 -p tcp -–dport 80 -j DNAT -–to 192.168.1.1:3128 iptables -t nat -A PREROUTING -i eth0 -p tcp –-dport 80 -j REDIRECT –to-port 3128 That is, you need –, not -, before to, to-port and dport. Correct me if I’m wrong. Martin Reply 22 Martin Wallace June 17, 2006 I see that the problem is with formatting. You need two dashes, not one, before to, to-port and dport, but they look like one (slightly longer) dasjh onm my screen. Try again: iptables -t nat -A PREROUTING -i eth1 -p tcp – –dport 80 -j DNAT – –to 192.168.1.1:3128 313 iptables -t nat -A PREROUTING -i eth0 -p tcp – –dport 80 -j REDIRECT – –to-port 3128 Reply 23 vivek June 17, 2006 Martin, I just checked the script. There is no problem. However, it looks like, HTML formatting breaks the script. Direct link to download script: http://www.cyberciti.biz/tips/wp-content/uploads/2006/06/fw.proxy.txt Hope this helps :) Reply 24 sohan July 12, 2006 i am using same rules given above , Can I block my users to use public proxy. Do i have to modify my squid.conf or Iptables Reply 25 nixcraft July 12, 2006 sohan, You just need to setup LAN ACL. If you are using above config then it only allows access from LAN. Reply 26 WebSean July 30, 2006 I am running Squid 2.5 on Macintosh OS X (10.3.7) with the handy “SquidMan” port for OS X / Darwin and it works great. The interface does allow me to make the httpd_accel_… modifications to the squid.conf file for transparent proxying, but how do I set-up the iptables step? My system uses ipfw instead and I have tried “sudo ipfw add 1000 fwd 127.0.0.1,8080 tcp from any to any 80″ only to see my port 80 malfunction. How can I configure the port 80 hijack/redirect function to get transparency working on OS X? Thanks in advance. Reply 27 Emre October 2, 2006 To not to see both empty lines and remarks grep can be used in this way; 314 grep -Ev “^$|^#” /etc/squid/squid.conf Reply 28 Praveen October 29, 2006 Hi, Is it possible to retain public Ip address, while using squid, All pc in my lan having public ip address. I want to use squid. But whenever i use transparent squid, the outgoing packet keeps squid server’s ip as source ip address. how can i use squid httpd_accel without proxy. Reply 29 nixcraft October 29, 2006 The whole point of using transparent proxy/NAT is to hide internal IP address. As long as you have squid in between internet and other boxes anyone will see your squid ip address Reply 30 karthick November 11, 2006 dear, cyberciti guys,thank you very very mush.because your web site is good food for linux hungry peoples. Contineue yours job with god’s blassings. By, Your’s S.Karthick Reply 31 Marlon November 15, 2006 Hi guys, I ask something about my firewall-squid-dhcp server in one box, i have eth0 for internet-connection and eth1 for local-connection…i want to do is, to be transparent proxy all clients connected at eth1 local-connection. 
Could you provide me the minimal config of iptables/squid.conf to make work as a transparent proxy my all-in-one linux box. i want the minimal config of iptables without filtering temporary. 315 Thanks! Reply 32 nixcraft November 15, 2006 Squid config remains the same. Only iptables will changes. Type following at command prompt to get started temporary: iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to 192.168.1.1:3128 iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-port 3128 Replace 192.168.1.1 with your actual Linux server IP address (local LAN IP) Reply 33 Jaimohan November 17, 2006 Dear friends, can i run the VPN-Checkpoint software with squid using transparent proxying, please reply asap Regrds Jai Reply 34 nixcraft November 17, 2006 Yes you can as long as everything is configured you should able to use VPN with any other internet service Reply 35 Mimbari November 24, 2006 For a “completely totally” transparent proxy, use http://www.balabit.com/downloads/tproxy/linux-2.6/ That way the client IP address will be used by the Squid, still caching etc too. Needs inbound routing of reply server traffic to be routed back through the Squid box though. It’s kernel & iptables patching only, yielding the tproxy iptables table.. In Valen’s Name. Reply 316 36 neddy November 27, 2006 Hi there, i have a few questions… 1) will this proxy things such as steam games / downloads, Microsoft updates, anti-virus updates and other things that do not run on port 80? 2) The proxy appears to work, and i have set my ip address to it, but if i download a 10mb file, then download the same file on another pc, the speeds are still slow, indicating that the proxy may not be working… when i run: “tail -f /var/log/squid/access.log” i get the log to screen & file, and it is showing that there is data being proxied, but everything still runs ’slow’ 3) I am running it on public ip addresses, one for the eth0 (internet) 203.16.209.x and the second ip address for the people using the proxy is eth1 (lan) 203.221.91.x the proxy all works, but could this be why it is running slow? - cheers Reply 37 nixcraft November 27, 2006 Neddy , Yes everything should work as long as remote site is using port 80 for downloading updates and patches. If you need to cache larger file you need to enable cache object size. Default is 4 MB. However it is not recommended to use such large cache object size until and unless you have monster cache server (normally ISP enables large cache object). You need to tune out your squid for this. The defaults are good to improve overall user experience. Proxy should work fast. Make sure you have correct DNS server setup. Try to use OpenDNS server http://opendns.com/ HTH. Reply 38 woodsturtle November 29, 2006 I am having trouble accessing an MS sharepoint server through squid 2.6 configured in transparent proxy mode. Everything that I have read so far suggest that I must bypass squid althogether because of the NTLM 317 authentication require to access share point. Is this the case? Also, what is the iptables statement which I should use before the DNAT statement? I am using wccp and have created a GRE tunnel on the squid box. Reply 39 Hernan November 29, 2006 Excelent guide, It work forme. Thanks. Now I{m working on acl that let a few machines acces msn. Reply 40 woodsturtle November 29, 2006 What guide are you referring to? 
Reply 41 ReMSiS December 12, 2006 Hello, Really the guide is wonderful and it worked 100% for me and even the clients using it are amazed with its speed. But there is one problem now !!! How can we access mail, i.e: Clients using outlook are not enabled to send and recieve mail because the ports is blocked or it is not able to make resolution to the mail server. How can I make the mail work too ? because now only http is working pop3 and smtp is not !!! how can I do that ? Regards, Reply 42 nixcraft December 12, 2006 I think your topic is already answered @ our forum. Reply 43 ReMSiS December 13, 2006 Yes nixcraft answered but still not working right, the script yesterday worked now its not !!! I maybe going crazy… Reply 44 sohan January 2, 2007 I have installed Squid-2.4 on Red Hat Linux enterprise 4 2 Public IPs are available from 2 different ISPs. 318 Now I want to configure Squid so as to apportion traffic among the IPs by destination (external) IP and by source (internal) IP. The aim is to give complete bandwidth available from one ISP to one set of users for thier access to specific URLs. Is there any way to do the same in Squid ? Reply 45 sohan January 2, 2007 Hi All I want to put quota limit on Squid for users. I want to limit users for specific data limit like If i want to allow users to consume on 4 GB Data through Squid then what i need to do. Is there any additional tool for squid to do this or squid can do this also ? If anybody have solution for this please let me know. thanks Reply 46 Raghuram January 31, 2007 Hi, Nice tut. Just what I wanted for an education facility of 45 machines. Have a 2Mbps ADSL connection which I want to share across the LAN. This is my first time with squid. One doubt – my lan ip (eth1) is DHCP driven while eth0 (internet facing) has a static IP. In this case, will squid work? thanks. Reply 47 raghu January 31, 2007 will squid work with DHCP aasigned eth0 and static Ip eth1? Nie tuttorial.thanks Reply 48 nixcraft February 1, 2007 raghu, You can use Squid with DHCP assigned IP Reply 319 49 Marco A. Barragan February 7, 2007 All this not work for 2.6, in the case of using: http_port x.x.x.x:xx vhost transparent or any combination, the message is “Can’t use transparent and cache in the same port”, if you try to use the cache_peer command, appear an error FATAL: Bundle in line x: cache_peer … So, now you can’t use the server for caching and proxy at the same time :S Reply 50 nixcraft February 7, 2007 #1: You cannot set proxy and transparent http on same port. @2: There is some discussion going on about cache peering @ our forum. HTH Reply 51 Clay February 8, 2007 I’m trying to setup squid transparently on a box that has one network interface, but is plugged into a hub between the Internet connection and the switch that the clients are on. (I realize this is not ideal, but it’s what I have to work with.) Can anyone point me in the right direction? Reply 52 rakesh February 9, 2007 sir well i have one problem, i am one system with two ether lan card one connected to Public ip and another with local network. what i want is if any exterbal client send an request on port 80, that request should be redirect to my local DNS. how can it be possible. another thing i have two domain mydomain.com (local) and another http://www.com (internet). now if any client request to http://www.com it request should be redirect to mydomain.com. 
can it be possible, if possible plz send me the solution Reply 53 raghu February 11, 2007 320 Hi vivek, Can squid be set up on a machine different from the internet gateway machine? I have a DHCP (FC5) server on which I want to set up squid. My internet gateway (ADSL) machine runs Windows Xp and I don’t want to disturb it. Thanks. Reply 54 Marco A. Barragan February 17, 2007 But how i can configure it? any idea? how to activate the cache for my network? any can help me to make the right stuff? I’m redirecting the port 80 to 3128 with iptables (old style squid) and using this: http_port 10.42.0.1:3128 transparent half_closed_clients on visible_hostname 201.234.228.139 coredump_dir /var/spool/squid Where 10.42.0.1 is the network interface (eth0) conected to lan, and eth1 is the Wan lan. I want make the cahce for my users with squid, and also using proxy, but i can’t go to every client to configure proxy setting, need transparent, and cache, i try all, i use this: http_port 10.42.0.1:3128 transparent cache_peer 127.0.0.1 parent 3128 3130 originserver half_closed_clients on visible_hostname 201.234.228.139 coredump_dir /var/spool/squid Not work, use all “arrows” that i imagine and noting, can any explain me how to do it? Really thanks a lot for any help. Reply 55 Siva February 19, 2007 how to control my bandwidth using squid proxy Reply 56 Marco A. Barragan February 21, 2007 321 for bandwidth you can use this: first step configure how many delay pools you going to use, for example if you have 2 types of users (one with big badwidth and others with low bandwidth) you need put this: delay_pools n, in our exaple: delay_pools 2 then you need define the class of bandwidth, there are 3 types, 1, 2, 3, in our example we use the class 1 and 2, for unlimited general and the restricted: delay_class 1 1 delay_class 2 2 then use the parameter to define the velocity, remember, if you want 128 kbps, you need multiply it for 128 to convert to bps: delay_parameters 1 -1/-1 delay_parameters 2 -1/-1 16384/57600 -1 means unlimited second is for 128 and boost of 450 last step is defining the acl, in my case: acl localhost src 127.0.0.1/255.255.255.255 acl clientes src 10.42.100.0/255.255.255.0 acl limitados src 10.42.99.0/255.255.255.0 delay_access 1 allow clientes localhost !limitados delay_access 2 allow limitados delay_access 1 deny all delay_access 2 deny all Dunno if is correct but is an example, you can investigate more. Reply 57 bitou February 26, 2007 This fw.proxy is to be started every time the computer is started, manually. Then only transparent proxy will work.Is there a method to do it automatically , so that the script is executed on start up even without the need of the user to log in. Regards Reply 58 nixcraft February 26, 2007 322 bitou, If you are using RedHat/CentOS/FC Linux type: service iptables save chkconfig iptables on If you are using Debian/Ubuntu Linux read this Reply 59 Coders2020 March 7, 2007 In the past I had serious problems with configuring squid on my local network. I am alrady under university firewall/proxy. Can I configure proxy under proxy(I know it has no pracktical use but just asking for testing purpose) ? Reply 60 Prabir Das March 19, 2007 its good education packeg to us Reply 61 Prashant Soni March 20, 2007 Hi, My name is Prashant. I am Sr.Network Engineer in an ISP. I would like to put a transparent proxy with bridge between our local networks and Internet. I’d tryinn to configure squid transparent proxy with bridge couple of times, but yet not successful. 
I am explaining the scenario and hope somebody will help me. SCENARIO : We have 2 ip pools in our networks. 1. 128.0.0.0/18 (fake ip) 2. 59.x.x.96/27 (real ip) 3. 59.x.x.0/27 (Real IP Used in internetwork) We have one mikrotik master router from which both network goes to the radware(which is load balancer and using internetwork ip listed in a cisco). Now I want to put squid between mikrotik and radware (loadbalancer) 323 In my network nobody uses authentications so not needed. When, I configured the squid with trasparent proxy in bridge mod, sometimes it gives me acl errors. But when I changed in squid.conf “access_allow all” , no error comes but page is not loading till done. With this settings I can ping , traceroute to the internet from client addresses also but page is not loading. I’ve done all configuration as stated in below link : http://freshmeat.net/articles/view/1433/ Please guide me regarding this matter. Regards, Prashant Reply 62 Nandkishor March 27, 2007 Hi, I have configured the DHCP server using ES Linux-4 .It having 2 ethernet cards. eth0 is used dhcp (Lan) & eth1 is connceted to Internet. eth0 using IP 192.x.x.x Netmask 255.255.255.0 Gateway 59.x.x.x (this is IP of eth1) eth1 using Ip 59.x.x.x Netmask 255.255.255.240 Gateway 59.x.x.129 Client M/c’s ping to IP of eth0, also ping to gateway of eth0 & ip of eth1. But not able to ping Gateway of eth1-59.x.x.129 so they are not able to connect to the internet. So plz give me the solution for this. Reply 63 Nandkishor March 30, 2007 Hi, I have configured the transperant proxy with dhcp server. How I block the files for downloading like *.dll & *.mp3 &*.mp4 etc. for a specific time. Reply 64 nixcraft March 30, 2007 Nandkishor, 324 Please see this article Reply 65 xaviero March 30, 2007 how about if i use another PC for router & gateway, then use another PC (SLES installed) just for transparent proxy (DMZ). the proxy already worked, but its not transparent. what should i do with the iptable ? advice plz Reply 66 Nandkishor April 3, 2007 Hi, I have configured the many virtual hosts at one server and added same big file in that all virtual hosts. But because of this big file more size is required. So it is posible to me create one folder on that server, put that file & give the path of that folder in the all virtual hosts. But How it is possible? Plz give me the solution for this. Reply 67 Nandkishor April 3, 2007 Hi, I have see the article for blocking of the .dll, .mp3 ,mp4, .exe & many files downloades, & do the configuration. But this is not working to block the files downloading. Plz give me the solution for this. Reply 68 Gurpinder Singh April 7, 2007 hello everybody i want to configure a squid server on fedora core 5. i want to that range of ip address is 192.168.1.1 – 192.168.1.60, and 192.168.1.101192.168.1.160 . internet is running on this client machines. not running internet on others ip address i.e 192.168.1.61 – 192.168.1.100. please urgent reply me on my mail address. Gurpinder Singh 325 Reply 69 Alex Ling April 10, 2007 Hi all i would like to know how to forward HTTP request to others proxy (like privoxy). Thanks. Reply 70 mark April 26, 2007 Good day. I’m currently running squid 2.5 on my centOS server… I needed authentication for my users before accessing the internet (80, 21, 443, etc) so I configured it correspondingly. However, one of my clients needs to access an ftp server which enforces a username and password authentication. 
Squid tries to connect using an anonymous user rather than prompting for a password… My question being: How could I enable user authentication to public ftp servers if my machine is behind a squid proxy server? I’d appreciate your best effort. Thanks in advance. Reply 71 pankaj chauhan April 28, 2007 hello every body, i have a squid proxy server my server ip is 192.168.0.1 my client ip is 192.168.0.2 to 192.168.0.240 internet is working proper on client can it possible that first 30 client (192.168.0.2-192.168.0.30) get more bandwith than rest client plz told me wat change will do on squid.conf file for it. Reply 72 Tapan May 3, 2007 how to prevent bypassing sarg and dansguardian Reply 73 tushar May 9, 2007 326 Hi All My name is tushar and i want to make proejct on squid proxy server, because I want to submit the complet project on squid proxy server. Thanks. Tushar Raut Reply 74 Frank May 10, 2007 Is there any indication to use some sort of virus/malware filter in this setup, aka, HAVP – HTTP. http://www.server-side.de/ Cheers! Frank Reply 75 chandrakant May 24, 2007 Hi Thanks for the fw.proxy file. after enableing this file i’m able to run my system as router and proxy server. But after restart server I’m reciveing so many logs messages. Please have look and tel me how can block them. Due to this my server responding slovely… System log:May 24 12:45:06 pune dbus: Can’t send to audit system: USER_AVC pid=2658 uid=81 loginuid=-1 message=avc: denied { send_msg } for scontext=root:system_r:unconfined_t tcontext=user_u:system_r:initrc_t tclass=dbus May 24 11:28:21 pune kernel: IN=eth0 OUT= MAC=ff:ff:ff:ff:ff:ff:00:14:85:64:d7:3e:08:00 SRC=10.20.204.70 DST=10.20.255.255 LEN=78 TOS=0×00 PREC=0×00 TTL=128 ID=29613 PROTO=UDP SPT=137 DPT=137 LEN=58 May 24 11:28:22 pune kernel: IN=eth0 OUT= MAC=ff:ff:ff:ff:ff:ff:00:14:85:64:d7:3e:08:00 SRC=10.20.204.70 DST=10.20.255.255 LEN=78 TOS=0×00 PREC=0×00 TTL=128 ID=29615 PROTO=UDP SPT=137 DPT=137 LEN=58 May 24 11:28:23 pune kernel: IN=eth0 OUT= MAC=ff:ff:ff:ff:ff:ff:00:14:85:64:d7:3e:08:00 SRC=10.20.204.70 327 DST=10.20.255.255 LEN=78 TOS=0×00 PREC=0×00 TTL=128 ID=29616 PROTO=UDP SPT=137 DPT=137 LEN=58 Regards, Chandrakant Reply 76 csbot May 24, 2007 chandrakant, Remove last line: iptables -A INPUT -j LOG BTW, log will not slow down your server. Reply 77 cedric May 27, 2007 your instructions work good but i can’t connect to my network printer and another server on my lan. also having problem setting up static ip for eth0. i followed the instruction from the link you gave. i tried to do it several times and always had to go back to using dhcp. i need some help and what gateway would i use for eth0? Reply 78 Chandrakant May 31, 2007 Hi, One more problem i am facing with above configuration. I am not able to use web access of exchange 2003 server. and office scan http url can any buddy help me resolve this. Chandrakant Reply 79 bhupesh karankar June 1, 2007 Hello Friend, i am bhupesh karankar, i have problem in squid. as above, i have implement squid in my server. but still my client not able to access mail via outlook with squid. wating for ur reply 328 i have same configuration as above. wating for ur reply, need help Bhupesh Karankar [email protected] 0998110488 Reply 80 Brent June 1, 2007 Thanks for posting the transparent proxy script. It works very well. I like the way you choose to close everything and only open what you need. I do need to open a few ports, like https (443) and possibly one or two more (ssh). Can you post how you would do this? Thanks. 
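Regarding the syslog flood described a few comments above: instead of deleting the final LOG line from the fw.proxy script, it can be rate-limited with the iptables limit match so that dropped packets are still sampled in the logs. A minimal sketch, replacing only the last two lines of the script (the 5/minute figure and the FW-DROP prefix are arbitrary examples):

iptables -A INPUT -m limit --limit 5/minute --limit-burst 10 -j LOG --log-prefix "FW-DROP: "
iptables -A INPUT -j DROP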
Reply

81 vivek June 1, 2007
Find the line # DROP everything and Log it and add your iptables rules before that line. Remember you must deal with both eth0 and eth1, otherwise you will create a new security issue.
Reply

82 bhupesh karankar June 2, 2007
hello, this is a nice script, but when I use it, it blocks SMB, Squid and my web server. What to do? Waiting for reply.
[email protected] bhupesh karankar
Reply

83 vivek June 2, 2007
bhupesh, open those ports using iptables rules, as this script locks down everything. Read my comment # 82. If you have more questions please post to our forum.
Reply

84 Maroon Ibrahim June 11, 2007
Prashant!!! Allow access for ICP.
Regards
Reply

85 Nandkishor June 11, 2007
Hi, I configured the transparent proxy & also set the iptables rules. This is working fine, but recently I ran into trouble. If I try to open any site like gmail.com, sometimes it works but sometimes it gives the following error:
The requested URL could not be retrieved
While trying to retrieve the URL: http://gmail.com/
The following error was encountered: Unable to determine IP address from host name for gmail.com
The dnsserver returned: Refused: The name server refuses to perform the specified operation.
This means that: The cache was not able to resolve the hostname presented in the URL. Check if the address is correct.
Your cache administrator is root.
Please give me the solution for this.
Regards, Nandkishor
Reply

86 Linuxnewbie June 11, 2007
Hi, I need to install a transparent proxy with Squid caching, but my eth0 is connected using DHCP, so what changes need to be done? Thank you for publishing your experiences and configurations…
Regards
Reply

87 vivek June 11, 2007
Hi Linuxnewbie, make sure eth0 always gets the same IP address. If that is not possible, modify the script to obtain the IP address using the following statement:
ifconfig eth0 | grep 'inet addr:' | cut -d':' -f2 | awk '{ print $1}'
Set SQUID_SERVER as follows:
SQUID_SERVER=$(ifconfig eth0 | grep 'inet addr:' | cut -d':' -f2 | awk '{ print $1}')
NOTE: you only need the above if the SQUID_SERVER IP is dynamic; otherwise it should work out of the box. HTH
Reply

88 linxnewbie June 12, 2007
Thanks for the reply… so no need to make any changes in the iptables rules, right?
Reply

89 chandar June 25, 2007
Hi Vivek, I configured squid/2.6.STABLE12 with the help of your script file. Below is my network scenario:
client -> Squid + Router -> PIX -> Router -> Internet
In this case everything works fine for a few minutes. After some time the client is not able to ping the gateway (my Squid server), but it is able to ping the next hop IP, i.e. the PIX IP or router IP. The problem is resolved when I restart the network service on the Linux machine, and it happens every time. Please find below the Linux machine iptables snap.
# squid server IP
SQUID_SERVER="10.30.200.1"
# Interface connected to Internet
INTERNET="eth0"
# Interface connected to LAN
LAN_IN="eth1"
# Squid port
SQUID_PORT="3128"
# DO NOT MODIFY BELOW
# Clean old firewall
iptables -F
iptables -X
iptables -t nat -F
iptables -t nat -X
iptables -t mangle -F
iptables -t mangle -X
# Load IPTABLES modules for NAT and IP conntrack support
modprobe ip_conntrack
modprobe ip_conntrack_ftp
# For win xp ftp client
modprobe ip_nat_ftp
echo 1 > /proc/sys/net/ipv4/ip_forward
# Setting default filter policy
iptables -P INPUT DROP
iptables -P OUTPUT ACCEPT
# Unlimited access to loop back
iptables -A INPUT -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT
# Allow UDP, DNS and Passive FTP
iptables -A INPUT -i $INTERNET -m state --state ESTABLISHED,RELATED -j ACCEPT
# set this system as a router for Rest of LAN
iptables --table nat --append POSTROUTING --out-interface $INTERNET -j MASQUERADE
iptables --append FORWARD --in-interface $LAN_IN -j ACCEPT
# unlimited access to LAN
iptables -A INPUT -i $LAN_IN -j ACCEPT
iptables -A OUTPUT -o $LAN_IN -j ACCEPT
# DNAT port 80 requests coming from LAN systems to squid 3128 ($SQUID_PORT) aka transparent proxy
iptables -t nat -A PREROUTING -i $LAN_IN -p tcp --dport 80 -j DNAT --to $SQUID_SERVER:$SQUID_PORT
# if it is same system
iptables -t nat -A PREROUTING -i $INTERNET -p tcp --dport 80 -j REDIRECT --to-port $SQUID_PORT
# DROP everything and Log it
iptables -A INPUT -j LOG
iptables -A INPUT -j DROP
Reply

90 chandar June 25, 2007
Hi Vivek, I configured squid/2.6.STABLE12 with the help of your script file. Below is my network scenario:
client -> Squid + Router -> PIX -> Router -> Internet
In this case everything works fine for a few minutes. After some time the client is not able to ping the gateway (my Squid server), but it is able to ping the next hop IP, i.e. the PIX IP or router IP. The problem is resolved when I restart the network service on the Linux machine, and it happens every time. Please help me to resolve this issue.
Regards, Chandru
Reply

91 shellyacs June 27, 2007
Need help. I have read the forum on transparent proxy and I have followed it to the letter, but I cannot get it to work. I am using SUSE Linux 10.2. I can get to the Internet from the workstations, but only if I set up the Squid server as a proxy in IE. Any help would be greatly appreciated. Thanks
Reply

92 Amrendra July 6, 2007
I have used the above kind of firewall (iptables). I don’t want to use a transparent proxy because we need to use authentication, and if I allow forwarding and unlimited access to the LAN then users are also able to bypass the proxy to reach the Internet. Can anyone give me a solution so that for accessing websites (http/https) people must go through the proxy and its authentication, while everything else (FTP, DNS, and so on) is allowed directly from the LAN?
Thanks Amrendra.
Reply

93 forweb July 9, 2007
I got some errors when I used the instructions above (a 400, something like “the syntax of the request was wrong”)… The script above works great, but this is what I had to add to get it to work in my Ubuntu 7.04 squid.conf:
http_port 80
http_port 192.168.1.9:3128 transparent (this is the NIC connected to the Internet)
acl jamal_net src 192.168.2.0/24 (this is the LAN NIC)
http_access allow jamal_net
http_access allow localhost
Change the IPs to match your setup. Start squid, start your fw.proxy script, and add it to rc.local so it runs at boot.
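Following up on the earlier advice about inserting rules before the "# DROP everything and Log it" line: the script’s INPUT policy is DROP, so services running on the proxy box itself (SSH for remote administration, an HTTPS site, and so on) must be opened explicitly, while traffic that is merely forwarded for LAN clients is already handled by the FORWARD/MASQUERADE rules. A minimal, illustrative sketch only; the ports chosen (22 and 443) follow the question above and should be adjusted to whatever actually runs on the box:

# allow SSH and HTTPS connections to this box itself;
# these lines must appear before the final LOG/DROP lines of fw.proxy
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j ACCEPT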
Reply 94 oj July 16, 2007 Execellent write-up.Very helpful to me Reply 95 Slavko July 26, 2007 From SquidFaq For Squid-2.6 and Squid-3.0 you simply need to add the keyword transparent on the http_port that your proxy will receive the redirected requests on as the above directives are not necessary and in fact have been removed in those releases: http_port 3128 transparent Reply 96 eq1425 July 29, 2007 334 hi all, will this shel script work even if i install a redirector program(i.e squidguard)on squid?and how?? thanks Reply 97 John August 5, 2007 I work in a public library and we provide wireless access to our patrons. No configuration is required on their laptops because transparent proxying is in effect, via a rule in SUSE Firewall. I’m using SUSE 10.2, SQUID, Dansguardian, and the SUSE2 Firewall. Is it possible with my existing setup to also forward users to a custom home page that I have set up? This page will have our wireless policy, etc. on it. If so, how exactly would this be done? Thanks! Reply 98 ankush August 7, 2007 how configure best squid server on RHEL 5 i have create in RHEL 4 but i have problem about RHEL 5 Reply 99 Mani August 8, 2007 Hi, when i execute squid -z.the following error is appear. FATAL: Could not determine fully qualified hostname. Please set ‘visible_hostname’ Squid Cache (Version 2.6.STABLE13): Terminated abnormally. CPU Usage: 0.004 seconds = 0.004 user + 0.000 sys Maximum Resident Size: 0 KB Page faults with physical i/o: 0 Aborted but i configure visible_hostname myhostname in my squid.conf file.still the same error comming again.what can i do? 335 Reply 100 IRFAN August 13, 2007 any one have squid configaration than can use any where Reply 101 Mark Ng August 15, 2007 I have a box running public IP on eth0 and private IP on eth1. Everything seems to be working but my sites running apache can’t be accessed via their Public IP anymore. However I can still access them via eth1. Any help is appreciated. Reply 102 Abdul Latif August 17, 2007 Sir, is there any solution regarding linux Squid Proxy which responsible to handle two ADSL internet connection. combining bandwidth, Provide loadsharing, feed back if one connection goes down. Reply 103 Elliott August 20, 2007 Thanks for your excellent site. I have followed your guide and set this up successfully. I will recommend this guide to anyone setting up a squid server. Elliott Systems Administrator Reply 104 Chris August 26, 2007 What about setting this up using the latest version of Squid? Fedora 6 comes with squid but the parameters mentioned above are not there. They have been updated. Any help? Reply 105 Chris August 26, 2007 DUH, i see the post explaining it. Disregard my last post 336 Reply 106 vijay August 30, 2007 I like to know how to configure ftp and proxy for my internal use and external( internet) ftp with proxy. Please help Reply 107 king of the internet September 18, 2007 You said allowing port 443 out solves your problems, but in fact it creates more. Now users can simply use SSL-based web proxies to tunnel past your proxy. This means no logging, control, nothing. For example, try https://vtunnel.com/ Reply 108 vivek September 19, 2007 King, You cannot redirect port 443 with a transparent proxy and this the only solution. Other option is disable a transparent proxy and use port such as 3128. HTH Reply 109 Saji Alexander October 22, 2007 Hi, I had gone thru your notes. It is very good and interesting. I have 2 network cards in my squid proxy server on centos. 
I need all the users to access only certain sites during the office hours and after office hours they can access anysites as they wish. This should not be applicable for managers who can access anysite at anytime. This I made it but when I configured squid I had given the port 8080 instead of 3128 the default port. The end users if the remove the proxy (ip of squid server) then they can access any site during the office hours. How to disable this ???? Something to do with firewall. I tried but I failed. I am pasting it can you correct it. 337 $IPTABLES -t nat -A PREROUTING -i $INTIF -p tcp –dport 80 -j DNAT –to $SQUID_SERVER:$SQUID_PORT squid_server has two network card. One is having internal ip and the other external ip. I had give external ip for SQUID_SERVER. SQUID_PORT is 8080 Thanks and Regards, Saji Alexander. Reply 110 Wolfox October 25, 2007 Anyone knows how to get this instructions working on SuSe 9 Enterprise Edition…. It looks like some of the syntax doesn’t work. Because in my case I cannot get it to work. Please help, I’m a newbie that is very eager to learn about proxying. Please Help… Thanks in advance Reply 111 hanz October 25, 2007 I have read your instruction but I have the same question as Saji ALexander. I have been trying to figure this out but failed. Is it possible to force all browser on a server running transparent proxy to use its proxy service for its web traffic? The server has dual interface. Thanks hanz Reply 112 vivek October 25, 2007 @Saji, You have to define TIME based ACL for squid to put time based restrictions. @hanz, yup, this config force all http traffic via squid. Reply 338 113 harish November 24, 2007 Hi Dear, Thanks or very simple steps. Harish Reply 114 fmstereo November 28, 2007 I have configured the transparent proxy but not all users are able to use it. Most of them must have the proxy in their browsers, just a few are able to conect without having to configure. And is very slow with transparent proxy. Any sugestions? Reply 115 Babu Ram Dawadi December 12, 2007 thanks for ur three steps to create transparent proxy but i am not sure it works with squid 2.6 stables 13. because i tried ur step on this squid 2.6. may be this article suit to squid 2.5. :) hi fmstereo>>i think u have to enable one options on ur proxy which is previously off like the following httpd_accel_no_pmtu_disc off change it to httpd_accel_no_pmtu_disc on Reply 116 Atman December 12, 2007 Why not use only one utility to filter out comments and empty lines when going through squid.conf: grep -v ^# /etc/squid/squid.conf | grep -v ^$ or if you prefer sed: sed ‘/ *#/d; /^ *$/d’ < /etc/squid/squid.conf Reply 117 arun December 13, 2007 give me a step of linux centos proxy setting and iptables confige and many more service starting 339 Reply 118 Vijay Godiyal December 20, 2007 Hello Friends, Need help from you… I had configured my squid server, squid+dansguardian with Linux RHCL4 .. its working for a hrs abustaly fine but abt 1 hrs its getting slow and get stoped work .. i m not able to understand the problem. normail proxy is working fine… but when it get started with dansguardian then problenm comes…. can someone help me out on this i have squid version squid2.5.STABLE6-3.4E.11 and dansG is dansguardian-2.8.0.6-1.2.el4.rf following is the conf file … dansguardian…. 
################################################# DansGuardian config file for version 2.8.0 # **NOTE** as of version 2.7.5 most of the list files are now in dansguardianf1.conf # Web Access Denied Reporting (does not affect logging) # # -1 = log, but do not block – Stealth mode # 0 = just say ‘Access Denied’ # 1 = report why but not what denied phrase # 2 = report fully # 3 = use HTML template file (accessdeniedaddress ignored) – recommended # reportinglevel = 3 # Language dir where languages are stored for internationalisation. # The HTML template within this dir is only used when reportinglevel # is set to 3. When used, DansGuardian will display the HTML file instead of # using the perl cgi script. This option is faster, cleaner # and easier to customise the access denied page. # The language file is used no matter what setting however. # languagedir = ‘/etc/dansguardian/languages’ 340 # language to use from languagedir. language = ‘ukenglish’ # Logging Settings # 0 = none 1 = just denied 2 = all text based 3 = all requests loglevel = 2 # Log Exception Hits # Log if an exception (user, ip, URL, phrase) is matched and so # the page gets let through. Can be useful for diagnosing # why a site gets through the filter. on | off logexceptionhits = on # Log File Format # 1 = DansGuardian format 2 = CSV-style format # 3 = Squid Log File Format 4 = Tab delimited logfileformat = 1 # Log file location # # Defines the log directory and filename. #loglocation = ‘/var/log/dansguardian/access.log’ # Network Settings # # the IP that DansGuardian listens on. If left blank DansGuardian will # listen on all IPs. That would include all NICs, loopback, modem, etc. # Normally you would have your firewall protecting this, but if you want # you can limit it to only 1 IP. Yes only one. filterip = # the port that DansGuardian listens to. filterport = 3128 # the ip of the proxy (default is the loopback – i.e. this server) proxyip = 172.16.24.12 # the port DansGuardian connects to proxy on proxyport = 8080 # accessdeniedaddress is the address of your web server to which the cgi # dansguardian reporting script was copied # Do NOT change from the default if you are not using the cgi. # accessdeniedaddress = ‘http://YOURSERVER.YOURDOMAIN/cgibin/dansguardian.pl’ 341 # Non standard delimiter (only used with accessdeniedaddress) # Default is enabled but to go back to the original standard mode dissable it. nonstandarddelimiter = on # Banned image replacement # Images that are banned due to domain/url/etc reasons including those # in the adverts blacklists can be replaced by an image. This will, # for example, hide images from advert sites and remove broken image # icons from banned domains. # 0 = off # 1 = on (default) usecustombannedimage = 1 filtergroupslist = ‘/etc/dansguardian/filtergroupslist’ # Authentication files location bannediplist = ‘/etc/dansguardian/bannediplist’ exceptioniplist = ‘/etc/dansguardian/exceptioniplist’ banneduserlist = ‘/etc/dansguardian/banneduserlist’ exceptionuserlist = ‘/etc/dansguardian/exceptionuserlist’ # Show weighted phrases found # If enabled then the phrases found that made up the total which excedes # the naughtyness limit will be logged and, if the reporting level is # high enough, reported. on | off showweightedfound = on # Weighted phrase mode # There are 3 possible modes of operation: # 0 = off = do not use the weighted phrase feature. # 1 = on, normal = normal weighted phrase operation. # 2 = on, singular = each weighted phrase found only counts once on a page. 
# weightedphrasemode = 2 # Positive result caching for text URLs # Caches good pages so they don’t need to be scanned again # 0 = off (recommended for ISPs with users with disimilar browsing) # 1000 = recommended for most users # 5000 = suggested max upper limit urlcachenumber = 5000 # # Age before they are stale and should be ignored in seconds 342 # 0 = never # 900 = recommended = 15 mins urlcacheage = 9000 # Smart and Raw phrase content filtering options # Smart is where the multiple spaces and HTML are removed before phrase filtering # Raw is where the raw HTML including meta tags are phrase filtered # CPU usage can be effectively halved by using setting 0 or 1 # 0 = raw only # 1 = smart only # 2 = both (default) phrasefiltermode = 2 # Lower casing options # When a document is scanned the uppercase letters are converted to lower case # in order to compare them with the phrases. However this can break Big5 and # other 16-bit texts. If needed preserve the case. As of version 2.7.0 accented # characters are supported. # 0 = force lower case (default) # 1 = do not change case preservecase = 0 # Hex decoding options # When a document is scanned it can optionally convert %XX to chars. # If you find documents are getting past the phrase filtering due to encoding # then enable. However this can break Big5 and other 16-bit texts. # 0 = disabled (default) # 1 = enabled hexdecodecontent = 0 # Force Quick Search rather than DFA search algorithm # The current DFA implementation is not totally 16-bit character compatible # but is used by default as it handles large phrase lists much faster. # If you wish to use a large number of 16-bit character phrases then # enable this option. # 0 = off (default) # 1 = on (Big5 compatible) 343 forcequicksearch = 0 # Reverse lookups for banned site and URLs. # If set to on, DansGuardian will look up the forward DNS for an IP URL # address and search for both in the banned site and URL lists. This would # prevent a user from simply entering the IP for a banned address. # It will reduce searching speed somewhat so unless you have a local caching # DNS server, leave it off and use the Blanket IP Block option in the # bannedsitelist file instead. reverseaddresslookups = off # Reverse lookups for banned and exception IP lists. # If set to on, DansGuardian will look up the forward DNS for the IP # of the connecting computer. This means you can put in hostnames in # the exceptioniplist and bannediplist. # It will reduce searching speed somewhat so unless you have a local DNS server, # leave it off. reverseclientiplookups = off # Build bannedsitelist and bannedurllist cache files. # This will compare the date stamp of the list file with the date stamp of # the cache file and will recreate as needed. # If a bsl or bul .processed file exists, then that will be used instead. # It will increase process start speed by 300%. On slow computers this will # be significant. Fast computers do not need this option. on | off createlistcachefiles = on # POST protection (web upload and forms) # does not block forms without any file upload, i.e. this is just for # blocking or limiting uploads # measured in kibibytes after MIME encoding and header bumph # use 0 for a complete block # use higher (e.g. 512 = 512Kbytes) for limiting # use -1 for no blocking #maxuploadsize = 512 #maxuploadsize = 0 maxuploadsize = -1 # Max content filter page size # Sometimes web servers label binary files as text which can be very # large which causes a huge drain on memory and cpu resources. 
344 # To counter this, you can limit the size of the document to be # filtered and get it to just pass it straight through. # This setting also applies to content regular expression modification. # The size is in Kibibytes – eg 2048 = 2Mb # use 0 for no limit maxcontentfiltersize = 256 # Username identification methods (used in logging) # You can have as many methods as you want and not just one. The first one # will be used then if no username is found, the next will be used. # * proxyauth is for when basic proxy authentication is used (no good for # transparent proxying). # * ntlm is for when the proxy supports the MS NTLM authentication # protocol. (Only works with IE5.5 sp1 and later). **NOT IMPLEMENTED** # * ident is for when the others don’t work. It will contact the computer # that the connection came from and try to connect to an identd server # and query it for the user owner of the connection. usernameidmethodproxyauth = on usernameidmethodntlm = off # **NOT IMPLEMENTED** usernameidmethodident = off # Preemptive banning – this means that if you have proxy auth enabled and a user accesses # a site banned by URL for example they will be denied straight away without a request # for their user and pass. This has the effect of requiring the user to visit a clean # site first before it knows who they are and thus maybe an admin user. # This is how DansGuardian has always worked but in some situations it is less than # ideal. So you can optionally disable it. Default is on. # As a side effect disabling this makes AD image replacement work better as the mime # type is know. preemptivebanning = on # Misc settings # if on it adds an X-Forwarded-For: to the HTTP request # header. This may help solve some problem sites that need to know the 345 # source ip. on | off forwardedfor = off # if on it uses the X-Forwarded-For: to determine the client # IP. This is for when you have squid between the clients and DansGuardian. # Warning – headers are easily spoofed. on | off usexforwardedfor = off # if on it logs some debug info regarding fork()ing and accept()ing which # can usually be ignored. These are logged by syslog. It is safe to leave # it on or off logconnectionhandlingerrors = on # Fork pool options # sets the maximum number of processes to sporn to handle the incomming # connections. Max value usually 250 depending on OS. # On large sites you might want to try 180. maxchildren = 120 # sets the minimum number of processes to sporn to handle the incomming connections. # On large sites you might want to try 32. minchildren = 8 # sets the minimum number of processes to be kept ready to handle connections. # On large sites you might want to try 8. minsparechildren = 4 # sets the minimum number of processes to sporn when it runs out # On large sites you might want to try 10. preforkchildren = 6 # sets the maximum number of processes to have doing nothing. # When this many are spare it will cull some of them. # On large sites you might want to try 64. maxsparechildren = 32 # sets the maximum age of a child process before it croaks it. # This is the number of connections they handle before exiting. # On large sites you might want to try 10000. maxagechildren = 500 # Process options 346 # (Change these only if you really know what you are doing). # These options allow you to run multiple instances of DansGuardian on a single machine. # Remember to edit the log file path above also if that is your intention. 
# IPC filename # # Defines IPC server directory and filename used to communicate with the log process. ipcfilename = ‘/tmp/.dguardianipc’ # URL list IPC filename # # Defines URL list IPC server directory and filename used to communicate with the URL # cache process. urlipcfilename = ‘/tmp/.dguardianurlipc’ # PID filename # # Defines process id directory and filename. #pidfilename = ‘/var/run/dansguardian.pid’ # Disable daemoning # If enabled the process will not fork into the background. # It is not usually advantageous to do this. # on|off ( defaults to off ) nodaemon = off # Disable logging process # on|off ( defaults to off ) nologger = off # Daemon runas user and group # This is the user that DansGuardian runs as. Normally the user/group nobody. # Uncomment to use. Defaults to the user set at compile time. # daemonuser = ‘nobody’ # daemongroup = ‘nobody’ # Soft restart # When on this disables the forced killing off all processes in the process group. # This is not to be confused with the -g run time option – they are not 347 related. # on|off ( defaults to off ) softrestart = off Reply 119 Robert December 22, 2007 I am building a rather unique Proxy server I need to be able to forward requests by maching the destintaions to 3 lists: - blacklist -> Block, - freelist -> Forward to upstreem Proxy with Spesified username and password same for all, - DirrectAccesslist – Retreve directly, What ever is remaining is forward to the upstreem proxy which will request username and password for charging purposes. The AD and charging Side of this I will work out later, it is the routeing with creds by list lookup that I have no idea where to start.. Site info 300 computers, 1000 users, 40M internet link I have a Dual Xeon 1.6 with 2G ram SCSI HW Raid HDD Server for the task (retired Ms Server) Ideas? Thanks Reply 120 Sai Wunna Aung January 5, 2008 hello all friends, pls help me. now i created squid 2.6 server on windows server 2003. but our ISP is burnned some websites.e.g http://mail.yahoo.com, https://mail.google.com .so, i want to open that web site and other to squid’s redirect setting. i want to know http redirect setting of squid 2.6. best reguards, Sai Wunna Aung Network Technician Reply 121 Ali Bhai January 8, 2008 348 hey, nice work. I appreciate the way u spread your knowledge just alike a teacher spreads to new bie’s. Thx Again Reply 122 Ambot January 11, 2008 Hey guys, How do i able to open the ports in proxy? i have the problems on my network, in which i can’t able to view webcam and voice in the yahoo messenger… As what i know 5000-5010 used for voice both tcp and udp while 5100 for video as tcp… I put it in Safe_ports but it seems not working… And also i’m not able to upload files but good downloadings…. Reply 123 Sajid January 11, 2008 Hi, Please help me to solve this problem. i have four network cards in linux machine 3 NC for WAN 1 for local LAN my squid is sending all the internet traffic to only on one network card other two are free its is possible that squid bind three wan NC and combine the Internet. thanks Reply 124 Arulkumar January 19, 2008 how to manage users browsing time quotas by squid. Example: Set a limit of 1 hour per day for the user Reply 125 dennyhalim January 24, 2008 dual xeon with 8 gig ram? how many (hundreds?) users this monster serve??? i’m using old refurbished p3 with 384meg ram serving 50+ heavy downloaders users with no problem. 349 and, with ipcop, it only takes TWO clicks to activate transparent proxy from its web gui. 
off course, you learn nothing with ipcop. coz it’s simply usable and minimal learning curve. you’ll learn a lot from getting dirty on cli. :) Reply 126 Mangal January 31, 2008 How can we block PC using Mac addresses ? I tried by: – acl block arp 12:23:43:df:32:df but my squid does not know keyword arp for solving this i tried to rebuild it but i failed can u help me to rebuild ? Reply 127 vivek January 31, 2008 Mangal, See our Squid MAC Filtering FAQ Reply 128 Anas January 31, 2008 Dear all Need Help …. I have Squid 2.6 STABLE6 Actually when I add httpd_accel_host virtual httpd_accel_port 80 httpd_accel_with_proxy on httpd_accel_uses_host_header on acl Tiajri src 10.0.0.0/24 http_access allow localhost http_access allow Tijari and when I tried to Stop And Start Squid service it gaves me Faild to start Faild …. please help me 350 Reply 129 Pirkia.lt admin February 2, 2008 Simple script to save your users from badware: #!/bin/bash URL0=http://www.mvps.org/winhelp2002/hosts.txt URL1=http://everythingisnt.com/hosts SQUIDBADWARE=/etc/squid/badware_list BADWARESTATS=/etc/squid/badware_stats wget $URL0 -O /tmp/SQUIDBADWARE0 -o /dev/null wget $URL1 -O /tmp/SQUIDBADWARE1 -o /dev/null BADWARE0=`cat /tmp/SQUIDBADWARE0` echo "$BADWARE0" >> /tmp/SQUIDBADWARE1 cat /tmp/SQUIDBADWARE1 | grep 127.0.0.1 | sed 's/127.0.0.1 //g' > /tmp/SQUIDBADWARE2 cat /tmp/SQUIDBADWARE2 | grep -v localhost | cut -d "#" -f 1 > /tmp/SQUIDBADWARE3 rm $SQUIDBADWARE.backup mv $SQUIDBADWARE $SQUIDBADWARE.backup cp /tmp/SQUIDBADWARE3 $SQUIDBADWARE SUM=`wc -l $SQUIDBADWARE` DATE=`date +%Y-%m-%d` echo "$DATE $SUM" >> $BADWARESTATS rm /tmp/SQUIDBADWARE0 /tmp/SQUIDBADWARE1 /tmp/SQUIDBADWARE2 /tmp/SQUIDBADWARE3 /etc/init.d/squid reload > /dev/null To squid.conf add/update following lines: acl BADWARE_LIST_1 dstdomain url_regex -i "/etc/squid/badware_list" deny_info ERR_BADWARE_ACCESS_DENIED BADWARE_LIST_1 ….. http_access deny BADWARE_LIST_1 http_access deny !Safe_ports BADWARE_LIST_1 http_access deny CONNECT !SSL_ports Don’t forget add this script to your crontab 351 crontab –e 30 23 * * * /data/scripts/squidguard.sh Reply 130 Faisal February 5, 2008 Dear I am using CentOS Linux server here I don’t need to define proxy in squid.conf. kindly guide me how to use without ISP proxy. also i have 3 DSL modems connected in office and i need to configure all together if 1 is not working it switch to other automatically. your quick response will be higly appreciative. Best Regards. Faisal Reply 131 Santosh February 8, 2008 Hi, This site is good with good comments. can you help me. i am using the same config. Pls clear my 2 doubts. 1.after making proxy transparent. the sites which are blocked in squidblock.acl does not works from client pc. (again if we use a proxy server then only it works). 2. how to block a website (such as http://www.youtube.com) using iptables. regards, Santosh Reply 132 Santosh February 8, 2008 hello, pls reply ASAP. regards, santosh Reply 352 133 nandhakumar February 22, 2008 Hi all I configured squid proxy in our office but problem is outlook express not working please help me out.. regards nandha Reply 134 vaibhavraj June 29, 2010 Hi, Just put IP of outlook machine as a acl in squid.conf. It will work. Regards, Vaibhavraj Reply 135 Sulman March 5, 2008 Dear, i have 3 NIC in Squid Proxy, One connect with Lan and other 2 connect with 2 DSL modems. I want to combine more than 1 DSL link speed togetehr. Kindly Helo me regarding this what will be need to configure in Linux. 
Halp me ASAP Thanks Reply 136 Jit March 13, 2008 Hi, I’ve configured my Squid as par your guidence but am nt able to access any website from client nor I’m able to ping. though I’m able to open some of websites from their IP and even able to open control panel of my ADSL Router! I’ve no clue where things are wrong! :( I wud highly be grateful to you help me to fix this issue! here is the complete scenario of my network [LAN] —> e1 [ SQUID ] e0 —-> [ADSL] 353 192.168.2.0 [LAN] 192.168.2.1 [e1 of squid] 192.168.1.2 [e0 of squid] 192.168.1.1 [adsl router ip] waiting despreatly! Rock on Jit Reply 137 Yusuf March 15, 2008 I have configured SQUID PROXY with TRANSPARENT using this site help Thanks Reply 138 gautam April 8, 2008 I had gone throug your notes. It is very good and interesting. I have 2 network cards in my squid proxy server on RHEL5. I need all the users to access only certain sites during the office hours and after office hours they can access any sites as they wish. This should not be applicable for managers who can access any site at anytime. This I made it but when I configured squid I had given the port 8080 instead of 3128 the default port. The end users if the remove the proxy (ip of squid server) then they can access any site during the office hours. How to disable this ???? Something to do with firewall. I tried but I failed. I am pasting it can you correct it. $IPTABLES -t nat -A PREROUTING -i $INTIF -p tcp –dport 80 -j DNAT –to $SQUID_SERVER:$SQUID_PORT squid_server has two network card. One is having internal ip and the other external ip. I had give external ip for SQUID_SERVER. SQUID_PORT is 8080 Please help me.. It is very urgent. 354 Thanks and Regards, Reply 139 flex April 11, 2008 I have a clarkconnect linux box am not that good in linux but can configure when given the example. My network has layer three switch which does the routing for all Vlans. I have created a specia Vlan where all traffic fron the LAN Vlans is routed, coonected this node to CC box LAN interface. Also i have added the static routes on the CC box and all vlans can access the internet properly. But i want to use proxy. WHEN I START THE SQUID PROCESS it block all outgoing traffic and gives me the ip and port to configure as proxy on brower settings , that i do but still cannt connect. 
here is a file for my routes Adding extra LANs on Clark Connect #/etc/system/network file EXTRALANS=”10.0.2.0/24 10.0.3.0/24 10.0.4.0/24 10.0.5.0/24 10.0.6.0/24 10.0.7.0/24 10.0.8.0/24 10.0.9.0/24 10.0.10.0/24 10.0.11.0/24 10.0.12.0/24 10.0.13.0/24 10.0.14.0/24 10.0.15.0/24 10.0.16.0/24 10.0.17.0/24 10.0.18.0/24 10.0.19.0/24 10.0.20.0/24 10.0.21.0/24 10.0.22.0/24 10.0.23.0/24 10.0.24.0/24 10.0.25.0/24 10.0.26.0/24 10.0.27.0/24 10.0.28.0/24 10.0.29.0/24 10.0.30.0/24 10.0.31.0/24 10.0.32.0/24 10.0.33.0/24 10.0.34.0/24 10.0.35.0/24 10.0.36.0/24 10.0.37.0/24 10.0.38.0/24 10.0.39.0/24″ #Adding Static routes to Clark Connect for Vlans to work with proxy #This should work #/etc/sysconfig/network-scripts/route-eth1 10.0.2.0/24 via 10.2.56.2 10.0.3.0/24 via 10.2.56.2 10.0.4.0/24 via 10.2.56.2 10.0.5.0/24 via 10.2.56.2 10.0.6.0/24 via 10.2.56.2 10.0.7.0/24 via 10.2.56.2 10.0.8.0/24 via 10.2.56.2 10.0.9.0/24 via 10.2.56.2 10.0.10.0/24 via 10.2.56.2 10.0.11.0/24 via 10.2.56.2 355 10.0.12.0/24 via 10.2.56.2 10.0.13.0/24 via 10.2.56.2 10.0.14.0/24 via 10.2.56.2 10.0.15.0/24 via 10.2.56.2 10.0.16.0/24 via 10.2.56.2 10.0.17.0/24 via 10.2.56.2 10.0.18.0/24 via 10.2.56.2 10.0.19.0/24 via 10.2.56.2 10.0.20.0/24 via 10.2.56.2 10.0.21.0/24 via 10.2.56.2 10.0.22.0/24 via 10.2.56.2 10.0.23.0/24 via 10.2.56.2 10.0.24.0/24 via 10.2.56.2 10.0.25.0/24 via 10.2.56.2 10.0.26.0/24 via 10.2.56.2 10.0.27.0/24 via 10.2.56.2 10.0.28.0/24 via 10.2.56.2 10.0.29.0/24 via 10.2.56.2 10.0.30.0/24 via 10.2.56.2 10.0.31.0/24 via 10.2.56.2 10.0.32.0/24 via 10.2.56.2 10.0.33.0/24 via 10.2.56.2 10.0.34.0/24 via 10.2.56.2 10.0.35.0/24 via 10.2.56.2 10.0.36.0/24 via 10.2.56.2 10.0.37.0/24 via 10.2.56.2 10.0.38.0/24 via 10.2.56.2 10.0.39.0/24 via 10.2.56.2 which other file should i configure for web proxy to work IP and port CC is giving for proxy is 10.2.56.2 8080 or 3128 but does not work Reply 140 Sohbet April 27, 2008 hey, nice work. I appreciate the way u spread your knowledge just alike a teacher spreads to new bie’s. Thx Again 356 Reply 141 Ye khaung May 8, 2008 I just test smooth wall express with in built squid. Not only in that squid but all, i can’t find where to put web server chaining i.e forward request to upstream proxy(isp’s proxy). Can any one explain me about following case. My server have 2 NIC card. Eth0 : 10.254.8.1.1 (internet) Eth1 : 192.168.0.1 (Lan) Subnet: 255.255.252.0 D.G : 10.254.8.1 My isp give their proxy ip and port. 203.81.71.148:9090 They prevent direct access. In that case i want a proxy server in my own. I want my clients computers to use proxy of mine but not ISP. (i want them to put my server Eth1 no as a proxy ip and port 9090 in ther IE and fire fox) Can any one give me a sample scripts? Please help me out. Our country is not very familiar with linux. S.O.S Ye Khaung Burma Reply 142 Peyman June 8, 2008 Excellent! Simply it worked. But after running the iptables shell script I could not reach my server via SSH or VNC. I had to comment these 4 lines of the script to get my remote access back. iptables -P INPUT DROP iptables -P OUTPUT ACCEPT iptables -A INPUT -j LOG iptables -A INPUT -j DROP Is it no problem commenting those lines? my squid is working as I want ;) 357 Reply 143 Padani June 28, 2008 When i gave the above config to the squid on a VPS (Debain).The following errors came. 
I didn’t implement that iptable rules root@x:/etc/squid# /etc/init.d/squid restart Restarting Squid HTTP proxy: squid2008/06/28 11:02:10| parseConfigFile: unrecognized: 2008/06/28 11:02:10| parseConfigFile: line 44 unrecognized: ‘httpd_accel_host virtual’ 2008/06/28 11:02:10| parseConfigFile: line 45 unrecognized: ‘httpd_accel_port 80′ 2008/06/28 11:02:10| parseConfigFile: line 46 unrecognized: ‘httpd_accel_with_proxy on’ 2008/06/28 11:02:10| parseConfigFile: line 47 unrecognized: ‘httpd_accel_uses_host_header on’ 2008/06/28 11:02:10| WARNING cache_mem is larger than total disk cache space! FATAL: No port defined Squid Cache (Version 2.6.STABLE5): Terminated abnormally. CPU Usage: 0.005 seconds = 0.000 user + 0.005 sys Maximum Resident Size: 0 KB Page faults with physical i/o: 0 /etc/init.d/squid: line 74: 30103 Aborted start-stop-daemon –quiet –start – pidfile $PIDFILE –chuid $CHUID –exec $DAEMON — $SQUID_ARGS </dev/null Reply 144 ramesh July 25, 2008 Hi, I have a problem I configured Transparent proxy it is working fine. problem with web server wheni tried to access the web page from external network. Error message : ERROR The requested URL could not be retrieved Access Denied. 358 Access control configuration prevents your request from being allowed at this time. Please contact your service provider if you feel this is incorrect Reply 145 nazrin July 29, 2008 dear guys, is there anyway of doing proxy on port 25 and 110. i wanted to test it with spamassassin checking on that port using transparent proxy. thanks, nazrin. Reply 146 Khalid August 2, 2008 I am running FC6, 2.6.STABLE13 and I need help 2 network cards: eth0 on a local LAN address 10.6.9.171 eth1 190.2.168.0.0/24 my server is running DHCP and assigning addresses to local clients But Squid is giving me a headache I did follow the stpes in this tutorial, and my Squid FAILS to start everytime Firt it gave me this error ACL name ‘Safe_ports’ not defined! FATAL: Bungled squid.conf line 19: http_access deny !Safe_ports Squid Cache (Version 2.6.STABLE13): Terminated abnormally. Then when I defiene Safe_ports by adding definitions that I got from another website is does not like the added lines and it asks for a hostname 2008/08/01 16:08:53| parseConfigFile: line 36 unrecognized: ‘http_accel_host virtual’ 2008/08/01 16:08:53| parseConfigFile: line 37 unrecognized: ‘http_accel_port 80′ 2008/08/01 16:08:53| parseConfigFile: line 38 unrecognized: ‘http_accel_with_proxy on’ 2008/08/01 16:08:53| parseConfigFile: line 39 unrecognized: ‘http_accel_uses_host_header on’ 359 FATAL: Could not determine fully qualified hostname. Please set ‘visible_hostname’ Can someone please direct me on what I’m missing here ======================= here is my config file: hierarchy_stoplist cgi-bin ? acl QUERY urlpath_regex cgi-bin \? no_cache deny QUERY hosts_file /etc/hosts refresh_pattern ^ftp: 1440 20% 10080 refresh_pattern ^gopher: 1440 0% 1440 refresh_pattern . 
0 20% 4320 acl all src 0.0.0.0/0.0.0.0 acl manager proto cache_object acl localhost src 127.0.0.1/255.255.255.255 acl to_localhost dst 127.0.0.0/8 acl purge method PURGE acl CONNECT method CONNECT cache_mem 1024 MB http_access allow manager localhost http_access deny manager http_access allow purge localhost http_access deny purge http_access deny !Safe_ports http_access deny CONNECT !SSL_ports acl lan src 10.6.9.177 192.168.0.0/24 http_access allow localhost http_access allow lan http_access deny all http_reply_access allow all icp_access allow all visible_hostname proxytest httpd_accel_host virtual httpd_accel_port 80 httpd_accel_with_proxy on httpd_accel_uses_host_header on coredump_dir /var/spool/squid ================================ 360 – Khalid Reply 147 Jakykong August 7, 2008 I thought I would mention that newer Squid versions (or maybe it’s older ones… I use 2.7) don’t accept the httpd_accel_* entries. Another way to do the same thing, which seems to work the same way, is to use the http_port entry. When you set the port (3128 by default), you can add “transparent” to the end of the line to make the proxy transparent. Reply 148 shantanu August 7, 2008 hiii, i know very less abt squid and linux, m in a college and my isp has blocked many of the sites and downloads , i need to unblock those sites as want to see my favourite football matches, so plz will anyone guide me how to unblock these sites and see streaming videos, my isp uses squid/2.6.STABLE6, plz reply…………….. Reply 149 shantanu August 12, 2008 if any one knows plz tell me e mail id is [email protected] !!! Reply 150 Baku August 27, 2008 Excellent article. The firewall script works fine in my GNU/Linux Debian Etch. However, the squid.conf should be update to squid 2.6 a later versions, which have the specific ‘transparent’ parameter. In addition, should be convenient add a fourth step: configure named daemon on squid host. Best regards Baku Reply 151 we3cares September 2, 2008 361 Very Good Work… :) But, I can tell a small easier step instead of grep -v “^#” /etc/squid/squid.conf | sed -e ‘/^$/d’ Use: # grep -v “^#” /etc/squid/squid.conf | cat -s Reply 152 Umer August 5, 2010 Gud .. Its working now Reply 153 MikeC September 25, 2008 Good write up…question though. After setting everything up I get the following error when I try to access a site: While trying to retrieve the URL: / The following error was encountered: * Invalid URL Some aspect of the requested URL is incorrect. Possible problems: * Missing or incorrect access protocol (should be `http://” or similar) * Missing hostname * Illegal double-escape in the URL-Path * Illegal character in hostname; underscores are not allowed Any ideas would be appreciated! Reply 154 Nandkishor September 26, 2008 Hi vivek, I have configured the transperant proxy & also Blocked the downloading of movies & songs. But some peoples are downloads by using the torrent or utorrent. Can u tell me how to blocked this torrent downloading by using squid or pear to pear? Reply 155 Rizwan Ahmed October 24, 2008 nice help Reply 362 156 cpyd October 26, 2008 this is funny. okay first of all, thanks vivek, thanks a ton for your fantabulous article. I setup two servers using your script and it works great. save one freak stuff.. while i see everyone running around saying they cant accept anything except port 80, my problem is exact opposite! ie.. it seems my firewall is allowing every damn traffic through itself, and no, i dint change a thing in the script except, ofcourse the variables in beginning. 
the iptables -L command gives this :Chain INPUT (policy DROP) target prot opt source destination ACCEPT all -- anywhere anywhere ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED LOG all -- anywhere anywhere LOG level debug prefix `LOG_DROP ' ACCEPT all -- anywhere anywhere LOG all -- anywhere anywhere LOG level warning DROP all -- anywhere anywhere Chain FORWARD (policy ACCEPT) target prot opt source destination ACCEPT all -- anywhere anywhere Chain OUTPUT (policy ACCEPT) target prot opt source destination ACCEPT all -- anywhere anywhere ACCEPT all -- anywhere anywhere i commented out the unlimited LAN access line, and i was completely blocked out, including the webserver running on the same machine. Anyone out there who can point me in the right direction?? I want to allow only ports 25, 465, 110, 995, 443 and 80 through my proxy server.. thanks :) Reply 157 jayarm December 7, 2008 I want to allow two prot which used for VOIP (port 8661 10500) how can enable the same 363 Please tell me with the example , i am using redhat my ip is 172.21.100.10 (eth0) 192.168.103.10 (eth1) Reply 158 Nick December 14, 2008 Is it possible to set a machine with one ethernet adapter on the network as a transparent proxy? So my machine (“machine2″) on 10.0.0.2 becomes my default gateway (in the DHCP config), which in turn either transparently proxies or sends the packet on to the ‘real’ default gateway at 10.0.0.1. Machine2 would need to match incoming packets and if not destined for it, and not destined for port 80, forward them to the router. Incoming packets not destined for the machine2, but are destined for port 80, forward to the squid proxy. This would be neat, as it would simplify network layout, avoid having to have two subnets, and make bypassing the proxy a simple method of adding a static network config with a different default gateway. Reply 159 bashir December 26, 2008 Hi i m using squid 2.6 in Centos 5.1. But i found some errors: 1. arp 2. when i blocked the ip’s but even that allow please helpd bashir pakistan islamabad Reply 160 khzied December 28, 2008 Hi everybody, I have a problem with squid.. In my network internet, i would like to have connection in the same time like this: * some ip address connect to internet with authentification * some ip address connect to internet without authentification How can i do in squid configuration and iptables rules.. 364 Thanks :) Reply 161 khzied December 28, 2008 with ipcop, i use the type “unrestricted user” that access internet without authentification.. Other user without type “unrestricted user” should connect by authentification.. How can i do? Ps: I use squid 3.0 Thanks Reply 162 brijesh January 10, 2009 dear sir Sir i want to installation squitd proxy but not installedd please give the setup and how do you installed Reply 163 Ibru January 19, 2009 Hi, You have done an excellent work. How can I run fw.proxy script every time when my computer starts. Thanks Ibrhaim PP Reply 164 Bjornar January 28, 2009 Hi. When i load the script I get a error message: iptables: No chain/target/match by that name Someone know whats wrong? im a noob (A) Reply 165 needh January 29, 2009 365 I use your squid on ubuntu 7.04. It complains no httpd_accel, etc. If I remove those lines in squid.conf, that’s no proxy at all. Nothing in access.log. Reply 166 baxbixbux February 20, 2009 good … now i can setup squid Reply 167 col February 23, 2009 Hi – thanks for the really useful information. 
I have now setup my main PC as a transparent proxy so can log and see all the websites that my family lan has been to. Is there a way to also log all MSN chat messages using squid? (we have a policy of open internet access, with the responsibility of where they choose to go being on the child, with them knowing that occasion spot checks of the logs will be carried out). Reply 168 iniabasi February 25, 2009 i have gone through all the comments here and I have done everything – configuring the squid 2.7 stable 13 and iptables in ubuntu 8.10. my problem is that i only browse when i fix the proxy in the explorer, the transparency does not work. when i add this line of code, i have errors: httpd_accel_host virtual httpd_accel_port 80 httpd_accel_with_proxy on httpd_accel_uses_host_header on. I am really at a loss on what to do. This what my squid conf looks like acl all src all acl manager proto cache_object acl localhost src 127.0.0.1/32 acl to_localhost dst 127.0.0.0/8 acl ECONOMICS src 10.0.0.0/24 # RFC1918 possible internal network http_access allow ECONOMICS acl SSL_ports port 443 # https acl SSL_ports port 563 # snews acl SSL_ports port 873 # rsync 366 acl Safe_ports port 80 # http acl Safe_ports port 21 # ftp acl Safe_ports port 443 # https acl Safe_ports port 70 # gopher acl Safe_ports port 210 # wais acl Safe_ports port 1025-65535 # unregistered ports acl Safe_ports port 280 # http-mgmt acl Safe_ports port 488 # gss-http acl Safe_ports port 591 # filemaker acl Safe_ports port 777 # multiling http acl Safe_ports port 631 # cups acl Safe_ports port 873 # rsync acl Safe_ports port 901 # SWAT acl purge method PURGE acl CONNECT method CONNECT http_access allow manager localhost http_access deny manager http_access allow purge localhost http_access deny purge http_access deny !Safe_ports http_access deny CONNECT !SSL_ports http_access allow localhost http_access deny all icp_access allow ECONOMICS icp_access deny all http_port 80 http_port 3128 transparent hierarchy_stoplist cgi-bin ? access_log /var/log/squid/access.log squid refresh_pattern ^ftp: 1440 20% 10080 refresh_pattern ^gopher: 1440 0% 1440 refresh_pattern -i (/cgi-bin/|\?) 0 0% 0 refresh_pattern (Release|Package(.gz)*)$ 0 20% 2880 refresh_pattern . 0 20% 4320 acl shoutcast rep_header X-HTTP09-First-Line ^ICY\s[0-9] upgrade_http0.9 deny shoutcast acl apache rep_header Server ^Apache broken_vary_encoding allow apache extension_methods REPORT MERGE MKACTIVITY CHECKOUT visible_hostname EconnetServer 367 hosts_file /etc/hosts coredump_dir /var/spool/squid Please can someone help me. Thanks. Reply 169 manjunath February 25, 2009 Hi, I do have setup internet->router(cisco 2600)->firewall (506 E)->Cisco Switch (6500) no routing captability ->DHCP Server->Lan . Planning to have Squid transparent proxy. Plz help me how to setup I am new to Squid project. Manjunath Reply 170 Xavier February 27, 2009 Hi all, My Squid server works fantastically with the script above if I only have 2 network adapters enabled. I have an eth2 that I wish Apache to listen on as I was getting some oddities with it running on eth0 and eth1 which i am guessing is attributed to SQUID. I can configure Apache to listen on eth2 ok, the problem is as soon as I enable and start eth2 everything dies. eth0 and eth1 are unpingable and squid doesn’t work. All I am doing is an out of the box version of squid with a very basic conf and the script above. Any help? Thanks, Xavier. Reply 171 hana March 5, 2009 is it possible to implament transparent proxy using only one NIC? 
368 Reply 172 kpm March 14, 2009 We are using two ip numbers for accessing internet and intranet. The IP 172.16.0.0/24 is for accessing our Intranet application from our remote office. The IP 192.168.1.0/24 is local broadband connection used for accessing internet locally. I want to access both the connection in a single IP by configuring linux squid proxy sever. Can u please help me out how to do the settings. Reply 173 Christofer March 17, 2009 Thanks cyberciti for the great tutorial, help me a lot. Reply 174 vijay March 29, 2009 This setup can use in fedora 10 Reply 175 Tricky April 15, 2009 I like how you’ve built this post. The httpd entries don’t seem to work on my server however its not a particularly important function for me. I think perhaps it wasn’t built into the build I have from Arch Linux. On a purely academic note, I often work with grep and sed and I recognised some even shorter ways to strip the squid.conf file. The shortest is still a combination: grep . /etc/squid/squid.conf|sed '/ *#/d' unless you want to actually strip it inline: sed -i '/ *#/d; /^ *$/d' /etc/squid/squid.conf Reply 176 Bruce Smith April 16, 2009 I’m looking for help for a fix. i work at a school. and im looking to run squid to speed up net access i have 2 up stream proxy’s we use 1 for kids 1 for staff, and i want to bind them in to 1 proxy in school with 2 ports. 369 so port 8080 for students caching from upstream proxy student.proxy port 80 so port 8099 for staff caching from upstream proxy staff.proxy port 80 any one any clues ? Reply 177 nichive April 26, 2009 to da point, I need some help with this configuration I’m running my squid on Ubuntu Server 8.10 with the transparent configuration applied, and the iptables script made, without any error on the start/restart part. but my problem is, I can’t open anything through any web-browser that is installed on my Local Area Network but if I try some ping command to any web-address, it works fine pitty, not doing so with the web-browser anyhelp would be appreciated :) Reply 178 nichive April 26, 2009 ignore my last question, I found out what my problem was.. my machine was a fresh installed one, didn’t have the masquerading method… just run the following command and voila $ sudo apt-get install ipmasq Reply 179 dave love May 7, 2009 I am using this setup but I am having trouble connecting to port 443. Any ideas? Do I need to tell it to use 443 and 80 in the squid.conf? Reply 180 Md. Saidur Hasan May 10, 2009 hi boss, it’s working but problem with the email. i can’s download my email in outlook. my configuration is as follows 370 # cat /etc/squid/squid.conf | sed ‘/ *#/d; /^ *$/d’ Output ——————– http_port 3128 transparent hierarchy_stoplist cgi-bin ? acl QUERY urlpath_regex cgi-bin \? cache deny QUERY acl apache rep_header Server ^Apache broken_vary_encoding allow apache cache_mem 32 MB access_log /var/log/squid/access.log squid auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/passwd auth_param basic children 30 auth_param basic realm Squid proxy-caching web server auth_param basic credentialsttl 2 hours auth_param basic casesensitive off refresh_pattern ^ftp: 1440 20% 10080 refresh_pattern ^gopher: 1440 0% 1440 refresh_pattern . 
0 20% 4320 acl all src 0.0.0.0/0.0.0.0 acl manager proto cache_object acl localhost src 127.0.0.1/255.255.255.255 acl to_localhost dst 127.0.0.0/8 acl SSL_ports port 443 acl CONNECT method CONNECT acl bad_sites dstdomain “/etc/squid/squid-block.acl” http_access deny bad_sites acl esl src 172.16.10.0/24 http_access allow manager localhost http_access deny manager http_access deny !Safe_ports http_access deny CONNECT !SSL_ports http_access allow localhost http_access allow esl http_access deny all http_reply_access allow all icp_access allow all cache_mgr [email protected] visible_hostname ESL-NNC coredump_dir /var/spool/squid 371 please help me.. Reply 181 chrkc May 25, 2009 Hi, I have three systems, my apache web server is running on 192.168.0.26 machine, squid/proxy is running on 192.168.0.25 and my firewall/shorewall is running on 192.168.0.20 And there is a local network 192.168.0.X of systems with gateway mentioned as 192.168.0.20. Can anyone tell me how do i manage in a way that all the http requests made are directed to the squid/proxy? As the people in the local network through the browser direct connection are able to open sites that were restricted through the proxy settings. Thanks Reply 182 Wiki June 8, 2009 Where can i find or where should i paste the following commands? in line number? httpd_accel_host virtual httpd_accel_port 80 httpd_accel_with_proxy httpd_accel_uses_host_header on Reply 183 Nand June 17, 2009 I have setup the squid using transperant proxy & in iptables I have chnge the polixy of filter table to DROP. Everything is working fine. But any idea how to block the torrent downloading? what iptables rules are want to setup? Regards, Nandkishor Reply 184 Rashid Iqbal June 27, 2009 372 hi friends I am new to linux. right now i am using the fedora… I configure the proxy and configure the iptables to forward the traffic Microsoft Outlook . now there is a problem that users are able to browse withoutt the client proxy settings…… although I only add the iptables script that forward the port 80 traffic to port 3128 that users should go through proxy… secondly we are using the citrix server……… how to enable remote users to connect out db server through citrix server… using TCP 1494 and UDP is 1600 to 1699… and tcp is 80.. and how to restrict the wireless users that they should go thorugh proxy…. and finally I want that only some specific users to use the internet through client proxy settings and remaining will be blocked…. please help me in this regard……..I will be highly obliged.. Reply 185 Rashid Iqbal June 27, 2009 Friends I am new to squid I want to configure the proxy server with squid but not with the transparent…. like that every used should put the ipaddress+port 3128….. secondly I want to receive the emails on Microsoft Outlook… for this purpose I use the iptables now mail is working but user can bypass the proxy after putting the proxy address into the clients gateway.. please help me to solve this issue.. Reply 186 Anindya Banerjee July 6, 2009 How can I install and configure squid proxy in my red hat linux system. Reply 187 Mohd Anas July 14, 2009 Hi, Can someone suggest how can I configure my squid http proxy for FTP also. And what are the settings for ftp client like filezilla. Thanks 373 Reply 188 Gregory I Okumoro July 22, 2009 Hi, I am new to Linux but I like what you have to say about port 80 redirection to port 3128. 
Currently, my website is unavailable online because the Cable Company (ISP) has blocked all the ports that I have to work except port 3128. !. What is the directory of the firewalls to which I have to copy the “firewall” scripts? 2.What directory do I copy “fw.proxy” to? Thanks, Gregory Omkpokoro Reply 189 Ajit Upadhyay August 4, 2009 Hi! I have a server with eth0 (10.126.2.101) connected to my ISP (proxy 10.31.31.10:3128 with authentication ie. userid/pwd) and eth1 (192.168.1.1) connected to local network through a fast ethernet switch. The server is also a DHCP sever for local network (192.168.1.2 – 192.168.1.254). Now, I have configured squid on this server so that local netwrok PCs can access internet thorugh my server (which is behind ISP’s authenticated proxy). The detail of squid.conf is listed below: ——————– acl manager proto cache_object acl localhost src 127.0.0.1/32 acl to_localhost dst 127.0.0.0/8 acl localnet src 10.0.0.0/8 acl localnet src 172.16.0.0/12 acl localnet src 192.168.1.1 acl SSL_ports port 443 acl Safe_ports port 80 acl Safe_ports port 21 acl Safe_ports port 443 374 acl Saf_ports port 70 acl Safe_ports port 210 acl Safe_ports port 1025-65535 acl Safe_ports port 280 acl Safe_ports port 488 acl Safe_ports port 591 acl Safe_ports port 777 acl purge method PURGE acl CONNECT method CONNECT access_log /var/log/squid/access.log acl plasma_net src 192.168.1.2 acl plasma_net src 192.168.1.3 acl plasma_net src 192.168.1.4 acl plasma_net src 192.168.1.5 http_access allow plasma_net acl lan src 10.126.2.101 192.168.1.1 http_access allow localhost http_access allow lan http_access allow all http_access allow localnet http_access deny all acl ftp proto FTP http_access allow ftp http_access allow manager localhost http_access deny manager http_access allow purge localhost http_access deny purge http_access deny !Safe_ports 375 http_access deny CONNECT !SSL_ports http_reply_access allow all icp_access allow all icp_access allow localnet icp_access deny all htcp_access allow localnet htcp_access deny all http_port 192.168.1.1:3128 transparent hierarchy_stoplist cgi-bin ? cache_mem 8 MB memory_replacement_policy lru cache_replacement_policy lru cache_dir ufs /var/cache/squid 100 16 256 minimum_object_size 0 KB maximum_object_size 4096 KB cache_swap_low 90 cache_log /var/log/squid/cache.log cache_store_log /var/log/squid/store.log emulate_httpd_log off ftp_passive on refresh_pattern ^ftp: 1440 20 10080 refresh_pattern ^gopher: 1440 0 1440 refresh_pattern (cgi-bin|\?) 0 0 0 refresh_pattern . 0 20 4320 always_direct allow all connect_timeout 2 minutes client_lifetime 1 days cache_mgr webmaster 376 visible_hostname plasma1 icp_port 3130 error_directory /usr/share/squid/errors/English coredump_dir /var/cache/squid cache_swap_high 95 ——————When any PC on network tries to use internet, I get following error in my access.log and —————————————————— 1249380227.459 10 192.168.1.4 TCP_REFRESH_UNMODIFIED/304 259 GET http://webmail1.cat.ernet.in/newmail/images/dotted_bullet.gif – DIRECT/10.11.100.123 1249380237.766 294 192.168.1.4 TCP_MISS/503 2419 GET http://www.google.com/ – DIRECT/209.85.231.104 text/html 1249380328.894 290 192.168.1.4 TCP_MISS/503 2468 GET http://www.google.com/ – DIRECT/209.85.231.104 text/html 1249380437.333 184 192.168.1.4 TCP_MISS/503 2350 GET http://www.yahoo.com/ – DIRECT/69.147.76.15 text/html ———————————————the user gets following error: while trying to retrieve the URL http://www.yahoo.com/ The following error was encountered: Connection to 69.147.76.15 Failed. 
The system returned: (101) Network is unreachable [whereas, i am able to access above url / ip from server] PLEASE, HELP me resolve this issue. Reply 190 Ajit Upadhyay August 4, 2009 Hi! I have a server with eth0 (10.126.2.101) connected to my ISP (proxy 10.31.31.10:3128 with authentication ie. userid/pwd) and eth1 (192.168.1.1) connected to local network through a fast ethernet switch. The server is also a DHCP sever for local network (192.168.1.2 – 192.168.1.254). Now, I have configured squid on this server so that local netwrok PCs can access internet thorugh my server (which is behind ISP’s authenticated proxy). The detail of squid.conf is listed below: 377 ——————– acl manager proto cache_object acl localhost src 127.0.0.1/32 acl to_localhost dst 127.0.0.0/8 acl localnet src 10.0.0.0/8 acl localnet src 172.16.0.0/12 acl localnet src 192.168.1.1 acl SSL_ports port 443 acl Safe_ports port 80 acl Safe_ports port 21 acl Safe_ports port 443 acl Safe_ports port 70 acl Safe_ports port 210 acl Safe_ports port 1025-65535 acl Safe_ports port 280 acl Safe_ports port 488 acl Safe_ports port 591 acl Safe_ports port 777 acl purge method PURGE acl CONNECT method CONNECT access_log /var/log/squid/access.log acl plasma_net src 192.168.1.2 acl plasma_net src 192.168.1.3 acl plasma_net src 192.168.1.4 acl plasma_net src 192.168.1.5 http_access allow plasma_net acl lan src 10.126.2.101 192.168.1.1 http_access allow localhost http_access allow lan http_access allow all http_access allow localnet http_access deny all acl ftp proto FTP http_access allow ftp http_access allow manager localhost http_access deny manager http_access allow purge localhost http_access deny purge http_access deny !Safe_ports http_access deny CONNECT !SSL_ports 378 http_reply_access allow all icp_access allow all icp_access allow localnet icp_access deny all htcp_access allow localnet htcp_access deny all http_port 192.168.1.1:3128 transparent hierarchy_stoplist cgi-bin ? cache_mem 8 MB memory_replacement_policy lru cache_replacement_policy lru cache_dir ufs /var/cache/squid 100 16 256 minimum_object_size 0 KB maximum_object_size 4096 KB cache_swap_low 90 cache_log /var/log/squid/cache.log cache_store_log /var/log/squid/store.log emulate_httpd_log off ftp_passive on refresh_pattern ^ftp: 1440 20 10080 refresh_pattern ^gopher: 1440 0 1440 refresh_pattern (cgi-bin|\?) 0 0 0 refresh_pattern . 0 20 4320 always_direct allow all connect_timeout 2 minutes client_lifetime 1 days cache_mgr webmaster visible_hostname plasma1 icp_port 3130 error_directory /usr/share/squid/errors/English coredump_dir /var/cache/squid cache_swap_high 95 ——————When any PC on network tries to use internet, I get following error in my access.log and —————————————————— 1249380227.459 10 192.168.1.4 TCP_REFRESH_UNMODIFIED/304 259 GET webmail1…. – DIRECT/10.11.100.123 1249380237.766 294 192.168.1.4 TCP_MISS/503 2419 GET http://www…/ – DIRECT/209.85.231.104 text/html 379 1249380328.894 290 192.168.1.4 TCP_MISS/503 2468 GET http://www…./ – DIRECT/209.85.231.104 text/html 1249380437.333 184 192.168.1.4 TCP_MISS/503 2350 GET http://www…/ – DIRECT/69.147.76.15 text/html ———————————————the user gets following error: while trying to retrieve the URL http://www…./ The following error was encountered: Connection to 69.147.76.15 Failed. The system returned: (101) Network is unreachable [whereas, i am able to access above url / ip from server] PLEASE, HELP me resolve this issue. 
Reply 191 Ajit Upadhyay August 4, 2009 further info: OS: openSuSE 11.0 Also, I have disabled firewall, as of now (MY ISP is highly secure / protected). Reply 192 Ajit Upadhyay August 4, 2009 I have also set in squid.conf ———————– cache_peer 10.31.31.10 parent 3128 0 no-query prefer_direct off ———————– where my ISP’s proxy is 10.31.31.10:3128 but the error still continues. Reply 193 Javier August 17, 2009 Hello worot exactly the script and got a problem I can not see my etho that connect with my local lan. How I can delete this script javier Reply 380 194 Javier August 18, 2009 After I complete the script I got a problem I can see the eth0 that is connected to my local network Reply 195 Marc August 18, 2009 Hello, I’m using a transparent proxy bridge, and I noticed that a download never completes and it always cuts, as to connection to the server is reset ! I’m using these rules in the firewall : ebtables -t broute -A BROUTING -p IPv4 –ip-protocol 6 –ip-destinationport 80 -j redirect –redirect-target ACCEPT iptables -t nat -A PREROUTING -i eth0 -p tcp –dport 80 -j REDIRECT – to-port 8080 iptables -t nat -A PREROUTING -i eth1 -p tcp –dport 80 -j REDIRECT – to-port 8080 iptables -t nat -A PREROUTING -i br0 -p tcp –dport 80 -j REDIRECT – to-port 8080 Where port 8080 is the dansguardian port for url filtering. Any idea why the connection resets ? It’s like a tcp reset is being done. Thanks. Reply 196 jac August 18, 2009 Ehy, pay attention kotnik’s sed trick delete ALL rows that CONTAIN a #, not just that START with # Reply 197 John September 3, 2009 Hi, I am running a transparent bridge with squid and dansguardian. I noticed that a download can never complete and I get the message “The connection with the server was reset” as soon as the download starts. Very small files ( < 1MB ) are hardly able to finish. Browsing is fine, the problem is only with the downloads and they always cut. Anybody's having a similar problem with a transparent bridge ? Appreciate your help solving this critical matter. 381 Thanks. John Reply 198 theleftfoot September 3, 2009 hey guys, i hope someone can help me out….i’ve got problems withe the following two steps: Save shell script. Execute script so that system will act as a router and forward the ports: # chmod +x /etc/fw.proxy # /etc/fw.proxy # service iptables save # chkconfig iptables on Start or Restart the squid: # /etc/init.d/squid restart # chkconfig squid on it doesn’t work! got these error test:/ # chmod +x /etc/fw.proxy test:/ # /etc/fw.proxy test:/ # service iptables save [b]service: no such service iptables[/b] test:/ # can someone help me out? cheers raffa Reply 199 Anant Patel September 18, 2009 hello!!! my collage server blocked many ports like 3128,8822,3127,8125,8130…so i cant access net..i have to use only collage provided net…what can i do?? they stop also ports in utorrent… plz help me.. thank u.. Reply 382 200 safdar azam September 24, 2009 hello. i am using Linux redhat version 3 and i have two lan port both are configured so i want to share my internet connection to winbee thin client. tell me how can connect with thinclient. 
plz i am witing Reply 201 Stolz October 7, 2009 AFAIK, the rule “iptables -A OUTPUT -o lo -j ACCEPT” is redundant because the default policy rule “iptables -P OUTPUT ACCEPT” already allows all outgoing traffic in all interfaces Reply 202 Baswaraj Ramshette November 13, 2009 Hi, I have followed whatever steps you have given in this article regarding transparent proxy configuration , I did everything according to your article I am getting following error please help me /etc/init.d/squid restart Stopping squid: 2009/11/13 12:42:28| parseConfigFile: line 4519 unrecognized: ‘httpd_accel_host virtual’ 2009/11/13 12:42:28| parseConfigFile: line 4520 unrecognized: ‘httpd_accel_port 80′ 2009/11/13 12:42:28| parseConfigFile: line 4521 unrecognized: ‘httpd_accel_with_proxy on’ 2009/11/13 12:42:28| parseConfigFile: line 4522 unrecognized: ‘httpd_accel_uses_host_header on’ . [ OK ] Starting squid: . [ OK ] On client side The requested url could not be retrive . Reply 203 Jeffry November 25, 2009 I need help, I use Ubuntu Jaunty 9.04, want to configure Squid, and everyting is okey, cause I took a proxy 1.1.1.1:3128 in every browser. but if i want to make the squid being transparent. i still get nothing. all i do is 383 just put transparent next http_port 3128 . and few configuration like above. then put iptables like as usuall.. iptables -t nat -A PREROUTING -p tcp –dport 80 -j REDIRECT –to-port 3128 and in ubuntu, the iptables version is 1.1.4.1 please advice… my hair become “fall season” :`( Reply 204 e December 9, 2009 how do i get on myspace from school Reply 205 Live December 15, 2009 Does anybody’s question ever get answered in this tutorial? This tutorial is obsolete in later versions of SQUID! Reply 206 Sye MUshtaq Ahmed December 24, 2009 Hello, Really the guide is wonderful and it worked 100% for me and even the clients using it are amazed with its speed. But there is one problem now !!! When client access Email, like yahoo and hotmail any others in i.e: massege will show after few seconds this page can’t be dis[layed plz solve my problem ASAP REGARDS Reply 207 Sam December 31, 2009 Hello, I facing a problem when setup the server as router. My client can ping to eth 1 and eth 0 succesfully. However the client can’t browse internet through proxy servy (eth 0). For your information, i setup the proxy server follow exactly what was writen hre. May i know what is the problem? Thanks ! Reply 208 Devinka January 16, 2010 384 HI , Thanks for the howto . it works fine . Reply 209 Lalit Kumar January 16, 2010 Hi All, i have a issue with my transparent squid server it is working transparet for it’s own subnet or vlan systems . Like my sqy=uid server ip is 172.16.110.24 and it;s working fine for a system with ip 172.16.110.22 . but it is not working transparently for other systems like 172.16.119.37 and 172.16.122.43 i add acl mynet src 172.16.110.0 /24 172.16.119.0/24 http_access allow mynet . but it is working only for same vlan systems why ? can anyone help me out in this issue Reply 210 gopi chand January 19, 2010 where can I add the following line in squid.conf . please help me anybody .the line are httpd_accel_host virtual httpd_accel_port 80 httpd_accel_with_proxy on httpd_accel_uses_host_header on acl lan src 192.168.1.1 192.168.2.0/24 http_access allow localhost http_access allow lan Reply 211 Kartik Vashishta February 4, 2010 So I have to enable IP rotuing for this to work, what is the command to do that…tell eth0 to route to eth1? 
Reply 212 bobzi February 12, 2010 385 Dear LINUXTITLI I configured Squid 2.5 with your configuration. Everything is fine but HTTPS sites don’t accept request. I’ve tried several times to open HTTPS (SSL Port) in iptables by some different commands, however I still have problem. On the other hands, when I set Proxy in Internet Option tab, clients can open Secure sites, when I erase the proxy setting only the secure site has a problem to login. And also I need setup clients without any setting in browser for some reasons. Actually I have a serious problem in this setting. I need some help. Could you please give a solution?! Dear LINUXTITLI or somebody else. I will be grateful. Many thanks Reply 213 Fredl February 12, 2010 Hi, kotnik’s magic filter in posting #4 ignores the greediness of sed. His code will hide any lines containing a ‘#’ (and following comment) somewhere in them. This will reflect an uncomplete setup. Better use this grep-only command: grep -vE ‘^#|^*$’ /etc/squid3/squid.conf To all the help-seekers here: Better try a suitable forum for your questions, a blog like this one is far from being a perfect platform for helping with configuration mistakes. Regards, Fredl. Reply 214 Fredl February 12, 2010 NB: Sorry, forgot to say “thank you” for the fine tutorial, LINUXTITLI! :) @Lalit Kumar: try acl mynet src 172.16.110.0/24 172.16.119.0/24 172.16.122.0/24 or simplier (but less restrictive): acl mynet src 172.16.0.0/16 Most of the others here have some typos, too… 386 Reply 215 Manoj February 15, 2010 I configured RHEL5 squid server as an proxy server in windows envirnoment, it give me an problem for outlook express & for Ms outlook that users on windows side are not able to send & recieve their e-mails. However i have open the safe ports & iptable rule’s. Also, i want to configure an squid server as an proxy server in such way that some of the users are not able to access the specific web sites but some users are able to access same websites. While users get their IP’s from DHCP server. Reply 216 saltio May 12, 2010 outlook express & for Ms outlook that users on windows side are not able to send & recieve their e-mails. What are the commands to open the safe ports & iptable rule’s. Thanks for the setup – this will save alot of time. Reply 217 vikram February 24, 2010 I have always noticed one thing, while going for transparent squid or IP MASQUERADING, i always have to keep by named service on. and specify the DNS ip settings in client. Is dns necessary. because we dont need that in normal squid (non-transparent). Kindly Guide Reply 218 bezt March 4, 2010 can U tell me how i configure my iptables to non-transparen proxy Thx b4 regards Reply 219 Sharon March 9, 2010 Hi i am very bad at Linux and failed many a time, but want to setup a similar system including web content filtering using dansgaurdian package. This system is intented for use in non-profit organisations with which i am 387 associated. If somebody could spare some time to setup this system please mail me back at my email address [email protected] Best Regards, Sharon. Reply 220 Anil March 19, 2010 I want to setup squid proxy servers ( three ) with one gateway server. I know it can be done by linux LVS. can somebody give me detailed howto or step by step guide to setup this. 
Thanks in advance Reply 221 Nick April 9, 2010 Please Help, i have installed and configured squid-3.1.1 on open suse 10.2 but and it starts well but for some reason client machines cant access internet through squid, I have one LAN port connected to the switch and i want all computers to use it as a proxy server with port 8080. Do i need to install Apache as well?..Below are the configurations acl manager proto cache_object acl localhost src 127.0.0.1/32 acl localhost src ::1/128 acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 acl to_localhost dst ::1/128 acl mrc src 10.0.1.0/24 acl localnet src 10.0.0.0/24 # RFC1918 possible internal network acl localnet src 172.16.0.0/12 # RFC1918 possible internal network acl localnet src 192.168.0.0/16 # RFC1918 possible internal network acl localnet src fc00::/7 # RFC 4193 local private network range acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines acl SSL_ports port 443 acl Safe_ports port 80 # http acl Safe_ports port 21 # ftp acl Safe_ports port 443 # https acl Safe_ports port 70 # gopher acl Safe_ports port 210 # wais acl Safe_ports port 1025-65535 # unregistered ports 388 acl Safe_ports port 280 # http-mgmt acl Safe_ports port 488 # gss-http acl Safe_ports port 591 # filemaker acl Safe_ports port 777 # multiling http acl CONNECT method CONNECT http_access allow manager localhost http_access deny manager http_access deny !Safe_ports http_access allow safe_ports http_access deny CONNECT !SSL_ports http_access allow localnet http_access allow localhost http_access allow mrc http_access deny all icp_access allow localnet icp_access deny all htcp_access allow localnet http_port 3128 http_port 8080 hierarchy_stoplist cgi-bin ? cache_dir ufs /usr/local/squid/var/cache 1000 16 256 access_log /usr/local/squid/var/logs/cache.log squid cache_access_log /usr/local/squid/var/logs/access.log squid cache_store_log /usr/local/squid/var/logs/store.log squid cache_store_log /usr/local/squid/var/logs/store.log squid coredump_dir /usr/local/squid/var/cache coredump_dir /var/spool/squid refresh_pattern ^ftp: 1440 20% 10080 refresh_pattern ^gopher: 1440 0% 1440 refresh_pattern -i (/cgi-bin/|\?) 0 0% 0 refresh_pattern . 0 20% 4320 cache_mgr root visible_hostname mskproxy.mrcuganda.org icp_port 3130 always_direct allow all cache_effective_user squid cache_effective_group squid htcp_port 4827 cache_mgr [email protected] Reply 389 222 ammar ali April 13, 2010 i need all proxy seting Reply 223 Sarmed Rahman April 18, 2010 a million thanks ^_^ Reply 224 Prasad May 13, 2010 thanks for the info. i was really in need of this. Reply 225 hmtum01 May 19, 2010 how can i block user according to the mac address filtering in trasparent squid proxy. which is the version of that squid Reply 226 rocky May 31, 2010 thanks Reply 227 Alex Y. Telkov (Russia) June 2, 2010 Thank a lot! I have a problem with Total Commander while users from local net try to access FTP resources. I have classic architecture in local HQ lan “LAN — Linux-router — CISCO 871-k9 — Internet”. I apologize, You approach in solving FTPport-error problem helps me to solve my situation. If my “server-under-construction” be turned on at moment, I start to emplement You solution remotely immideatly! 
:)
Reply
228 Pradip Raut Chhetri June 6, 2010
I have done everything in the 3 easy steps for a transparent proxy, but every time I restart Squid I get errors about the following lines:
httpd_accel_host virtual
httpd_accel_port 80
httpd_accel_with_proxy on
httpd_accel_uses_host_header on
Help me: do I have to set up an httpd server before configuring your "3 easy steps transparent proxy"? Thank you.
Reply
229 gbrane June 14, 2010
Important for Ubuntu users: in /etc/sysctl.d/10-network-security.conf the following lines must be commented out:
net.ipv4.conf.default.rp_filter=1
net.ipv4.conf.all.rp_filter=1
I lost a month solving this problem!
Reply
230 DEEPAK June 30, 2010
Can anybody help with configuring the Linux firewall? This is my first time using it; please explain how to configure it, or send a link or the commands.
Reply

Access Control and Access Control Operators
From Squid User's Guide

Access control lists (acls) are often the most difficult part of the configuration of a Squid cache: the layout and concept is not immediately obvious to most people. Hang on to your hat! Unless Chapter 4 is still fresh in your mind, you may wish to skip back and review the access control section of that chapter before you continue. This chapter assumes that you understand the difference between an acl and an acl-operator.

Contents
1 Uses of ACLs
2 Access Classes and Operators
3 Acl lines
  3.1 A unique name
  3.2 Type
  3.3 Decision String
  3.4 Types of acl
    3.4.1 Source/Destination IP address
    3.4.2 Source/Destination Domain
    3.4.3 Words in the requested URL
      3.4.3.1 A Quick introduction to regular expressions
      3.4.3.2 Using Regular expressions to match words in the requested URL
      3.4.3.3 Words in the source or destination domain
    3.4.4 Current day/time
    3.4.5 Destination Port
    3.4.6 Protocol (FTP, HTTP, SSL)
    3.4.7 HTTP Method (GET, POST or CONNECT)
    3.4.8 Browser type
    3.4.9 Username
    3.4.10 Autonomous System (AS) Number
    3.4.11 Username and Password
    3.4.12 Using the NCSA authentication module
    3.4.13 Using the SMB authentication module
    3.4.14 Using the RADIUS authentication module
    3.4.15 SNMP Community
4 Acl-operator lines
  4.1 The other Acl-operators
    4.1.1 The no_cache acl-operator
    4.1.2 The ident_lookup_access acl-operator
    4.1.3 The miss_access acl-operator
    4.1.4 The always_direct and never_direct acl-operators
    4.1.5 The broken_posts acl-operator
5 SNMP Configuration
  5.1 Querying the Squid SNMP server on port 3401
  5.2 Running multiple SNMP servers on a cache machine
    5.2.1 Binding the SNMP server to a non-standard port
    5.2.2 Access Control with more than one Agent
6 Delay Classes
  6.1 Slowing down access to specific URLs
  6.2 The First Pool Class
  6.3 The Second Pool Class
  6.4 The Third Pool Class
  6.5 Using Delay Pools in Real Life
7 Conclusion

Uses of ACLs

The primary use of the acl system is to implement simple access control: to stop other people using your cache infrastructure.
(There are other uses of acls, described later in this chapter; in the meantime we are going to discuss only the access control function of acls.) Most people implement only very basic access control, denying access to people that are not on their network. Squid's access system is incredibly flexible, but 99% of administrators only use the most basic elements. In this chapter some examples of the less common uses of acls are covered: hopefully you will discover some Squid feature which suits your organization - and which you didn't think was part of Squid before. Access Classes and Operators There are two elements to access control: classes and operators. Classes are defined with the acl squid.conf tag, while the names of the operators vary: the most common operator used is http_access. Let's work through the below example line-by-line. Here, a systems administrator is in the process of installing a cache, and doesn't want other staff to access it while it's being installed, since it's likely to ping-pong up and down during the installation. Once the administrator is happy with the config, the whole network will be allowed access. The admin's PC is at the IP 10.0.0.3. If the admin connects to the cache from the PC, Squid does the following: Accepts the (HTTP) connection and reads the request Checks the line that reads http_access allow myIP. Since your IP address matches the IP defined in the myIP acl, access is allowed. Remember that Squid drops out of the operator list on the first match. If you connect from a different PC (on the 10.0.*.* network) things are very similar: Accepts the connection and reads the request The source of the connection doesn't match the myIP acl, so the next http_access line is checked. The myNet acl matches the source of the connection, so access is denied. An error page is returned to the user instead of the requested page. If someone reaches your cache from another netblock (from, say, 192.168.*.*), the above access list will not block access. The reason for this is quite 395 complicated. If Squid works through a set of acl-operators and finds no match, it defaults to using the opposite of the last match (if the previous operator is an allow, the default is to deny; if it's a deny, the default is to allow). This seems a bit strange at first, but let's look at an example where this behaviour is used: it's more sensible than it seems. The following acl example is nice and simple: it's something a first-time cache admin could create. A config file with no access lists will allow cache access without any restrictions. An administrator using the above access lists obviously wishes to allow only his network access to the cache. Given the Squid behavior of inverting the last decision, we have an invisible line reading http_access deny all Inverting the last decision is a simple (if not immediately obvious) solution to one of the most common acl mistakes: not adding a final deny all to the end of your acl list. With this new knowledge, have a look at the first example in this chapter: you will see why I said not to use it in your configs. Given that the last operator denies the local network, local people will not be able to access the cache. The remainder of the Internet, however, will! As discussed in Chapter 4, the simplest way of creating a catch-all acl is to match requests when they come from any IP address. When programs do netmask arithmetic a subnet of all zeros will match any IP address. 
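The example rulesets this walkthrough refers to appear to have been dropped from this copy of the chapter. A minimal sketch consistent with the description (the administrator's PC at 10.0.0.3, staff on the 10.0.*.* network) might look like the following; the last two lines show the catch-all class built from an all-zeros address and netmask, which matches any source IP and gives you an explicit final deny:

# during installation: only the admin's PC may use the cache
acl myIP src 10.0.0.3
acl myNet src 10.0.0.0/255.255.0.0
http_access allow myIP
http_access deny myNet

# a catch-all acl: an all-zeros address and netmask match every source IP
acl all src 0.0.0.0/0.0.0.0
http_access deny all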
A corrected version of the first example dispenses with the myNet acl. Once the cache is considered stable and is moved into production, the config would change. http_access lines do add a very small amount of overhead, but that's not the only reason to have simple access rulesets: the fewer rulesets, the easier your setup is to understand. The below example includes a deny all rule although it doesn't really need one: you may know of the automatic inversion of the last rule, but someone else working on the cache may not. You should always end your access lists with an explicit deny. In Squid-2.1 the default config file does this for you when you insert your HTTP acl operators in the appropriate place. Acl lines The Examples so far have given you an idea of an acl line's layout. Their layout can be symbolized as follows (? Check! ?): 396 acl name type (string|"filename") [string2] [string3] ["filename2"] The acl tag consists of a minimum of three fields: a unique name; an acl type and a decision string. An acl line can have more than one decision string, hence the [string2] and [string3] in the line above. A unique name This is supposed to be descriptive. Use a name such as customers or mynet. You have seen this lots of times before: the word myNet in the above example is one such case. There must only be one acl with a given name; if you find that you have two or more classes with similar names, you can append a number to the name: customer1, customer2 etc. I generally avoid this, instead putting all similar data on these classes into a file, and including the whole file as one acl. Check the Decision String section for some more info on this. Type So far we have discussed only acls that check the source IP address of the connection. This isn't sufficient for many people: it may be useful for you to allow connections at only certain times, or to only specific domains, or by only some users (using usernames and passwords). If you really want to, you can even combine all of the above: only allow connections from users that have the right password, have the right destination and are going to the right domain. There are quite a few different acl types: the next section of this chapter discusses all of the different types in detail. In the meantime, let's finish the description of the structure of the acl line. Decision String The acl code uses this string to check if the acl matches a given connection. When using this field, Squid checks the type field of the acl line to decide how to use the decision string. The decision string could be an IP address range, a regular expression or a list of domains or more. In the next section (where we discuss the types of acls available) we discuss the different forms of the Decision String. 397 If you have another look at the formal definition of the acl line above, you will note that you can have more than one decision string per acl line. Strings in this format are ORd together; if you were to specify two IP address ranges on the same line the return result of the acl would be true if either of the IP addresses match. (If source strings were ANDd together, then an incoming request would have to come from two IP address ranges at the same time. This is not impossible, but would almost certainly be pointless.) Large decision lists can be stored in files, so that your squid.conf doesn't get cluttered. Some of the caches I have worked on have had in the region of 2000 lines of acl rules, which could lead to a very cluttered squid.conf file. 
You can include a file into the decision section of an acl list by placing the filename (with path) in double-quotes. The file simply contains the data set; one datum per line. In the next example the file /usr/local/squid/conf/data/myNets can contain any number of IP ranges, one range per line. While on the topic of long lists of acls: it's important to note that you can end up slowing your cache response with very long lists of acls. Checking acls requires CPU time, and long lists can decrease cache performance, since instead of moving data to clients Squid is busy checking access lists. What constitutes a long list? Don't worry about lists with a few hundred entries unless you have a really slow or busy CPU. Lists thousands of lines long can, however, cause problems. Types of acl So far we have only spoken about acls that filter by source IP address. There are numerous other acl types: Source/Destination IP address Source/Destination Domain Regular Expression match of requested domain Words in the requested URL Words in the source or destination domain Current day/time Destination port Protocol (FTP, HTTP, SSL) Method (HTTP GET or HTTP POST) 398 Browser type Name (according to the Ident protocol) Autonomous System (AS) number Username/Password pair SNMP Community Source/Destination IP address In the examples earlier in this chapter you saw lines in the following format: acl myNet src 10.0.0.0/255.255.0.0 http_access allow myNet The above acl will match when the IP address comes from any IP address between 10.0.0.0 and 10.0.255.255. In recent years more and more people are using Classless Internet Domain Routing (CIDR) format netmasks, like 10.0.0.0/16. Squid handles both the traditional IP/Netmask and more recent IP/Bits notation in the src acl type. IP ranges can also be specified in a further format: one that is Squid specific. (? I need to spend some time hacking around with these: I am not sure of the layout ?) acl myNet src addr1-addr2/netmask http_access allow myNet Squid can also match connections by destination IP. The layout is very similar: simply replace src with dst. Here are a couple of examples: Source/Destination Domain Squid can also limit requests by their source domain. Though it doesn't always happen in the real world, network administrators can add reverse DNS entries for each of the hosts on their network. (These records are normally referred to as PTR records.) Squid can make decisions about the validity of incoming requests by checking their reverse DNS entries. In the below example, the acl is true if the request comes from a host with a reverse entry that is in either the qualica.com or squid-cache.org domains. acl myDomain srcdomain .qualica.com .squid-cache.org acl allow myDomain 399 Reverse DNS matches should not be used where security is important. A determined attacker (who controlled the reverse DNS entries for the attacking host) would be able to manipulate these entries so that the request comes from your domain. Squid doesn't attempt to check that reverse and forward DNS entries match, so this option is not recommended. Squid can also be configured to deny requests to specific domains. Many people implement these filter lists for pornographic sites. The legal implications of this filtering are not covered here: there are many, and the relevant law is in a constant state of flux, so advice here would likely be obsolete in a very short period of time. I suggest that you consult a good lawyer if you want to do something like this. 
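Tying the pieces above together, here is a hedged sketch of a file-based source acl (using the /usr/local/squid/conf/data/myNets file mentioned earlier, one IP range per line) combined with a destination-domain block list; the banned_domains file name and its contents are purely illustrative:

# source networks read from a file, one range per line
acl myNets src "/usr/local/squid/conf/data/myNets"
# banned destination domains read from a file, one domain per line
acl banned_sites dstdomain "/usr/local/squid/conf/data/banned_domains"
http_access deny banned_sites
http_access allow myNets
# end with an explicit deny, as recommended above
http_access deny all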
The dst acl type allows one to match accesses by destination domain. This could be used to match urls for popular adult sites, and refuse access (perhaps during specific times). If you want to deny access to a set of sites, you will need to find out these site's IP addresses, and deny access to these IP addresses too. If you just put the URL Domain name in, someone determined to access a specific site could find out the IP address associated with that hostname and access it by entering the IP address in their browser. The above is best described with an example. Here, I assume that you want to restrict access to the site www.adomain.example. If you use either the host of nslookup commands, you would find that this server has the IP address 10.255.1.2. It's easiest to just have two acls: one for IPs and one for domains. If the lists get too large, you can simply place them in a file. Words in the requested URL Most caches can filter out URLs that contain a set of banned words. Regular expressions allow you to simply check if a word is in a given URL, but they also allow for more powerful searches of the URL. With a simple word check you would find it nearly impossible to create a rule that allows access to sites with the word sex in the URL, but at the same time denies access to all avi files on that site. With regular expressions this sort of checking becomes easy, once you understand the regex syntax. A Quick introduction to regular expressions We haven't encountered regular expressions in this book yet. A regular expression (regex) is an incredibly useful way of matching strings. As they are incredibly 400 powerful they can get a little complicated. Regexes are often used in stringoriented languages like Perl, where they make processing of large text files (such as logs) incredibly easy. Squid uses regular expressions for numerous things: refresh patterns and access control among them. If you have not used regular expressions before, you might want to have a look at the O'Reilly book on regular expressions or the appropriate section in the O'Reilly perl book. Instead of going into detail here, I am just going to give some (hopefully) useful examples. If you have perl installed on your machine, you could have a look at the perlre manual page to get an idea as to how the various regex operators (such as .) function. Regular expressions in Squid are case-sensitive by default. If you want to match both upper or lower-case text, you can prefix the regular expression with a -i. Have a look at the next example, where we use this to match either sex SEX (or even SeX). Using Regular expressions to match words in the requested URL Using regular expressions allows you to create more flexible access lists. So far you have only been able to filter sites by destination domain, where you have to match the entire domain to deny access to the site. Since regular expressions are used to match text strings, you can use them to match words, partial words or patterns in URLs or domains. The most common use of regex filters in ACL lists is for the creation of farreaching site filters: if the url or domain contain a set of banned words, access to the site is denied. If you wish to deny access to sites that contain the word sex in the URL, you would add one acl rule, rather than trying to find every site that has adult material on it. The big problem with regex filters is that not all sites that contain the word sex in the URL are pornographic. 
By denying these sites you are likely to be infringing people's rights, and you should refer to a lawyer for advice on the legality of this. Creating a list of sites that you don't want accessed can be tedious. There are companies that sell adult/unwanted-material lists which plug into Squid, but these can be expensive. If you cannot justify the cost, you can build and maintain a list of your own.

The url_regex acl type is used to match any word in the URL. Here is an example: in places where bandwidth is very expensive, system administrators may have no problem with people visiting pornographic sites. They may, however, want to stop people downloading huge avi files from these sites. The following example would deny downloads of avi files from sites that contain the word sex in the URL. The regular expression below matches any URL that contains the word sex AND ends with .avi. The urlpath_regex acl strips off the url-type and hostname, checking instead only the path and filename.

Words in the source or destination domain

Regular expressions can also be used for checking the source and destination domains of a request. The srcdom_regex tag is used to check that a request comes from a specific subdomain, while the dstdom_regex checks the domain part of the requested URL. (You could check the requested domain with a url_regex tag, but you could run into interesting problems with sites that refer to pages with urls like http://www.company.example/www.anothersite.example.) Here is an example acl set that uses a regular expression (rather than using the srcdomain and dstdomain tags). This example allows you to deny access to .com or .net sites if the request is from the .za domain. This could be useful if you are providing a "public peering" infrastructure to other caches in your geographical region. Note that this example is only a fragment of a complete acl set: you would presumably want your customers to be able to access any site, and there is no final deny acl.

acl bad_dst_TLD dstdom_regex \.com$ \.net$
acl good_src_TLD srcdom_regex \.za$
# allow requests FROM the za domain UNLESS they want to go to \.com or \.net
http_access deny bad_dst_TLD
http_access allow good_src_TLD

Current day/time

Squid allows one to permit access to specific sites by time. Often businesses wish to filter out irrelevant sites during work hours. The Squid time acl type allows you to filter by the current day and time. By combining the dstdomain and time acls you can allow access to specific sites (such as the sites of suppliers or other associates) during work hours, but allow access to other sites only after work hours. The layout is quite compact:

acl name time [day-list] [start_hour:minute-end_hour:minute]

Day-list is a list of single characters indicating the days that the acl applies to. Using the first letter of the day would be ambiguous (since, for example, both Tuesday and Thursday start with the same letter). When the first letter is ambiguous, the second letter is used: T stands for Tuesday, H for Thursday. Here is a list of the days with their single-letter abbreviations:

S - Sunday
M - Monday
T - Tuesday
W - Wednesday
H - Thursday
F - Friday
A - Saturday

Start_hour and end_hour are times written in 24-hour ("military") time (17:00 instead of 5:00). End_hour must always be larger than start_hour.
Unfortunately, this means that you can't simply write:

acl darkness time 17:00-6:00 # won't work

You have to specify two separate ranges:

acl night time 17:00-24:00
acl early_morning time 00:00-6:00

As you can see from the original definition of the time acl, you can specify the day of the week (with no time), the time (with no day), or both the time and day (?check!?). You can, for example, create a rule that specifies weekends without specifying that the day starts at midnight and ends at the following midnight. The following acl will match on either Saturday or Sunday.

acl weekends time SA

The following example is too basic for real-world use. Unfortunately, creating a good example requires some of the more advanced features of the http_access line; these are covered (with examples) in the next section of this chapter.

Destination Port

Because of the design of the HTTP protocol, people can connect to things like IRC servers through your cache servers, even though the two protocols are very different. The same problem can be used to tunnel telnet connections through your cache server. The part of HTTP that allows this is the CONNECT method, mainly used for securing https connections with SSL. Since you generally don't want to proxy anything other than the standard supported protocols, you can restrict the ports that your cache is willing to connect to. Web servers almost always listen for incoming requests on port 80. Some servers (notably site-specific search engines and unofficial sites) listen on other ports, such as 8080. Other services (such as IRC) also use high-numbered ports. The default Squid config file limits standard HTTP requests to the port ranges defined in the Safe_ports squid.conf acl. SSL CONNECT requests are even more limited, allowing connections to only ports 443 and 563. However, keep in mind that these port assignments are only a convention, and nothing prevents people from hosting (on machines they control) any type of server on any port they choose.

Port ranges are limited with the port acl type. If you look in the default squid.conf, you will see lines like:

acl SSL_ports port 443 563
acl Safe_ports port 80 21 443 563 70 210 1025-65535

The format is pretty straightforward: a destination port of 443 or 563 is matched by the first acl, while 80, 21, 443, etc. are matched by the second line. The most complicated section of the examples above is the end of the second line: the text that reads "1025-65535". The "-" character is used in Squid to specify a range. The example thus matches any port from 1025 all the way up to 65535. These ranges are inclusive, so the second line matches ports 1025 and 65535 too. The only low-numbered ports which Squid should need to connect to are 80 (the HTTP port), 21 (the FTP port), 70 (the Gopher port), 210 (wais) and the appropriate SSL ports. All other low-numbered ports (where common services like telnet run) do not fall into the 1025-65535 range, and are thus denied.

The following http_access line denies access to URLs that are not in the correct port ranges. You have not seen the ! http_access operator before: it inverts the decision. The line below would read "deny access if the request does not fall in the range specified by the Safe_ports acl" if it were written in English. If the port matches one of those specified in the Safe_ports acl line, the next http_access line is checked. More information on the format of http_access lines is given in the next section, Acl-operator lines.
http_access deny !Safe_ports

Protocol (FTP, HTTP, SSL)

Some people may wish to restrict their users to specific protocols. The proto acl type allows you to restrict access by the URL prefix: the http:// or ftp:// bit at the front. The following example will deny requests that use the FTP protocol.

The default squid.conf file denies access to a special type of URL: those which use the cache_object protocol. When Squid sees a request for one of these URLs it serves up information about itself: usage statistics, performance information and the like. The world at large has no need for this information, and it could be a security risk.

HTTP Method (GET, POST or CONNECT)

HTTP can be used for downloading (GETting data) or uploading (POSTing data to a site). The CONNECT mode is used for SSL data transfers. When a connection is made to the proxy, the client specifies what kind of request (called a method) it is sending. A GET request looks like this:

GET http://www.qualica.com/ HTTP/1.1
(followed by a blank line)

If you were connecting using SSL, the GET word would be replaced with the word CONNECT. You can control what methods are allowed through the cache using the method acl type. The most common use is to stop CONNECT-type requests to non-SSL ports. The CONNECT method allows data transfer in any direction at any time: if you telnet to a badly configured proxy and enter something like:

CONNECT www.domain.example:23 HTTP/1.1
(followed by a blank line)

you might end up with a telnet connection to www.domain.example just as if you had telnetted there from the cache server itself. This can be used to get around packet filters, firewall access lists and passwords, which is generally considered a bad thing! Since CONNECT requests can be quite easily exploited, the default squid.conf denies SSL requests to non-standard ports (as described in the section on the port acl-operator). Let's assume that you want to stop your clients from POSTing to any sites (note that doing this is not a good idea, since people using some search engines, for example, would run into problems); at this stage this is just an example. (?TODO: Example)

Browser type

Companies sometimes have policies as to what browsers people can use. The browser acl type allows you to specify a regular expression that can be used to allow or deny access.

Username

Logs generally show the source IP address of a connection. When this address is on a multiuser machine (let's use a Unix machine at a university as an example) you cannot pin down a request as being from a specific user. There could be hundreds of people logged into the Unix machine, and they could all be using the cache server. Trying to track down a misbehaver is very difficult in this case, since you can never be sure which user is actually doing what. To solve this problem, the ident protocol was created. When the cache server accepts a new connection, it can call back to the machine the connection came from (on a low-numbered port, so the reply cannot be faked) to find out who's on the other end of the connection. This doesn't make any sense on single-user systems: people can just load their own ident servers (and become Daffy Duck for a day). If you run multi-user systems then you may want only certain people on those machines to be able to use the cache. In this case you can use the ident username to allow or deny access. One of the best things about Unix is the flexibility you get.
One of the best things about Unix is the flexibility you get. If you wanted, for example, only second-year students to have access to the cache servers via your Unix machines, you could create a replacement ident server. This server could find out which user has connected to the cache, but instead of returning the username it could return a string like "second_year" or "postgrad". Rather than maintaining a list of which students are in which year on both the cache server and the central Unix system, you could use simple Squid rules and let the ident server do the work of checking which user is which.

Autonomous System (AS) Number

Squid is often used by large ISPs. These ISPs want all of their customers to have access to their caches without maintaining incredibly long, hand-edited ACL lists (don't forget that such long lists of IPs generally increase the CPU usage of Squid too). Large ISPs all have AS (Autonomous System) numbers, which are used by Internet routers running the BGP (Border Gateway Protocol) routing protocol. The whois server whois.ra.net keeps a (supposedly authoritative) list of all the IP ranges that are in each AS. Squid can query this server and get a list of all IP addresses that the ISP controls, reducing the number of rules required; the relevant acl types are src_as and dst_as, which match the client and destination address respectively against the given AS number. The data returned is also stored in a radix tree, for more CPU-friendly retrieval. Sometimes the whois server is updated only sporadically, which could lead to new networks being denied access incorrectly. It's probably best to automate the process of registering new IP ranges with the whois server if you are going to use this function. If your region has some sort of local whois server that handles queries in the same way, you can use the as_whois_server Squid config option to query a different server.

Username and Password

If you want to track Internet usage it's best to get users to log into the cache server when they want to use the net. You can then use a stats program to generate per-user reports, no matter which machine on your network a person is using. Universities and colleges often have labs with many machines, where it is difficult to tell which user is sitting in front of a machine at any specific time. By using usernames and passwords you solve this problem. Squid uses modules to do user authentication, rather than including code to do it directly. The default Squid source does, however, include two standard modules: the first authenticates users from a password file (the NCSA module), the other uses SMB (MS Windows) authentication. Since these modules are not compiled when you compile Squid itself, you will need to cd to the appropriate source directory (under auth_modules) and run make. If the compile goes well, a make install will place the program file in the /usr/local/squid/bin/ directory and any config files in the /usr/local/squid/etc/ directory. NCSA authentication is the easiest to use, since it's self-contained. The SMB authentication program requires that Samba (samba.org) be installed, since it effectively talks to the SMB server through Samba. The squid.conf file uses the authenticate_program tag to decide which external program to use to authenticate users. If Squid were to start only one authentication program, a slow username/password lookup could slow the whole cache down (while all other connections waited to be authenticated). Squid thus opens more than one authentication program at a time, sending pending requests to the second when the first is busy, to the third when the second is, and so forth. The actual number of programs started is specified by the authenticate_children squid.conf value. The default is five, but you will probably need to increase this for a heavily loaded cache server.
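Configuring an authentication program by itself does not force anyone to log in; you also need an acl of type proxy_auth and an http_access rule that uses it. A minimal sketch (the acl name is arbitrary, and REQUIRED means any valid username will do):

acl password proxy_auth REQUIRED
http_access allow password
http_access deny all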
Using the NCSA authentication module

To use the NCSA authentication module, you will need to add the following line to your squid.conf:

authenticate_program /usr/local/squid/bin/ncsa_auth /usr/local/squid/etc/passwd

You will also need to create the appropriate password file (/usr/local/squid/etc/passwd in the example above). This file consists of username and password pairs, one per line, where the username and password are separated by a colon (:), just as they are in a Unix /etc/passwd file. The password is encrypted with the same function as the passwords in /etc/passwd (or /etc/shadow on newer systems). Here is an example password line:

oskar:lKdpxbNzhlo.w

Since the encrypted passwords are the same, and the ncsa_auth module understands the /etc/passwd or /etc/shadow file format, you could simply copy the system password file periodically. If your users do not already have passwords in Unix crypt format somewhere, you will have to use the htpasswd program (in /usr/local/squid/bin/) to generate the appropriate user and password pairs.

Using the SMB authentication module

Very simple:

authenticate_ip_ttl 5 minutes
auth_param basic children 5
auth_param basic realm Servidor de Autenticacion!
auth_param basic program /usr/lib/squid/smb_auth -W work_group -I server_name

Using the RADIUS authentication module

Once you have compiled squid_radius_auth (typically ./configure && make && make install; you can get a copy at http://www.squid-cache.org/contrib/squid_radius_auth/), you must add the following lines to squid.conf (for basic auth):

acl external_traffic proxy_auth REQUIRED
http_access allow external_traffic
auth_param basic program /usr/local/squid/libexec/squid_radius_auth -f /usr/local/squid/etc/squid_radius_auth.conf
auth_param basic children 5
auth_param basic realm This is the realm
auth_param basic credentialsttl 45 minutes

After you have added these parameters you must edit /usr/local/squid/etc/squid_radius_auth.conf, change the default RADIUS server hostname (or IP address) and change the shared key. Restart Squid for the change to take effect.

SNMP Community

If you have configured Squid to support SNMP, you can also create acls that filter by the requested SNMP community. By combining source-address filters (with the src acl type) and community filters (using the snmp_community acl type) you can restrict sensitive SNMP queries to administrative machines while allowing safer queries from the public. SNMP setup is covered in more detail later in the chapter, where we discuss the snmp_access acl-operator.

Acl-operator lines

Acl-operators are the other half of the acl system. For each connection the appropriate acl-operators are checked, in the order that they appear in the file. You have met the http_access and icp_access operators before, but they aren't the only Squid acl-operators. All acl-operator lines have the same format; although the line below mentions http_access specifically, the layout applies to all the other acl-operators too.

http_access allow|deny [!]aclname [[!]aclname2 ...]

(All the acls named on one line must match for the line to match; they are simply listed one after another, separated by spaces, with no "&" or other operator between them.)

Let's work through the fields from left to right. The first word is http_access, the actual acl-operator. The allow and deny words come next.
If you want to deny access to a specific class of users, you change the customary allow to deny in the acl-operator line. We have seen where a deny line is useful before, with the final deny of all IP ranges in previous examples. Let's say that you wanted to deny Internet access to a specific list of IP addresses during the day. Since an acl can only have one type, you cannot create a single acl that matches both an IP address and a time of day. By combining more than one acl per acl-operator line, though, you get the same effect. Consider the following acls:

acl dialup src 10.0.0.0/255.255.255.0
acl work time 08:00-17:00

If you could create an acl-operator that matched only when both the dialup and work acls were true, you could deny those clients access during working hours, so that they could only connect at the right times. This is where the aclname2 in the acl-operator definition above comes in. When you specify more than one acl per acl-operator line, all of the acls have to match for the acl-operator to be true: the acl-operator ANDs the results of the individual acl checks together to decide whether to return true or false. You could thus deny the dialup range cache access during working hours with a rule like:

http_access deny dialup work

You can also invert an acl's result value by using an exclamation mark (the traditional NOT operator from many programming languages) before the appropriate acl. In the following line I have reduced Example 6-4 to a single http_access rule, relying on the fact that Squid defaults to the opposite of the last rule for requests that match nothing:

http_access deny all !myNet

Since this example is quite complicated, let's cover it in more detail. An IP from the outside world will match the 'all' acl but not the 'myNet' acl; the request will thus match the http_access line and be denied. Consider the binary logic for a request coming in from the outside world, where the IP is not in the myNet acl:

Deny http access if ((true) & (!false))

For an IP in the 10.0.0.0 range the myNet value is true, and the logic looks like this:

Deny http access if ((true) & (!true))

An IP in the 10.0.0.0 range will thus not match the only http_access line in the Squid config file. Remembering that Squid defaults to the opposite of the last match in the file, accesses from the myNet IP range will be allowed.

The other Acl-operators

You have encountered only the http_access and icp_access acl-operators so far. The other acl-operators are:

no_cache
ident_lookup_access
miss_access
always_direct and never_direct
snmp_access (covered in the next section of this chapter)
delay_access, part of the delay classes feature (covered in the next section of this chapter)
broken_posts

The no_cache acl-operator

The no_cache acl-operator is used to stop certain objects from being cached. The default Squid config file includes an example no_cache line that stops the results of cgi programs from being stored in the cache. If you want to ensure that cgi pages are not cached, un-comment the following lines in squid.conf:

acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY

The first line uses a regular-expression match to find URLs that have cgi-bin or ? in the path (since we are using the urlpath_regex acl type, a site with a name like cgi-bin.qualica.com will not be matched). The no_cache acl-operator then stops matching objects from being stored in, or served from, the cache.

The ident_lookup_access acl-operator

Earlier we discussed using the ident protocol to control cache access. To reduce network overhead, Squid does an ident lookup only when it needs to.
If you are using ident to do access control, Squid will do an ident lookup for every request anyway, and you don't have to worry about this acl-operator. Many administrators would like to log the ident value for connections without actually using it for access control. Squid used to have a simple on/off switch for ident lookups, but this incurred extra overhead in the cases where the lookup wasn't useful (where, for example, the connection is from a desktop PC). Let's consider an example. Assume that you have one Unix server (at IP address 10.0.0.3), and all remaining IPs in the 10.0.0.0/255.255.255.0 range are desktop PCs. You don't want to log the ident value for the PCs, but you do want to record it when the connection is from the Unix machine. An acl set along the following lines does this (the acl name is illustrative):

acl unixBox src 10.0.0.3/255.255.255.255
ident_lookup_access allow unixBox
ident_lookup_access deny all

If a system cracker is attempting to attack your cache, it can also be useful to have their ident value logged. You can invert the approach above so that Squid skips ident lookups for machines that are allowed access, but performs a lookup (and logs the result) whenever a request comes from a disallowed IP range.

The miss_access acl-operator

The ICP protocol is used by many caches to find out whether objects are in another cache's on-disk store. If you are peering with other organisations' caches, you may wish them to treat you as a sibling, where they only get data that you already have stored on disk. If an unscrupulous cache admin were to change their cache_peer line to read parent instead of sibling, they could get you to retrieve objects on their behalf. To stop this from happening, you can create an acl that contains the peering caches, and use the miss_access acl-operator to ensure that only hits are served to those caches. In response to all other requests, an access-denied message is sent (so if a sibling complains that they almost always get error messages, it's likely that they think you should be their parent, while you think they should be treating you as a sibling). When looking at the following example it is important to realise that http_access lines are checked before any miss_access lines. If the request is denied by the http_access lines, an error page is returned and the connection closed, so the miss_access lines are never checked. This means that the last miss_access line in the example doesn't allow random IP ranges to use your cache; it only allows requests that have already passed the http_access test. This is simpler than having one miss_access line for each http_access line in the file, and it reduces CPU usage too, since only a couple of acls are checked rather than one for every http_access rule.
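A sketch of the arrangement just described; the ranges are placeholders (10.0.0.0/24 standing in for your own clients, 10.66.0.0/16 for the would-be sibling cache):

acl myNet src 10.0.0.0/255.255.255.0
acl siblingCache src 10.66.0.0/255.255.0.0
http_access allow myNet
http_access allow siblingCache
http_access deny all
miss_access deny siblingCache
miss_access allow all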
The always_direct and never_direct acl-operators

These operators help you make controlled decisions about which servers to connect to directly, and which to connect to through a parent cache/proxy. I discussed this set of options briefly in Chapter 3, during the Basic Installation phase. These tags are covered in detail in the following chapter, in the Peer Selection section.

The broken_posts acl-operator

Some servers incorrectly handle POST data, requiring an extra Carriage-Return (CR) and Line-Feed (LF) after a POST request. Since obeying the HTTP specification would make Squid incompatible with these servers, there is an option to be non-compliant when talking to a specific set of servers. This option should be used very rarely. The url_regex acl type should be used for specifying the broken server.

SNMP Configuration

Before we continue: if you wish to use Squid's SNMP functions, you will need to have configured Squid with the --enable-snmp option, as discussed back in Chapter 2. The Squid source only includes SNMP code if it is compiled with the correct options. Normally a Unix SNMP server (also called an agent) collects data from the various services running on a machine, returning information about the number of users logged in, the number of sendmail processes running, and so forth. As of this writing, there is no standalone SNMP agent which gathers Squid statistics and makes them available to SNMP management stations for interpretation. Code has thus been added to Squid to handle SNMP queries directly. Squid normally listens for incoming SNMP requests on port 3401; the standard SNMP port is 161. For the moment I am going to assume that your management station can collect SNMP data from a port other than 161. Squid will thus listen on port 3401, where it will not interfere with any other SNMP agents running on the machine. No specific SNMP agent or management-station software is covered by this text. A Squid-specific mib.txt file is included in the /usr/local/squid/etc/ directory; most management-station software should be able to use this file to construct Squid-specific queries.

Querying the Squid SNMP server on port 3401

All snmp_access acl-operators are checked when Squid is queried by an SNMP management station. The default squid.conf file allows SNMP queries from any machine, which is probably not what you want: generally you will want only one machine to be able to make SNMP queries of your cache. Some SNMP information is confidential, and you don't want random people poking around your cache settings. To restrict access, simply create a src acl for the appropriate IP address, and use snmp_access to deny access for every other IP. Not all Squid SNMP information is confidential, though. If you want to split SNMP information into public and private, you can use an SNMP-specific acl type to allow or deny requests based on the community the client has requested.

Running multiple SNMP servers on a cache machine

If you are running multiple SNMP servers on your cache machine, you probably want to see all the SNMP data returned on one set of graphs or summaries. You don't want to have to query two SNMP servers on the same machine, since many SNMP analysis tools will not let you relate (for example) load average to the number of requests per second when the SNMP data comes from more than one source. Let's work through the steps Squid goes through when it receives an SNMP query. The request is accepted, and the access-control lists are checked. If the request is allowed, Squid checks whether it is a request for Squid information or a request for something it doesn't understand. Squid handles all Squid-specific queries internally; all other SNMP requests are simply passed to the other SNMP server, so Squid essentially acts as an SNMP proxy for SNMP queries it doesn't understand. This SNMP proxy mode allows you to run two servers on a machine but query them both on the same port. In this mode Squid will normally listen on port 161, and the other SNMP server is configured to listen on another port (let's use port 3456 for argument's sake). This way the client software doesn't have to be configured to query a different port, which especially helps when the client is not under your control.
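If you have the net-snmp command-line tools on your management station, you can check that Squid is answering SNMP queries with something along these lines (a sketch: cache1.example.com is a placeholder hostname, 3401 is Squid's default snmp_port, and .1.3.6.1.4.1.3495 is the root of the Squid MIB):

snmpwalk -v1 -c public cache1.example.com:3401 .1.3.6.1.4.1.3495

If your snmp_access rules permit the query, you should get a list of Squid's counters back.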
Binding the SNMP server to a non-standard port

Getting your SNMP server to listen on a different port may be as easy as changing one line in a config file. In the worst case, though, you may have to trick it into listening somewhere else. This section is a bit of a guide to IP server trickery! Server software can either listen for connections on a hard-coded port (where the port to listen on is coded into the source and fixed at compile time), or it can use standard system calls to find the port that it should listen on. Changing programs that use the second method to use a different port is easy: you edit the /etc/services file, changing the value for the appropriate service there. If this doesn't work, it probably means that your program uses hard-coded values, and your only recourse is to recompile from source (if you have it) or speak to your vendor. You can check that your server is listening on the new port by looking at the output of the netstat command. The following command should show you whether some process is listening for UDP data on port 3456:

cache1:~ $ netstat -na | grep udp | grep 3456
udp 0 0 0.0.0.0:3456 0.0.0.0:*
cache1:~ $

Changing the services port does have implications: client programs (like any SNMP management-station software running on the machine) also use the services file to find out which port to connect to when forming outgoing requests. If you are running anything other than a simple SNMP agent on the cache machine, you must not change the /etc/services file: if you do, you will encounter all sorts of strange problems! Squid doesn't use the /etc/services file; the port it listens on is stored in the standard Squid config file. Once the other server is listening on port 3456, we need to get Squid to listen on the standard SNMP port and proxy requests to port 3456. First, change the snmp_port value in squid.conf to 161. Since we are forwarding requests to another SNMP server, we also need to set forward_snmpd_port to the other server's port, 3456.

Access Control with more than one Agent

Since Squid is actually creating all the queries that reach the second SNMP server, using an IP-based access control system in the second server's config is useless: all requests will come from localhost. Since the second server cannot find out where the requests originally came from, Squid has to take over the access-control functions that were handled by the other server. Let's start with the simplest case: a single SNMP management station (at IP 10.0.0.2, say) that is to have access to all SNMP functions. A src acl for that address, an snmp_access allow rule for it and an snmp_access deny all rule below it are all that is needed. You may have classes of SNMP stations too: you may wish some machines to be able to inspect public data, while others are considered completely trusted. The special snmp_community acl type is used to filter requests by the community they ask for. In the following example all local machines are able to get data in the public SNMP community, but only the snmpManager machine is able to get other information. The example uses the ANDing of the publicCommunity and myNet acls to ensure that only people on the local network can get even the public information.
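A sketch of rules matching that description (10.0.0.2 is the management station mentioned above; the 10.0.0.0/24 range stands in for your local network):

acl snmpManager src 10.0.0.2/255.255.255.255
acl myNet src 10.0.0.0/255.255.255.0
acl publicCommunity snmp_community public
snmp_access allow snmpManager
snmp_access allow publicCommunity myNet
snmp_access deny all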
Delay Classes

Delay classes are generally used in places where bandwidth is expensive. They let you slow down access to specific sites (so that other downloads can happen at a reasonable rate), and they allow you to stop a small number of users from using all your bandwidth (at the expense of those just trying to use the Internet for work). To ensure that some bandwidth is available for work-related downloads, you can use delay pools. By classifying downloads into segments, and then allocating these segments a certain amount of bandwidth (in kilobytes per second), your link can remain uncongested for "useful" traffic. To use delay pools you need to have compiled Squid with the appropriate options: you must have used the --enable-delay-pools option when running the configure program back in Chapter 2.

Slowing down access to specific URLs

An acl-operator (delay_access) is used to split requests into pools. Since we are using acls, you can split up requests by source address, destination URL and more. There is more than one type (or class) of pool, and each type lets you limit bandwidth in a different way.

The First Pool Class

Rather than cover all of the available classes immediately, let's deal with a basic example first. In this example we have only one pool, and the pool catches all URLs containing the word abracadabra:

acl magic_words url_regex -i abracadabra
delay_pools 1
delay_class 1 1
delay_parameters 1 16000/16000
delay_access 1 allow magic_words

The first line is a standard acl: it returns true if the requested URL contains the word abracadabra. The -i flag makes the search case-insensitive. The delay_pools directive tells Squid how many delay pools there will be; here we have only one pool, so it is set to 1. The third line creates a delay pool (delay pool number 1, the first option) of class 1 (the second option to delay_class). The first delay class is the simplest: the download rates of all connections in the class are added together, and Squid keeps this aggregate value below a given maximum. The fourth line is the most complex, as you can see. The delay_parameters option allows you to set the speed limits on each pool. The first option is the pool to be manipulated: since we have only one pool in this example, this is set to 1. The second option consists of two values, the restore and max values, separated by a forward slash (/). If you download a short file at high speed, you create a so-called burst of traffic. Generally these short bursts are not a problem: they are normally html or text files, which are not the real bandwidth consumers. Since we don't want to slow everyone's access down (just the people downloading comparatively large files), Squid lets you configure a size at which downloads start to be slowed. If you download a short file, it arrives at full speed, but once you hit a certain threshold the data arrives more slowly. The restore value sets the download speed, and the max value sets the size at which files start being slowed down. Restore is in bytes per second, max is in bytes. In the above example, downloads proceed at full speed until 16000 bytes have been transferred; this ensures that small files arrive reasonably fast. Once that much data has been transferred, however, the transfer rate is slowed to 16000 bytes per second. At 8 bits per byte this means that connections are limited to 128 kilobits per second (16000 * 8).
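If you would rather let small objects through at full speed and only throttle larger transfers, give the pool a larger max than restore. A variation on the example above (the figures are only illustrative):

delay_parameters 1 16000/64000

With this setting, requests in the pool are served at full speed until the 64000-byte bucket has been drained; after that the aggregate rate is held to 16000 bytes per second, and the bucket refills at that same rate whenever the pool's traffic drops below it.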
The Second Pool Class

As I discussed in this section's introduction, delay pools can help you stop one user from flooding your links with downloads. You could place each user in their own pool and then set limits on a per-user basis, but administering those lists would become painful almost immediately. By using a different pool type, you can set rate limits by IP address easily. Let's consider another example: you have a 128kbit per second line. Since you want some bandwidth available for things like SMTP, you want to limit web access to 100kbit per second. At the same time, you don't want a single user to use more than their fair share of sustained bandwidth. Given that you have 20 staff members and 100kbit per second of remaining bandwidth, each person should not use more than 5kbit per second. Since it's unlikely that every user will be surfing at once, we can probably limit people to about four times that (20kbit per second, or 2.5 kbytes per second). In the following example, we change the delay class for pool 1 to 2. Delay class 2 allows us to specify both an aggregate (overall) bandwidth usage and a per-user usage. In the previous example the delay_parameters tag took only one set of options, the aggregate peak and burst rates. Given that we are now using a class-two pool, we have to supply two sets of options to delay_parameters: the overall speed and the per-IP speed. The 100kbit per second value is converted to bytes per second by dividing by 8 (giving us the 12500 values), and the 20kbit per second (2.5 kbytes per second) per-user figure we arrived at above gives us the 2500 values.

EXAMPLE

acl all src 0.0.0.0/0.0.0.0
delay_pools 1
delay_class 1 2
delay_parameters 1 12500/12500 2500/2500
delay_access 1 allow all

The Third Pool Class

This class is particularly useful to large organisations such as universities. The second pool class lets you stop individual users from flooding your links, but a lab full of students all operating at their maximum download rate can still flood the link. Since such a lab (or department, if you are not at a university) will generally have IP addresses in the same range, it is useful to be able to put a cap on the download rate of an entire network range, and the third pool class lets you do this. Currently this option only works on class-C network ranges, so if you are using variable-length subnet masks it will not help. In the next example we assume that you have three IP ranges, and each range must not use more than a third of your available bandwidth. For this example I am assuming that you have a 512kbit/s line and you want 64kbit/s kept available for SMTP and other protocols, leaving an overall download cap of 448kbit/s. Each class-C IP range will then have about 150kbit/s available. With 3 ranges of 256 IP addresses each, you could have several hundred PCs, which (if calculated exactly) gives each machine well under 1kbit per second. Since it is unlikely that all machines will be using the net at the same time, you can probably allocate each machine (say) 4kbit per second (a mere 500 bytes per second). In this example, we change the delay class of the pool to 3. The delay_parameters option now takes four arguments: the pool number, the overall bandwidth rate, the per-network bandwidth rate and the per-user bandwidth rate. The 4kbit per second limit for users seems a little low.
You can increase the per-user limit, but you may find that it's a better idea to change the max value instead, so that the limit only kicks in after (say) 16 kilobytes or so. This allows small pages to be downloaded as fast as possible, while large files are brought down without influencing other users. If you want, you can set the per-user limit to something quite high, or even to -1, which effectively means that there is no limit. Limits work from right to left, so if a user is sitting alone in a lab they will be limited by their per-user speed. If that value is undefined, they are limited by their per-network speed, and if that is undefined too, they are limited by the overall speed. This means that you can set the per-user limit higher than you might expect: if the lab is not busy, its users will still get good download rates (since they are only limited by the per-network limit).

EXAMPLE:

acl all src 0.0.0.0/0.0.0.0
delay_pools 1
delay_class 1 3
# 56000 bytes/second * 8 sets the overall limit at 448kbit/s
# 18750 bytes/second * 8 sets the per-network limit at 150kbit/s
# 500 bytes/second * 8 sets the per-user limit at 4kbit/s
delay_parameters 1 56000/56000 18750/18750 500/500
delay_access 1 allow all

Using Delay Pools in Real Life

By combining multiple acls, you can do interesting things with delay pools. Here are some examples:

By using time-based acls, you can limit people's speed during working hours, but allow them full-speed access outside office hours (a sketch of this follows the list).

Again with time-based acls, you can allocate a very small amount of bandwidth to http during working hours, discouraging people from browsing the Web during office hours.

By using acls that match specific source IP addresses, you can ensure that sibling caches have full-speed access to your cache.

You can prioritise access to a limited set of destination sites by using the dst or dstdomain acl types, inverting the rules we used above to slow access to some sites down.

You can combine username/password access lists and speed limits. You can, for example, allow users that have not logged into the cache to access the Internet, but at a much slower speed than users who have logged in. Users that are logged in get access to dedicated bandwidth, but are charged for their downloads.
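As a sketch of the first idea (the addresses and times are placeholders), the following holds the 10.0.0.0/24 range to roughly 100kbit/s overall and 20kbit/s per machine during office hours, while leaving evenings and weekends unrestricted:

acl myNet src 10.0.0.0/255.255.255.0
acl workhours time MTWHF 08:00-17:00
delay_pools 1
delay_class 1 2
delay_parameters 1 12500/12500 2500/2500
delay_access 1 allow myNet workhours
delay_access 1 deny all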
Conclusion

Once your acl system is correctly set up, your cache should essentially be ready to become a functional part of your infrastructure. The more advanced Squid features (transparent operation mode, for example) are covered in later chapters.

how to block all the IM -- skype, googletalk, msn, yahoo, ICQ (LinuxQuestions.org, Linux - Security forum)

#1 - 09-16-2005, 02:08 AM - cksoo (LQ Newbie):
Hi, may I know how to totally block all IM using iptables and Squid? My company's new policy requires me to block all IM. For the time being I have only been able to block Yahoo and ICQ using iptables, and MSN using Squid, but I am unable to block Skype and Google Talk. I hope someone can help me solve this or point me to a useful link. Thanks.
#2 - 09-16-2005, 03:53 AM - reddazz:
I am not a networking guru, but I think you need to find out which ports they use and block those ports.

#3 - 09-16-2005, 09:46 AM - craigevil:
If you can use Guarddog, it has separate listings for AIM, Yahoo, MSN, ICQ, IRC and NetMeeting.
AIM uses destination ports 5190-5193.
Yahoo! Messenger (security risk: low):
- Login to network: TCP from client to server, source port dynamic, destination ports 5050 and 23
- Conference: TCP from client to server, source port dynamic, destination ports 5000-5001
- Conference: UDP from client to server, source port dynamic, destination port 5000
MSN: destination port 1863.
ICQ:
- Bidirectional UDP from client to server, any source port, destination port 4000
- TCP from client to client, non-privileged source and destination ports
Jabber/Gtalk:
- TCP from client to server, source port dynamic, destination port 5222
- Jabber over Secure Socket Layer: TCP from client to server, source port dynamic, destination port 5223
Sorry, I do not have Skype installed; their documentation should tell you what ports to block.

#4 - 09-19-2005, 12:31 AM - ckamheng:
FYI, what I have noticed is that all the IM clients now use random ports, so it is quite difficult to block them. I tried to block all IM with the port numbers listed above, but users can still use the IM clients.

#5 - 09-19-2005, 07:21 AM - win32sux:
Wouldn't an application-level proxy be a more effective way to block these things? http://www.balabit.com/products/zorp/ Or maybe there's an add-on to iptables for IMs, kinda like the p2pwall project but for IMs instead of P2Ps? http://www.lowth.com/p2pwall/
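For reference, the TCP and UDP ports listed in post #3 translate into iptables rules along these lines. This is only a sketch for a Linux gateway filtering forwarded traffic, and, as noted above, many clients simply fall back to other ports when these are blocked:

iptables -A FORWARD -p tcp -m multiport --dports 1863,5050,5190,5191,5192,5193,5222,5223 -j DROP
iptables -A FORWARD -p udp --dport 4000 -j DROP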
#6 - 09-20-2005, 07:19 PM - cksoo (original poster):
I used a rather crude way to block IM: I installed all the IM clients and monitored which IPs they log on to, then blocked those IPs so users cannot reach them. Unfortunately, I still can't block users who log on to the IM servers through an external proxy server. Does anyone have an idea for this? Can iptables be used to stop internal users from using an external proxy server?

#7 - 09-20-2005, 08:58 PM - win32sux (quoting post #6):
Yes, if you know the IP of the proxy server it would be easy to block it with iptables...

#8 - 09-21-2005, 04:54 AM - cksoo (original poster):
The problem is that there are a lot of open proxies on offer, so they are quite difficult to block. Is there a general iptables rule that forces my internal users to go through my internal proxy server?

#9 - 04-11-2006, 05:35 AM - logu:
You can run a proxy and allow your users access only to the proxy port, denying everything else.

#10 - 05-23-2007, 02:54 AM - tuxchetan:
To disable GTalk, set up these rules in your iptables, or create ACLs in Squid:
Drop if destination is 72.14.253.125
Drop if destination is 72.14.255.100
Drop if destination is 209.85.139.83
Drop if destination is 66.249.89.99
Drop if destination is 64.233.163.189
Drop if destination is 209.85.137.125
Drop if protocol is TCP and destination is 66.249.89.103 and destination port is 443
Drop if protocol is TCP and destination is 209.85.137.125 and destination port is 443
Drop if protocol is TCP and destination is 209.85.147.83 and destination port is 80
Drop if protocol is TCP and destination is 216.239.51.125 and destination port is 443
Drop if protocol is TCP and destination is 209.85.163.125 and destination port is 443
Drop if protocol is TCP and destination is 209.85.163.125 and destination port is 5222
Drop if protocol is TCP and destination is 216.239.51.125 and destination port is 443
Drop if protocol is TCP and destination is 216.239.51.125 and destination port is 5222
Drop if protocol is TCP and destination is 72.14.253.125 and destination port is 443
+chetan
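In plain iptables terms those pseudo-rules would come out something like the following (a sketch only; the FORWARD chain assumes a gateway filtering its clients' traffic, and only the first few rules are shown):

iptables -A FORWARD -d 72.14.253.125 -j DROP
iptables -A FORWARD -p tcp -d 66.249.89.103 --dport 443 -j DROP
iptables -A FORWARD -p tcp -d 209.85.163.125 --dport 5222 -j DROP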
#11 - 05-24-2007, 02:27 AM - logu (quoting the GTalk rules from post #10):
Blocking IM based on IP addresses doesn't seem to be a good idea, as the clients use the FQDN to connect and the corresponding IP keeps changing. A better way is to block them by FQDN (talk.google.com) and keep the iptables rules updated with a cron job. Thanks -logu

#12 - 05-24-2007, 08:46 AM - gloomy:
In my opinion the best filtering project at the application layer: http://l7-filter.sourceforge.net/

#13 - 05-24-2007, 11:56 PM - tuxchetan (quoting post #11: "Better way is to block them using the fqdn (talk.google.com)"):
Yes, you are right; we have to keep watch in case the host/IP changes. But if GTalk sees the talk.google.com host as down (which is what blocking it looks like), it seems to try those other hosts and the connection succeeds. Then, if you block the Jabber ports 5222/5223, it next makes connection attempts to those hosts on 443 or 80. I have been using these iptables rules for the past 3 months and they keep it blocked. BTW, try to block these sites for web-based IM:
http://www.iloveim.com/
http://www.meebo.com (alternatives: http://www.meebo.us or http://www.meebo.biz)
http://www.imunitive.com (alternative: http://www.imunitive.co.uk)
http://www.imhaha.com (alternative: http://www.imhaha.net)
http://www.e-buddy.com (alternative: http://www.e-buddy.us)
http://www.koolim.com (alternative: http://www.koolim.us)
http://www.goowy.com (alternatives: http://www.goowy.us, http://www.goowy.info, http://www.goowy.biz)
http://www.mabber.com (alternative: http://www.mabber.us)
http://www.wablet.com (alternative: http://www.wablet.us)
http://www.easymessenger.net/
http://www.pinkprank.com
http://www.ebuddy.com/
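Since Squid matches on hostnames rather than IP addresses, web-based IM sites like those above are easier to handle in squid.conf than in iptables. A sketch using a few of the domains from the list (the leading dot also matches subdomains):

acl webim dstdomain .meebo.com .ebuddy.com .iloveim.com .koolim.com .imhaha.com
http_access deny webim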
#14 - 05-25-2007, 04:35 AM - slimm609:
As for Skype, that is the hardest application to block. The only way that I have seen to block Skype is to do packet matching with Cisco MARS systems. I use Sidewinder firewalls at work and we can't even block Skype on those.

#15 - 07-02-2007, 01:33 AM - rkiran32 (quoting the GTalk rules from post #10):
Hi Chetan, can you please guide me on how to add those lines to my rc.firewall.up file? I don't know where to add them. As far as I know it should come out something like this, for example:

# drop hits from Google Talk
/sbin/iptables -A INPUT -p TCP -i $RED_DEV --dport 5222 -j DROP
/sbin/iptables -A INPUT -p TCP -i $RED_DEV --dport 5223 -j DROP
/sbin/iptables -A INPUT -p TCP -i $RED_DEV --dport 5224 -j DROP

...if I am right. I am waiting for your reply. I am using Smoothwall 2.0. I also want to learn more about blocking IP addresses and ports; if you can help me it would be great. You can reply to me at [email protected]. Thanks, Kiran
Block mp3, mpg, mpeg, exe files using Squid proxy server (Linux Poison, posted by Nikesh Jauhari)

First open the squid.conf file, /etc/squid/squid.conf:

# vi /etc/squid/squid.conf

Now add the following line to your squid acl section:

acl blockfiles urlpath_regex "/etc/squid/multimedia.files.acl"

Now create the file:

# vi /etc/squid/multimedia.files.acl

\.[Ee][Xx][Ee]$
\.[Aa][Vv][Ii]$
\.[Mm][Pp][Gg]$
\.[Mm][Pp][Ee][Gg]$
\.[Mm][Pp]3$

Save and close the file, and restart Squid:

# /etc/init.d/squid restart

1 comment - November 5, 2007, 1:19 PM - smileface said...
Great tutorial, thank you. I did the same as you and it works for *.exe files, but we can still download *.mp3 files; could you help me with that?
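Note that the acl line above only defines a pattern list; Squid will not block anything until the acl is referenced by an http_access deny rule placed above the rule that allows your clients. A sketch of the relevant squid.conf fragment (the "allow localnet" line stands in for whatever rule currently allows your users):

acl blockfiles urlpath_regex "/etc/squid/multimedia.files.acl"
http_access deny blockfiles
http_access allow localnet

If the deny rule is missing or sits below the allow rule, requests such as the *.mp3 downloads mentioned in the comment will sail straight through.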
o ► May (22) o ► April (19) o ► March (16) o ► February (15) o ► January (24) ► 2009 (269) o ► December (14) o ► November (20) o ► October (29) o ► September (16) o ► August (22) o ► July (19) o ► June (28) o ► May (20) 444 o ► April (27) o ► March (31) o ► February (15) o ► January (28) ► 2008 (583) o ► December (23) o ► November (37) o ► October (43) o ► September (46) o ► August (34) o ► July (58) o ► June (62) o ► May (63) o ► April (66) o ► March (56) o ► February (56) o ► January (39) ▼ 2007 (68) o ► December (15) o ► November (15) o ▼ October (38) 445 How To Make ISO image from CD How to recover damaged Superblock How to crash Linux? Create Linux Filesystem From An Ordinary File Allow normal user to mount cdrom Mounting an ISO Image as a Filesystem How to Use MD5 Repair a Corrupt MBR and boot into Linux (fedora) Creating the smbpasswd file from /etc/passwd file HowTo Create a self-signed SSL Certificate for Apa... Virtual Hosting using Apache Scan vulnerability by using Nessus Repair Corrupt RPM Database Erase the Content of Disk Drive Software Testing FAQ Apache authentication using pam Configure Squid to use other Proxy (cache) Block Ads by using squid and Ad Zapper Block mp3, mpg, mpeg, exe files using Squid proxy ... Lock User Accounts After Too Many Login Failures How to scan a host 446 How to use SPAM Blacklists (Public) With Sendmail How to use procmail + spamassassin Setup Quotas Recover lost root password The right way to ask queries in a Discussion Forum... What to do if Linux refuses to boot after a power ... How to configure ntp Installing Xfce on Ubuntu How to disable CTRL-ALT-DEL from rebooting a Linux... Squid Password Authentication Using PAM Tune your ext3 filesystem Mount Samba share using fstab Install Extra Applications in Fedora How to enable the root account in Ubuntu It’s time for OpenSuse 10.3, … RIP Windows Vista Install MP3 Support in Fedora 7 Multimedia support in OpenSuse 10.3 (MP3, DiVX, et... 447 Copyright 2010 | [email protected] Disclaimer: If you use these tips, the author is not responsible for any adverse effects to your platform. Use these tips at your own risk. Do your own due diligence! These tips are intended as general gui delines and may vary from platform to platform. These tips are not the opinion of any vendor, and the author IS NOT associated/endorsed by any vendor. The author does not endorse any vendor. thebestlinks.com close get notified when page is updated your email get updates How to install VMWare Server on Linux Redhat Linux, Virtualization March 7th, 2008 Introduction Nowadays, virtualization is a solution which is interested by many IT manager. By converting existing system to virtual, you can save lot of money from buying 448 new hardware every year, avoid hardware conflicts when you move virtual to another computer, etc. But when you convert system to virtual, you still need OS to run virtual software (VMWare, Virtual PC, etc). The best practice is to have Linux operationg system running on host so that you won’t have to pay an additional license. Then, you can have whatever OS you want on virtual. For virtual software, VMWare is the one I recommend to try. There are some free licenses if you want to try. Like VMWare Server, you can have many virtual on a PC and you can connect to manage your virtuals remotely by using VMWare Server Console. This article shows how to install VMWare Server on Redhat Enterprise 4. Step-by-step 1. Install prerequisite program on Redhat. 
To install VMWare Server on Linux, you need to install gcc compiler and xinetd before install VMWare Server. By default, xinetd is already installed on Redhat Enterprise 4 so you only need to install gcc. o To install gcc, click Application -> System Settings -> Add/Remove Applications 449 o Browse to ‘Development’ section and check ‘Development Tools’. Click update. o This will show what are going to install. Note: To install this, you may requires Redhat installation CD (disc 3). 2. Now download VMWare Server for Linux from vmware.com. Also, you need to register for a free serial number. When writing this article, the latest version is 1.0.4. I have copied the setup file to my desktop. In this example, I logged in as ‘root’ for installation and configuration VMWare 450 Server. 3. Open terminal, extract the zipped file by type ‘tar xvfz “your-filename.tar.gz”‘. tar xvfz VMWare-server-1.0.4-56528.tar.gz 4. When extract finishes, you’ll see the VMWare Server on desktop or at the same place with your zipped file. 5. Run the installation file. Open Terminal and change directory to the extracted folder and execute vmware-install.pl. cd vmware-server-distrib/ ./vmware-install.pl 451 6. This is the steps for installation and configuration VMWare Server. If you don’t know what value to enter, you can simply press Enter button to accept the default value in the bracket [ ] which is provided by the installation file. In this example, I install and configure as default value setting so I only press Enter on each question. o The installation file asks for paths to install files (1-7). o The installation file asks for path to install documentation (8-9). Also, it asks for configure VMWare Server now (10). If yes, press Enter to view End User License Agreement (10). 452 o Press Ctrl + C to exit the EULA and type ‘y’ or ‘yes’ to accept the EULA (11). o The configuration continue asks for path to install file (12-14) and whether to configure NAT network for VMWare Server now (1516). o The configuration asks for configure Host-only network for VMWare Server (19-20). 453 o The configuration asks for port which for remotely connection to this VMWare Server Console (22). o The configuration asks for path to keep virtual machine files (23) and serial number for VMWare Server (25). You can get one by register at vmware.com for free. If you don’t have now, you can enter this number later but it suggest you should enter it now. Otherwise, you need to re-run the config file again. o Whether you enter the serial number now or not, the configuration is finished. You’ll see VMWare service is starting. 454 7. Try open VMWare Server Console, select Applications -> System Tools > VMWare Server Console. 8. VMWare Server Console ask you to connect to which VMWare Server. Select Localhost and click connect. 9. That’s it. You can manage your virtual machine here. Share and Enjoy: 455 Related post Related posts: 1. How to setup Stand-Alone Kaspersky Anti-Virus 5.7 Workstation on Linux RedHat Introduction Kaspersky Anti-Virus is now one of the popular anti-virus softwares. The strong point are that it can detect and... 2. How to change IP Address on Linux Redhat Introduction Most of the time, I work in Windows environment. But I sometimes have to work on Linux platform, too.... Should be well versed in Active directory management like Backup/Restore. Should be aware of DNS Server configuration/management. 456 Group policy, AD replication troubleshooting. 
Configure Exchange 2003 Server

[Please check out our related product, the Exchange POP3 connector POPcon. POPcon connects your MS Exchange Server to your POP3 mailboxes on the Internet. POPcon downloads emails from POP3 and IMAP mailboxes and distributes them to the appropriate Exchange Server users according to the recipient information found in the emails.]

Exchange 2003 configuration

Configuring your new Exchange 2003 server for internet email, with POPcon downloading the email from POP3 mailboxes, isn't hard if you just do it step by step as shown in this configuration sample. In this guide we will step through a sample installation of Exchange 2003 for a company we will call "Mycompany". Mycompany consequently owns the internet domain name "mycompany.com". It only takes these simple steps:

1. Adding your internet domain name to the recipient policies
2. Configuring the SMTP server for inbound email
3. Adding an SMTP Connector for outbound emails
4. Configuring the email addresses of your users
5. (Optional) Installing and configuring POPcon, the Exchange POP3 Connector
6. (Optional) Check out the ChangeSender Exchange Send-as Outlook Add-in

And this is how to configure the Exchange Server to accept email for mycompany.com and work with POPcon:

First install the software from CD. You may have to go back to the "Add/Remove Software" utility in the Control Panel to add NNTP support if you did not do so during the initial setup of your Windows installation. Then open the Exchange System Manager and configure the new Exchange installation.

1. Adding your internet domain name to the recipient policies

Open the Exchange System Manager. It should look like this:

One of the problems most often encountered when configuring an Exchange 2003 Server system is that the internet domain name you want to receive email for ("mycompany.com") often does not match your standard Active Directory domain name (i.e. "servername.mycompany.com").
The Exchange 2003 Server component handling incoming emails - the SMTP server - does not accept emails for domains other than the ones entered in the "recipient policies", even if you entered the correct email addresses ("[email protected]") in Active Directory. To make Exchange accept email for additional domains, like your internet domain, you need to add the domain names to the default recipient policy like this:

On the main tree panel of the Exchange System Manager, expand the tree "Recipients" and then click on "Recipient Policies". The policies will be shown on the right panel. Normally only the "Default Policy" will be there. Open the properties of the "Default Policy" by double-clicking it.

In the Default Policy Properties, please choose the tab "E-Mail Addresses". There you will find a list of domains supported by your Exchange server. Usually only your internal Active Directory server domain will be listed here. As you can see, after installing our Exchange Server from scratch only our AD domain "Christensen.local" was listed as an accepted SMTP address. But emails from the internet will be coming in addressed to "@mycompany.com" and not Christensen.local!

Choose "New..." here to add another accepted inbound domain. Since emails on the internet are sent via the SMTP protocol, we want to add an "SMTP Address". Now enter the domain name you want to receive email for. Please add a leading "@" to the domain name. This is what we entered to support emails addressed to @mycompany.com. This is how the Default Policy Properties look after entering the additional SMTP domain. Enable the newly created entry with a check mark next to it.

When you OK the above dialog, Exchange will ask you with the next dialog box if you want to add the new address to all new users. Usually you do want exactly that, to save some typing later. Please note: You may need to restart your server to activate the new domain!

2. Configuring the SMTP server for inbound email

Next we will configure the SMTP server. This is the part of Exchange that accepts incoming emails from POPcon. No special settings are needed to work with POPcon, but these are the standard settings in any case:

You will find the settings for the SMTP server under Servers/Protocols/SMTP/Default SMTP Virtual Server. Open the properties by right-clicking on the Default SMTP Virtual Server and choosing "Properties". The settings on the "General" tab can normally be left at the defaults.

On the "Access" tab you can find some configuration settings that might interfere with POPcon. POPcon only works with a standard SMTP connection WITHOUT authentication, so allow "Anonymous access" in the "Authentication" dialog.

Choose "Connection" to grant or refuse the right to connect to the SMTP server to individual or multiple IP address ranges. Please ensure the system POPcon runs on does have the right to connect. With this setting ALL systems will have access to your SMTP server.

Under "Relay..." you can assign the right to relay through your SMTP server to some systems. This might be needed in some configurations, and to be sure you should grant the system POPcon runs on relay rights. All other systems will need to authenticate before accessing the SMTP server, to prevent unauthorized users from using your system to relay spam.

Under the "Messages" tab you can restrict the message size and the number of messages accepted for each connection. Please make sure these settings are liberal enough to allow POPcon to transmit large messages to your server. Also, on this tab you can choose an internal additional recipient for copies of the non-delivery reports (NDRs). These NDRs will be sent back to senders of mails addressed to recipients unknown in your Exchange Server, and they include a copy of the original message sent. You can use these postmaster copies of the NDRs to manually forward emails sent to mistyped recipients to the correct users.

Under the "Delivery" tab some more configuration settings for outgoing emails can be found.
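Before moving on to outbound email, you can verify that the SMTP virtual server now accepts mail for your internet domain with a quick manual test. This is not part of the original setup steps, just a sanity check using standard SMTP commands; "myserver" and the addresses are placeholders for your own Exchange server name and a real mailbox:

telnet myserver 25
HELO test.example.com
MAIL FROM:<[email protected]>
RCPT TO:<[email protected]>
DATA
Subject: SMTP test

This is a test message.
.
QUIT

If the recipient policy entry is in place, the RCPT TO line is answered with a 250 response; if the domain is not accepted (and relaying is denied for your address), you will typically see an "unable to relay" error instead.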
3. Adding the SMTP Connector for outbound emails

Now we need to add an SMTP Connector (as opposed to the SMTP server) to handle outgoing email to the Internet. Right-click "Connectors" in the Exchange System Manager and choose "New", "SMTP Connector" to start adding the new connector, and name it appropriately (like "SMTP-Out" in our case).

On the "General" tab you can now choose whether Exchange will send outgoing emails directly to the recipient's system ("Use DNS...") or whether all emails should be relayed through an SMTP relay server ("smart host"). The first option, DNS, is more direct but can sometimes cause problems when you use a dialup internet connection, because some recipient systems will not accept emails that come from your ISP's dialup IP range while pretending to come from your real internet domain. Sending via your ISP's smart host / SMTP relay server is the better option in this case. We chose our ISP's SMTP relay server here. Also, on this tab you need to add the "local bridgehead" server (as shown above).

On the "Address Space" tab we need to add a wildcard address space for SMTP. We want to allow emails to any domain, so we use the wildcard "*" here.

Side note about the "Cost" entry: If you want to send emails to some domains via a different route, you can create multiple SMTP connectors and set the "Cost" entry of this wildcard connector to a higher value, while setting the cost entry of the special domain route to a lower value with only the special domain allowed on this page. This is especially useful if you generally want to send via DNS and only route to some systems that won't accept your email via some relay server.

If your ISP's SMTP server requires authentication (and almost all of them do today), you can set the username and password on the "Advanced" tab of the SMTP connector. Select "Outbound Security", then select "Basic authentication" and choose "Modify" to enter the username and password.

And that's already it - your Exchange is now configured to send email to the internet and to receive an SMTP email feed such as the one coming from POPcon or a direct internet connection. All you should do now is configure your users' email addresses in Active Directory.
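If you choose the "Use DNS" option instead of a smart host, delivery depends on MX lookups working from your server. A simple way to see where mail for a given recipient domain would be delivered is an MX query with nslookup (just an illustration; "recipientdomain.com" is a placeholder):

nslookup -type=mx recipientdomain.com

The answer lists the mail exchangers, in priority order, that Exchange would try when delivering directly via DNS.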
4. Configuring your users' email addresses in the Active Directory

You can set one or multiple email addresses for each user to receive email at. We will step through the necessary actions when creating a new user called John Galt. First open Active Directory and right-click the "Users" item to select "New", "User". The resulting dialog will allow you to create a new AD user to log into your server and create an Exchange mailbox all in one wizard pass. Next... Next... Now the wizard continues into the Exchange Server realm and lets us create a new Exchange mailbox. We just accepted the default alias here. Next...

Ok, fine - but wait: what about our desired email address, [email protected]? We need to add this mail address manually. We are back at the AD configuration console and select the properties of our new user "John Galt" by right-clicking on the name. There are lots of tabs on the resulting dialog; we go to the "E-mail Addresses" tab.

And surprise: [email protected] is already there, but in suspiciously non-bold print. Actually, Exchange automatically entered this additional email address because we chose to during the editing of the default recipient policies. But we want this address to be the primary address, meaning all email sent by John will get this address as the "sender" and "reply" addresses in the mail headers. So we click on "Set As Primary" and are done. We could also add more email addresses like [email protected] or [email protected], but only one of these addresses can be the primary address that will be the default sender address in all emails sent out by John.

And that's really it - just step through your other users' AD entries and set the appropriate primary and additional email addresses.

5. Installing and configuring POPcon or POPcon PRO

After going through the above 4 steps, your Exchange is configured to send out email, but it still can't pull down email from POP3 or IMAP mailboxes on your provider's server. For this you need to install and configure POPcon. Configuring POPcon is quite straightforward. You need to follow these steps:

a) Configure a Postmaster email address on the GENERAL configuration tab.
b) Add one or more POP3 mailboxes on the POP3/IMAP tab.
c) Configure the Exchange server name on the EXCHANGE configuration tab.

Download and run the self-extracting installer of POPcon or POPcon PRO and follow the instructions during the installation. It will install the POPcon Administrator program and the POPcon service that runs in the background on your system. Run POPcon Administrator from Start > Programs > POPcon and click on "Configure" to open up the POPcon configuration screen.

a) Configure a Postmaster email address on the GENERAL configuration tab.

On this first configuration page you only need to enter the email address of your Postmaster or Administrator user. The Postmaster will receive all emails without a valid recipient as well as general POPcon status notifications. It is very important to define a real email address from inside your Exchange server here, because mails can be lost irretrievably if POPcon forwards some mail with no recipient information to the postmaster and that account does not exist in your Exchange server. You can leave the log file options at their default settings for now. Next go to the POP3/IMAP tab to configure the POP3 or IMAP mailbox accounts you want POPcon to download email from.

b) Add one or more POP3 mailboxes on the POP3/IMAP tab.

POPcon PRO collects mail from as many POP3 accounts as you like. Just click on Add to add another POP3 host or account to the list of Polled POP3 Hosts. For each server or account you need to fill in the POP3 server settings as shown below. If you are using catch-all style mailboxes (mailboxes that receive email for a whole domain, regardless of the recipient part before the "@"), POPcon needs to filter recipients from incoming mail so that only the recipients at your own internet domain are accepted. Please add the domain you consider your own in the "Accepted Recipient Domains" box. This is the same domain you configured earlier in the Exchange Default Policy.

Individual account settings

This dialog lets you input the specifics about a POP3 or an IMAP server you want to have polled by POPcon PRO.
This is the information POPcon PRO needs to know about each server:

Server type: Here you can select one of the four supported server types.
POP3: Default. POP3 servers are by far the most common mail server type on the internet.
POP3-SSL: Some POP3 servers need SSL encryption enabled for the connection in order to protect passwords and sensitive information. Choose this type to have an SSL-encrypted connection to a POP3 server.
IMAP: IMAP servers are also quite common and theoretically allow the client to manipulate email folders and move email between folders online. In our case the protocol is used to download email from the INBOX of the IMAP server to your Exchange server.
IMAP-SSL: Supports SSL connections to IMAP servers for added protection.

Access: Configure the server name, account name and password to connect to the mail server here.
Servername: The name of the server you want to have polled. You can also enter the IP address directly.
Username: The username needed to log into your POP3 or IMAP mail server.
Password: The password needed to log into your mail server.
IP portnumber: The TCP/IP port for POP3 mail is almost always 110. Under some circumstances, internet routers or firewalls change the port number; please ask your network administrator or internet provider. The standard port for POP3-SSL is 995, for IMAP it is 143, and for IMAP-SSL this should be set to 993.
Timeout: Leave this at the default value.
Please ask your POP3 mailbox hosting provider if you do not have the above information.

Type of mailbox / distribution: POPcon PRO supports both catch-all and single user mailboxes.
Catch-all mailbox ("*@domainname.com"): For this type of mailbox, POPcon PRO will distribute the email retrieved from this server according to what it finds in the TO:, CC:, BCC: and other header fields of the mail. If you choose this option, don't forget to add your internet domain name(s) to the "Accepted Recipient Domains" box on the POP3/IMAP configuration dialog.
Single user mailbox ("[email protected]"): This type of mailbox receives email for only one specific Exchange mailbox. You need to specify the receiver of the email here. POPcon PRO will then direct all mail retrieved from this server to the recipient email address given here.

Delete / keep email on the server: This block allows you to configure POPcon PRO to either delete email after downloading or keep it on your POP3 or IMAP server for a specified amount of time or indefinitely.
Delete downloaded email: This is the default setting – POPcon PRO will delete the email on your POP3 or IMAP server after successfully downloading it.
Leave a copy of downloaded email (indefinitely): This option will cause POPcon PRO to leave a copy of the email on the server. Only use this option during testing or when you are sure the mail will be deleted eventually, i.e. by another system periodically downloading and deleting email.
Leave a copy of downloaded email for n number of days: Causes POPcon PRO to leave a copy of the email on the POP3/IMAP server for the specified number of days before deleting it. You can use this option to allow access to a single POP3 or IMAP mailbox by two different systems.

c) Configure the Exchange server name on the EXCHANGE configuration tab.

On this configuration screen you can specify the Exchange™ (SMTP) server you want the mail to be directed to. Normally this will be the computer name of your Exchange™ server (like "MYSERVER"). You can leave all other settings at their defaults.

These three steps to configure POPcon will provide you with a working set-up. Test it out by confirming the new configuration with OK and then use the "Trigger mail retrieval" button on the POPcon Administrator main screen to start the first mail download. You can follow what is happening in the scrolling log display on that screen. Watch out for any error messages there. There is also a POPcon log file (c:\program files\POPcon\POPconSrv.log – open with Notepad) that you can view at your leisure.
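If a mailbox fails to poll during this test, one low-level way to verify the server name, port, and credentials you entered is a manual POP3 session from the machine running POPcon. This is only a troubleshooting sketch using standard POP3 commands; the host name, username and password are placeholders, and it only works for plain POP3 on port 110 (not for the SSL variants):

telnet pop3.yourprovider.com 110
USER yourusername
PASS yourpassword
STAT
LIST
QUIT

STAT returns the number of waiting messages and their total size, and LIST shows each message; if USER/PASS fail here, they will fail for POPcon as well.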
6. Check out the ChangeSender Outlook Add-in

ChangeSender adds one important piece of functionality to Microsoft Outlook when used with Exchange Server: it allows you to send as any of your email addresses, and even as group addresses or those of other users if allowed by the administrator. Effectively this is the Exchange send-as function without the limitations of the Active Directory.

ChangeSender Exchange Send-as Add-in

Without the ChangeSender Exchange send-as components, Exchange always sends out emails with your default email address fixed in the Active Directory, even when answering emails received on one of your additional email addresses. Also, Exchange does not allow sharing the same email address (i.e. department-wide or company-wide email addresses) between users. ChangeSender solves both problems by adding a configurable "send as" selection box to your Outlook email form.

ChangeSender Features

Automatically selects the right send-as address when replying to emails. ChangeSender uses the address of the original email as the sender address for replies.
Easy selection of send-as addresses for new emails via a new sender address selection box in Outlook.
Multiple users can send from the same sender address (i.e. send as [email protected] or [email protected]).
Sender appearance fully configurable as "Any name" <[email protected]> for each individual email address. Does not show up as "sent on behalf of...".
Very simple installation and administration.
Administrator can restrict or allow user choices for the sender address and prevent users from sending as other users.
Works with Exchange 2010, 2007, 2003, 2000 and with Outlook 2010, 2007, 2003, 2002, 2000 versions.

ChangeSender in Outlook 2007 screenshot

Downloads

Download the free 30-day trial version of ChangeSender and test the full product without any restrictions until you are sure it meets all your requirements. Then just order license codes to remove the 30-day limit without re-installing. ChangeSender consists of two separate components: a server component to be installed on the Exchange server and a Microsoft Outlook add-in component that is needed for each client. The Outlook add-in does not work without the server component installed as well.

Server component:
Download Exchange Send-as server component, Exchange 2000, 2003 version. Install this on the Exchange Server (this version for Exchange 2000 or 2003).
Download Exchange Send-as server component, Exchange 2007, 2010 version. Install this on the Exchange Server (this version for Exchange 2007 or 2010).

Client component / Outlook add-in:
Download Exchange Send-as Outlook add-in. Install this on each user's system.

You can license ChangeSender Exchange Send-as online and will receive the license codes by email in just minutes.
Step-by-Step: Migrating Exchange 2000 to Exchange 2003 Using New Hardware

Migrate your mail system from Exchange 2000 Server running on a Windows 2000 Server system to a new server running Exchange Server 2003 on Windows Server 2003. This scenario will take you through all Exchange-related issues, from adding your first Windows Server 2003 system to unplugging your old Exchange 2000 system when finished.

Published: Sep 28, 2004 | Updated: Sep 28, 2004 | Section: Migration & Deployment | Author: David Fosbenner

If you simply want to do an in-place upgrade of Exchange 2000 to Exchange 2003 using the same server, you've got it made – Microsoft has explained the process of upgrading and made it pretty simple. Even if you're still using Exchange v5.5, Microsoft has you covered with a wealth of documentation to peruse. But what if you're an Exchange 2000 organization that wants to bring in a new Exchange 2003 system alongside your existing machine, move all your content over to it, and decommission the original box? Then you're left scratching your head. At the time of this writing, there is no guide I've been able to find that explains the process in any detail. This document will explain the process, combining information from numerous sources as well as my own experience. It's very easy to bring Exchange Server 2003 into your Exchange 2000 organization, with minimal disruption to your existing server or your users.

This document assumes you have an Exchange 2000 organization running in native mode. Henceforth, the Exchange 2000 system will be referred to as the "old" server, and the Exchange 2003 system will be referred to as the "new" server.

I. Prepare your Network for Windows Server 2003

Regardless of how you intend to get to Exchange 2003, there are some basic steps that must be done.

1. Begin by reviewing Microsoft's 314649 – "Windows Server 2003 adprep /forestprep Command Causes Mangled Attributes in Windows 2000 Forests That Contain Exchange 2000 Servers". This article explains that if you have Exchange 2000 installed in your organization and you proceed with installing your first Windows Server 2003 system (and its accompanying schema modifications), you may end up with some mangled attributes in AD. Preventing this from happening is simple enough: a script called Inetorgpersonfix.ldf will do the trick.
2. Run adprep /forestprep from the Windows Server 2003 CD on your Windows 2000 server that holds the Schema Master FSMO role. (Of course you'll need to be a member of Schema Admins.) Be sure to replicate the changes throughout the forest before proceeding.
3. Run adprep /domainprep from the Windows Server 2003 CD on your Windows 2000 server. I ran it on the system holding the PDC Emulator FSMO role.
4. Before bringing a new Windows Server 2003 system online, it's a good idea to review your third-party server utilities and upgrade them to the latest versions to ensure compatibility. In my installation, this included the latest versions of BackupExec, Symantec Antivirus Corp. Edition, and Diskeeper.
5. Run setup /forestprep from the Exchange Server 2003 CD on the Windows 2000 server that holds the Schema Master FSMO role. Replicate the changes throughout the forest.
6. Run setup /domainprep from the Exchange Server 2003 CD on a Windows 2000 server. Again, I ran it on the system holding the PDC Emulator role.
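In summary, the preparation commands from this section look roughly like the following. This is only a sketch: D: is a placeholder for your CD drive, the paths assume the usual layout of the Windows Server 2003 and Exchange Server 2003 CDs, and each schema change should replicate fully before you run the next command:

REM Windows Server 2003 CD, on the server holding the Schema Master FSMO role
D:\i386\adprep /forestprep
REM Windows Server 2003 CD, on a Windows 2000 DC (here the PDC Emulator)
D:\i386\adprep /domainprep
REM Exchange Server 2003 CD, on the Schema Master, then on a DC in each domain
D:\setup\i386\setup /forestprep
D:\setup\i386\setup /domainprep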
II. Install Windows Server 2003

1. Install Windows Server 2003 on the new server, join it to the domain, then apply all hotfixes to the server to bring it up to date.
2. In AD, move the server object to the desired OU.
3. If you're paranoid like me, you may be tempted to install antivirus (AV) software on your new server at the earliest opportunity. Hold off on that for now.
4. Review Microsoft's 815372 – "How to optimize memory usage in Exchange Server 2003", which explains a number of settings required for Exchange Server 2003. Specifically, you may need to add the /3GB and /userva=3030 switches to boot.ini, or you will have event 9665 in the event log. I also had to change the HeapDeCommitFreeBlockThreshold value in the registry at HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\ to 0x00040000 as directed in the article.
5. Review Microsoft's 831464 – "FIX: IIS 6.0 compression corruption causes access violations". I obtained the fix from Microsoft, and you should do the same, as it fixes some nasties that may interfere with OWA.

III. Install Exchange Server 2003

1. If you have installed any AV software on the new server, stop all AV-related services now, or you may experience a failed Exchange installation as I did.
2. Download the latest copy of the Exchange Server 2003 Deployment Tools, version 06.05.7226 as of this writing.
3. To begin the Exchange Server 2003 install on your new server, run Exdeploy.hta after extracting the tools.
4. Choose "Deploy the First Exchange 2003 Server".
5. You'll want to choose the item for your current environment, which in the context of this article is "You are running Exchange 2000 in native mode and you want to upgrade a server or install the first new Exchange 2003 server." Choose "Upgrade from Exchange 2000 Native Mode".
6. Run through the entire checklist and perform all the steps and tests. When you get to Step 9 in Exdeploy, you'll need to specify the path to the Exchange Server 2003 CD, since you're running Exdeploy from a location other than the CD.
7. Install all the Exchange components unless you have a compelling need to do otherwise.
8. When the install is completed, install Exchange Server 2003 Service Pack 1.
9. When SP1 is completed, run the Exchange System Manager from the Windows Server 2003 system, and you will see your new server listed in the Exchange organization, as well as your old server.
10. The POP3 and IMAP4 services aren't set to start automatically, so configure them for Automatic startup if desired.
11. If you want to install or enable antivirus software, it's now safe to do so.
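If you prefer the command line, the startup-type change in step 10 can also be made with sc and net. This is only a sketch under the assumption that the short names of the Exchange 2003 POP3 and IMAP4 services are POP3Svc and IMAP4Svc, which is typical; confirm them with sc query on your own server first:

REM confirm the exact short service names before changing anything
sc query POP3Svc
sc query IMAP4Svc
REM set both services to start automatically, then start them now
sc config POP3Svc start= auto
sc config IMAP4Svc start= auto
net start POP3Svc
net start IMAP4Svc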
IV. Get Familiar with Exchange Server 2003

At this point, you now have an Exchange 2003 system running in your existing Exchange organization. Microsoft has done a good job of allowing the two versions to coexist. Before proceeding with your migration, there are a number of important tasks to consider at this stage. For openers, communicate with your users about the migration if you haven't already, brief them on the new OWA interface, and by all means ask them to go through their mailboxes and delete old, unneeded items. You'll appreciate this later!

This is a good opportunity to spend some time reviewing your new Exchange server. Even if you spent time learning the new product in a lab environment (as you should have), exploring the system now before proceeding makes sense. Check out the new ESM, move a test mailbox to the new server, and try OWA. Go through your old server and take note of any settings you want to configure on the new system, such as size limits on SMTP connectors or incoming/outgoing messages, etc. You'll find that Exchange Server 2003 is configured to block mail relaying by default.

This is a good time to uninstall the Exchange 2000 version of the ESM remote management tools (using the Exchange 2000 Server CD, run Setup, choose Remove) on any management workstations and install the new Exchange 2003 ESM, which can be used to manage both versions of Exchange server.

As you test message routing, you will find that any email coming into your organization from the outside will be automatically routed to the appropriate Exchange server where the mailbox resides. My test mailbox on the new server could send and receive mail, no problem. I could also access the mailbox with Outlook or OWA from within the organization, no problem. However, I was unable to access mailboxes on the new server from outside the organization. In my configuration, an ISA Server 2000 system acts as the firewall, where web and server publishing rules exist to redirect incoming traffic to the old mail server. There was no simple way I could find to allow simultaneous access to both the old and the new servers. All incoming mail-related traffic was directed to the old server. This limitation affected the rest of the migration, as you will see.

Note: There is a way to have multiple Exchange servers, both 2000 and 2003, behind a firewall, whereby mail is automatically directed to the appropriate server. This scenario involves installing Exchange Server 2003 on a server and configuring it as a "front end" server, which allows it to act as a proxy. Unfortunately, the front end server cannot hold any mailboxes on its own, so this isn't an option in the migration scenario in this article.

Note: For a front end server to make any sense, a minimum of three servers would be needed: the front end server itself, and at least two Exchange servers, to which the front end server would route messages based on the mailboxes homed on each. In our migration scenario, one could have a front end server routing mail to the old Exchange 2000 server and the new Exchange 2003 server. As mailboxes are moved from the old to the new server, the front end server would route messages to the correct place. This is a nice option for those with the hardware and the desire to do a gradual transition.

V. Configure Exchange Server 2003 to Host Public Folders and Other Roles

As you begin moving folders and roles to the new server, one thing I learned the hard way is that you should use the ESM running on the new server. I used the ESM on a Windows XP remote management workstation and found that things reported in the workstation's ESM weren't always the same as in the Exchange server's ESM.

1. Review Microsoft's 307917 – "XADM: How to Remove the First Exchange 2000 Server Computer from the Site". This document contains most of what is needed to finish this migration, and explains in detail how to set up replication of Public folders.
2. Using the instructions in 307917 as a guide, set up replication for all public folders that were created by your organization on your old server. Do not set up replication for any folders you didn't create, as several of these will not be brought over to the new server. When the folders you replicated are in sync, remove the old server from the replication tab. These folders now exist solely on the new server.
They are accessible to those within your WAN, but are inaccessible outside your firewall.
3. You should find that the Public folders called default and ExchangeV1 are already replicated to the new server. Using Steps 2 and 3 in 307917, set up replication to the new server for the folders Offline Address Book, OAB Version 2, and Schedule+ Free Busy Information. If you have a folder called Internet Newsgroups, you should replicate that also. This folder is created by the Exchange system, though your organization may not use it.
4. If you check the Properties, Replication tab on your administrative group's Folders node, you will see the replication interval for the public folders. Unless you specifically changed the interval on any individual public folders, they should follow this schedule. "Always run" means replication will run every 15 minutes. There is no "replicate now" option.
5. Using Steps 4 and 5 in 307917, rehome the RUS and designate the new server as the routing group master.
6. Steps 6 and 7 in 307917 didn't apply in my configuration; proceed with those as needed.
7. Using Microsoft's 265293 – "How to Configure the SMTP Connector in Exchange", add the new server to the SMTP connector, remove the old server, then cycle the MS Exchange Routing Engine service and the SMTP service for these changes to take effect (a command-line sketch follows at the end of this section). Send a test message to verify the new server is sending the mail now.
8. There are a number of public folders on the Exchange 2000 server that do not need to be replicated and moved to the new server, including several that are part of the Exchange 2000 version of OWA. On my system these included:
Controls
Event Config_<old server name>
Events Root
Exchweb
Img
Microsoft
Offline Address Book – First Administrative Group
Schema-root
Views
Just leave these folders on the old server.

At this point, with the exception that your public folders are no longer accessible outside your firewall, there shouldn't be any noticeable difference to your users. You can accomplish all of the above during normal working hours without much fuss. However, the next step isn't as transparent.
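As mentioned in step 7, the routing engine and SMTP services need to be cycled after changing the SMTP connector. From a command prompt on the Exchange server that could look roughly like the sketch below; RESvc and SMTPSVC are the usual short names for the Microsoft Exchange Routing Engine and Simple Mail Transfer Protocol (SMTP) services, but confirm them with sc query on your own system before relying on this:

REM restart the routing engine and the SMTP service so the connector change takes effect
net stop RESvc
net stop SMTPSVC
net start RESvc
net start SMTPSVC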
VI. Move the Mailboxes to Exchange Server 2003

This is the moment we've all been waiting for, and it's pretty straightforward. In order for this process to go as smoothly as possible, you should make sure that no users inside your organization are accessing the email system. You should also block all external access to your mail servers.

1. For a detailed description of moving mailboxes, see Henrik Walther's "Moving Mailboxes with the Exchange 2003 Move Mailbox Wizard" article.
2. Prevent outside access to your mail servers. In my case, this involved disabling the web and server publishing rules for IMAP4, POP3, and SMTP in my ISA Server 2000 system.
3. Make sure no internal users are accessing the mail server.
4. Turn off AV on both the old and the new server. Moving mailboxes is a time-consuming, resource-intensive process. AV scanning will slow this process down, and in some cases can cause problems when large-scale data is being moved.
5. The Move Mailbox Wizard will allow you to select many mailboxes at a time, but it will only process four at a time. I chose the "Create a failure report" option, which won't move the mailbox if there are errors. I moved 75 mailboxes, 1.7 GB of data, in 70 minutes, without a single error.
6. The key determining factor in the speed of the mailbox move process isn't so much size as it is the number of items in a mailbox. If your users deleted a lot of items per your request, the process will go a lot quicker now.
7. If you want to test your new system before moving all the mailboxes, you can move a handful of them, then turn on outside access (I would turn on AV as well). Keep in mind, you'll need to configure your firewall to point to the new mail server. You should be able to access the new mailboxes with OWA and with POP3 mail applications as well as Outlook. You can also test access to Public folders in OWA if desired. Be sure to disable external access and AV before proceeding.
8. Move all the mailboxes, except SystemMailbox, System Attendant, and SMTP-ServerName, as these should already exist on the new server.
9. When the process is finished, configure your firewall to point to the new mail server, turn on AV, and enable external access. You are now running an Exchange Server 2003 mail system.

VII. Final Cleanup

1. Go through the public folders on the new server and remove the old server from the replication tab for any public folders that are still replicating to it. On my system this included default and ExchangeV1.
2. Have your users log on to their email clients. Outlook will attempt to connect to the old mail server, but as long as the Exchange services are still running on it, it will automatically redirect Outlook to the new server.
3. Stop all the Exchange services on the old server. Stop IISAdmin, which should stop FTP, NNTP, SMTP, and WWW.
4. Your old server will still appear in the Exchange organization in the ESM, but that's OK for now. You may also see an entry in the Queues node on the new server, destined for the old server. You can ignore this also.
5. Allow your new server to run for a few days if desired, keeping the old system in its present state for the time being. You may even want to turn the old system off.
6. When you're satisfied that the migration is a success and the old server is no longer needed, insert the Exchange 2000 Server CD into the old server, run Setup, and remove/uninstall Exchange 2000. Make sure the server is still connected to the network when you do this, as this process will remove the old server from the ESM.

Congratulations! Because you began with an Exchange 2000 organization in native mode, your Exchange Server 2003 system is in native mode. Your migration is finished.