Search Results

Search found 23555 results on 943 pages for 'command timeout'.


  • SQL Server 2008 cluster freezing

    - by Ed Leighton-Dick
    We have run into a strange situation in which a SQL Server 2008 single-node cluster hangs. As background, we are rebuilding a Windows Server 2003/SQL Server 2005 two-node cluster using Windows 2008 and SQL Server 2008. Here's the timeline: Evicted the passive node (server B) from the Windows 2003/SQL 2005 cluster. The active node now functions as a single-node cluster with no problems. Wiped server B's disks and installed Windows 2008 and SQL Server 2008 as a single-node cluster. Since we do not want the two clusters to communicate yet, we left the cluster's private network "heartbeat" adapter unconfigured. The cluster comes up and functions normally. Moved all databases to the new cluster. Cluster continues to function normally. Turned off server A (old cluster) in preparation for rebuilding it as the second node of the new cluster. The SQL Server instance on server B (new cluster) locks up, even though it should have no knowledge of or interaction with server A. Restarted server A. The SQL Server instance on server B (new cluster) immediately begins working again. Things we have tried: The new cluster's name responds to ping and NetBIOS requests, even while SQL Server is hung. We have confirmed that no IP address is assigned to the old heartbeat adapter, and it is not pulling an IP address from DHCP. Disabling the heartbeat's network card has the same effect. No errors were generated in any logs, Windows or SQL. When the error first occurred, it sat in the hung state for quite some time (well over 10 minutes) before anyone figured out what was going on. This would seem to eliminate any sort of normal cluster timeout in which it would have been searching for the other node (even if one had been configured). Server B is running Windows 2008 SP2, fully patched, and SQL Server 2008 SP1 CU7 (10.0.2775).

    Read the article

  • Is there a tool that can test what SSL/TLS cipher suites a particular website offers?

    - by Jeremy Powell
    Is there a tool that can test what SSL/TLS cipher suites a particular website offers? I've tried openssl, but if you examine the output: $ echo -n | openssl s_client -connect www.google.com:443 CONNECTED(00000003) depth=1 /C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA verify error:num=20:unable to get local issuer certificate verify return:0 --- Certificate chain 0 s:/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com i:/C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA 1 s:/C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA i:/C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority --- Server certificate -----BEGIN CERTIFICATE----- MIIDITCCAoqgAwIBAgIQL9+89q6RUm0PmqPfQDQ+mjANBgkqhkiG9w0BAQUFADBM MQswCQYDVQQGEwJaQTElMCMGA1UEChMcVGhhd3RlIENvbnN1bHRpbmcgKFB0eSkg THRkLjEWMBQGA1UEAxMNVGhhd3RlIFNHQyBDQTAeFw0wOTEyMTgwMDAwMDBaFw0x MTEyMTgyMzU5NTlaMGgxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlh MRYwFAYDVQQHFA1Nb3VudGFpbiBWaWV3MRMwEQYDVQQKFApHb29nbGUgSW5jMRcw FQYDVQQDFA53d3cuZ29vZ2xlLmNvbTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkC gYEA6PmGD5D6htffvXImttdEAoN4c9kCKO+IRTn7EOh8rqk41XXGOOsKFQebg+jN gtXj9xVoRaELGYW84u+E593y17iYwqG7tcFR39SDAqc9BkJb4SLD3muFXxzW2k6L 05vuuWciKh0R73mkszeK9P4Y/bz5RiNQl/Os/CRGK1w7t0UCAwEAAaOB5zCB5DAM BgNVHRMBAf8EAjAAMDYGA1UdHwQvMC0wK6ApoCeGJWh0dHA6Ly9jcmwudGhhd3Rl LmNvbS9UaGF3dGVTR0NDQS5jcmwwKAYDVR0lBCEwHwYIKwYBBQUHAwEGCCsGAQUF BwMCBglghkgBhvhCBAEwcgYIKwYBBQUHAQEEZjBkMCIGCCsGAQUFBzABhhZodHRw Oi8vb2NzcC50aGF3dGUuY29tMD4GCCsGAQUFBzAChjJodHRwOi8vd3d3LnRoYXd0 ZS5jb20vcmVwb3NpdG9yeS9UaGF3dGVfU0dDX0NBLmNydDANBgkqhkiG9w0BAQUF AAOBgQCfQ89bxFApsb/isJr/aiEdLRLDLE5a+RLizrmCUi3nHX4adpaQedEkUjh5 u2ONgJd8IyAPkU0Wueru9G2Jysa9zCRo1kNbzipYvzwY4OA8Ys+WAi0oR1A04Se6 z5nRUP8pJcA2NhUzUnC+MY+f6H/nEQyNv4SgQhqAibAxWEEHXw== -----END CERTIFICATE----- subject=/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com issuer=/C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA --- No client certificate CA names sent --- SSL handshake has read 1777 bytes and written 316 bytes --- New, TLSv1/SSLv3, Cipher is AES256-SHA Server public key is 1024 bit Compression: NONE Expansion: NONE SSL-Session: Protocol : TLSv1 Cipher : AES256-SHA Session-ID: 748E2B5FEFF9EA065DA2F04A06FBF456502F3E64DF1B4FF054F54817C473270C Session-ID-ctx: Master-Key: C4284AE7D76421F782A822B3780FA9677A726A25E1258160CA30D346D65C5F4049DA3D10A41F3FA4816DD9606197FAE5 Key-Arg : None Start Time: 1266259321 Timeout : 300 (sec) Verify return code: 20 (unable to get local issuer certificate) --- it just shows that the cipher suite is something with AES256-SHA. I know I could grep through the hex dump of the conversation, but I was hoping for something a little more elegant. I would prefer Linux tools, but Windows (or other) would be fine. This question is motivated by the security testing I do for PCI and general penetration testing. Update: GregS points out below that the SSL server picks from the cipher suites of the client. So it seems I would need to test all cipher suites one at a time. I think I can hack something together, but is there a tool that does specifically this?
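    One way to hack that together: openssl can be forced to offer a single suite with the -cipher option, so a small loop over the suites the local build knows about will map out what the server accepts. A rough sketch (the host is just the example from above):

        #!/bin/sh
        # Offer one cipher suite at a time and see if the handshake succeeds.
        HOST=www.google.com:443
        for c in $(openssl ciphers 'ALL:eNULL' | tr ':' ' '); do
            if echo -n | openssl s_client -connect "$HOST" -cipher "$c" >/dev/null 2>&1; then
                echo "Accepted: $c"
            else
                echo "Rejected: $c"
            fi
        done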

    Read the article

  • gksudo waits for a few seconds after execution

    - by phoenix
    I'm frequently using application launchers to run personal bash scripts, and thus I often use gksudo when doing administrative tasks. The problem is that when I execute a command with gksudo, the execution is successful, but afterwards gksudo waits for about 5 seconds before it closes/finishes. In some scripts I use gksudo multiple times, resulting in execution times of a few minutes, even though everything should be done in a few seconds. Can anyone help me here? PS: here are my main /etc/sudoers settings (they might have something to do with my problem):
        Defaults env_reset,!tty_tickets,timestamp_timeout=2
        phoenix ALL= NOPASSWD: /bin/mount,/bin/umount,/usr/sbin/firestarter,/usr/bin/truecrypt,/usr/bin/apt-get

    Read the article

  • Simple-Talk development: a quick history lesson

    - by Michael Williamson
    Up until a few months ago, Simple-Talk ran on a pure .NET stack, with IIS as the web server and SQL Server as the database. Unfortunately, the platform for the site hadn't quite gotten the love and attention it deserved. On the one hand, in the words of our esteemed editor Tony: "I'd consider the current platform to be a 'success'; it cost $10K, has lasted for 6 years, was finished, end to end, in 6 months, and although we moan about it, it has got us quite a long way." On the other hand, it was becoming increasingly clear that it needed some serious work. Among other issues, we had authors who wouldn't blog because our current blogging platform, Community Server, was too painful for them to use. Forgetting about Simple-Talk for a moment, if you ask somebody what blogging platform they'd choose, the odds are they'd say WordPress. Regardless of its technical merits, it's probably the most popular blogging platform, and it certainly seemed easier to use than Community Server. The issue was that WordPress is normally hosted on a Linux stack running PHP, Apache and MySQL, quite a difference from our Microsoft technology stack. We certainly didn't want to rewrite the entire site; we just wanted a better blogging platform, with the rest of the existing, legacy site left as is. At a very high level, Simple-Talk's technical design was originally very straightforward: when your browser sends an HTTP request to Simple-Talk, IIS (the web server) takes the request, does some work, and sends back a response. In order to keep the legacy site running, but with WordPress running the blogs, a different design is called for. We now use nginx as a reverse proxy, which can then delegate requests to the appropriate application. So, when your browser sends a request to Simple-Talk, nginx takes that request and checks which part of the site you're trying to access. Most of the time, it just passes the request along to IIS, which can then respond in much the same way it always has. However, if your request is for the blogs, then nginx delegates the request to WordPress. Unfortunately, as simple as that design looks, it hides an awful lot of complexity. In particular, the legacy site running on IIS was made up of four .NET applications. I've already mentioned one of these applications, Community Server, which handled the old blogs as well as managing membership and the forums. We have a couple of other applications to manage both our newsletters and our articles, and our own custom application to do some of the rendering on the site, such as the front page and the articles. When I say that it was made up of four .NET applications, this might conjure up an image in your mind of how they fit together: you might imagine four .NET applications, each with their own database, communicating over well-defined APIs. Sadly, reality was a little disappointing: we had four .NET applications that all ran on the same database. Worse still, there were many queries that happily joined across tables from multiple applications, meaning that each application was heavily dependent on the exact data schema that each other application used. Add to this that many of the queries were at least dozens of lines long, and practically identical to other queries except in a few key spots, and we can see that attempting to replace one component of the system would be more than a little tricky. However, the problems with the old system do give us a good place to start thinking about desirable qualities for any changes to the platform.
    Specifically:
    - Maintainability: the tight coupling between each .NET application made it difficult to update any one application without also having to make changes elsewhere.
    - Replaceability: the tight coupling also meant that replacing one component wouldn't be straightforward, especially if it wasn't on a similar Microsoft stack. We'd like to be able to replace different parts without having to modify the existing codebase extensively.
    - Reusability: we'd like to be able to combine the different pieces of the system in different ways for different sites.
    - Repeatable deployments: rather than having to deploy the site manually with a long list of instructions, we should be able to deploy the entire site with a single command, allowing you to create a new instance of the site easily, whether on production, staging servers, test servers or your own local machine.
    - Testability: if we can deploy the site with a single command, and each part of the site is no longer dependent on the specifics of how every other part of the site works, we can begin to run automated tests against the site, and against individual parts, both to prevent regressions and to do a little test-driven development.
    In the next part, I'll describe the high-level architecture we now have that hopefully brings us a little closer to these five traits.
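    To make the nginx routing described earlier concrete, here is a minimal reverse-proxy sketch (the host names and ports are illustrative assumptions, not Simple-Talk's actual configuration):

        server {
            listen 80;
            server_name www.simple-talk.com;

            # Blog requests are delegated to WordPress...
            location /blogs/ {
                proxy_pass http://wordpress-host:8080;
                proxy_set_header Host $host;
            }

            # ...everything else passes through to the legacy IIS applications.
            location / {
                proxy_pass http://iis-host:80;
                proxy_set_header Host $host;
            }
        }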

    Read the article

  • How do I install Dan's Guardian on 12.04?

    - by Matt
    I'm trying to install DansGuardian on a virtual machine. The instructions ask me to run the ./configure script and then execute the command make install. The configure script runs fine, but the make install throws errors:
        Making all in src
        make[2]: Entering directory `/webmin/dansguardian-2.10/src'
        g++ -DHAVE_CONFIG_H -I. -I.. -D__CONFFILE='"/usr/local/etc/dansguardian/dansguardian.conf"' -D__LOGLOCATION='"/usr/local/var/log/dansguardian/"' -D__PIDDIR='"/usr/local/var/run"' -D__PROXYUSER='"nobody"' -D__PROXYGROUP='"nobody"' -D__CONFDIR='"/usr/local/etc/dansguardian"' -g -O2 -MT dansguardian-fancy.o -MD -MP -MF .deps/dansguardian-fancy.Tpo -c -o dansguardian-fancy.o `test -f 'downloadmanagers/fancy.cpp' || echo './'`downloadmanagers/fancy.cpp
        downloadmanagers/fancy.cpp: In member function 'std::string fancydm::timestring(int)':
        downloadmanagers/fancy.cpp:507:72: error: 'snprintf' was not declared in this scope
        make[2]: *** [dansguardian-fancy.o] Error 1
        make[2]: Leaving directory `/webmin/dansguardian-2.10/src'
        make[1]: *** [all-recursive] Error 1
        make[1]: Leaving directory `/webmin/dansguardian-2.10'
        make: *** [all] Error 2
    I'm running 12.04 LTS server x64.
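    For what it's worth, this error usually means fancy.cpp uses snprintf without including the header that declares it; newer g++ releases stopped pulling <cstdio> in transitively. A commonly suggested workaround (an assumption, not an official DansGuardian patch) is to add the include and rebuild:

        # Hypothetical fix: prepend the missing header, then rebuild.
        sed -i '1i #include <cstdio>' src/downloadmanagers/fancy.cpp
        make
        sudo make install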

    Read the article

  • Jailbroken iPad 3G Is Capable of Sending SMS Text Messages

    - by Gopinath
    Wow! The iPhone Dev Team guys are crazy hackers; they don't leave any iPhone/iPad OS without jailbreaking it. Today the iPhone Dev Team cracked the operating system of the iPad 3G and managed to send an SMS from it using a command-line terminal interface. Here is the video demonstration of the iPad 3G sending an SMS. Even though there is no user interface for sending SMS, this is a great achievement for the iPad jailbreaking community. So what is next to come on the iPad? Phone calls!

    Read the article

  • How to set selinux?

    - by Enrique Videni
    I installed SELinux first; I set SELINUX=enforcing instead of its original value, SELINUX=permissive, in /etc/selinux/config, then rebooted my computer. I waited for some time, but it stopped; I rebooted again and it could not get into the system anymore, so I restored the setting. I tried to run the seinfo command in a terminal, but it output the errors below:
        ERROR: policydb version 26 does not match my version range 15-24
        ERROR: Unable to open policy /etc/selinux/ubuntu/policy/policy.26.
        ERROR: Input/output error
    It seems that there is some difference in how SELinux starts up between CentOS and Ubuntu; can you help me configure SELinux in Ubuntu?
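    A side note on recovering from the unbootable state: a commonly used escape hatch (an assumption about the setup, not Ubuntu-specific advice) is to override SELinux from the boot loader and then put the config back:

        # At the GRUB menu, edit the kernel line and append one of:
        #   enforcing=0   (boot in permissive mode)
        #   selinux=0     (disable SELinux for this boot)
        # Once booted, restore the permissive setting:
        sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config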

    Read the article

  • Task-It Webinar - Source Code

    Last week I presented a webinar called "Building a real-world application with RadControls for Silverlight 4". For those that didn't get to see the webinar, you can view it here: Building a real-world application with RadControls for Silverlight 4. Since the webinar I've received several requests asking if I could post the source code for the simple application I showed, demonstrating some of the techniques used in the development of Task-It, such as MVVM, Commands and Internationalization. This source code is now available for download here. After downloading the source:
    - Extract it to the location of your choice on your hard drive.
    - Open the solution.
    - Right-click ModuleProject.Web and select 'Set as StartUp Project'.
    - Right-click ProjectTestPage.aspx and select 'Set as Start Page'.
    - Create a database in SQL Server called WebinarProject.
    - Navigate to the Database folder under the WebinarProject directory and run the .sql script against your WebinarProject database.
    The last two steps are necessary only for the Tasks page to work properly (using WCF RIA Services). Now some notes about each page:
    Code-behind: This is not the way I recommend coding a line-of-business application in Silverlight; I simply wanted to show how the code-behind approach would look.
    Command: This page introduces MVVM and Commands. You'll notice in the XAML that the Command property of the RadMenuItem and the Button are both bound to a SaveCommand. That comes from the view model. If you look in the code-behind of the user control you'll see that an instance of a CommandViewModel is instantiated and set as the DataContext of the UserControl. There is also a listener for the view model's SaveCompleted event. When this is fired, it tells the view (UserControl) to display the MessageBox.
    Internationalization: This sample is similar to the previous one, but instead of using hard-coded strings in the UI, the strings are obtained via binding to view model properties. The view model gets the strings from the .resx files (Strings.resx or Strings.de.resx) under Assets/Resources. If you uncomment the call to ShowGerman() in App.xaml.cs's Application_Startup method and re-run the application, you will see the UI in German. Note that this code, which sets the CurrentCulture and CurrentUICulture on the current thread to "de" (German), is for testing purposes only.
    RadWindow: Once again, very similar to the previous example. The difference is that we are now using a RadWindow to display the 'Saved' message instead of a MessageBox. The advantage here is that we do not have to hold on to a reference to the view model in our code-behind so that we can get the 'Saved' message from it. The RadWindow's DataContext is now also bound to the view model, so within its XAML we can bind directly to properties in the view model. Much nicer, and cleaner. One other thing I introduced in this example is the use of spacer Rectangles. Rather than setting a width and/or height on the rectangles for spacing, I am now referencing a style in my ResourceDictionary called StandardSpacerStyle. I like doing this better than using margins or padding because now I have a reusable way to create space between elements, the Rectangle does not show (because I have not set its Fill color), and I can change my spacing throughout the user interface in one place if I'd like.
    Tasks: This page is quite a bit different than the other four. It is a very simple, stripped-down version of the Tasks page in the Task-It application.
    The Tasks.xaml UserControl has a ContentControl, and the Content of that control is set based on whether we are looking at the list of tasks or editing a task. So it displays one of two child UserControls, which are called List and Details. List has the RadGridView, Details has the form. In the code-behind of the Tasks UserControl I am once again setting its DataContext to a view model class. The nice thing is, whichever child UserControl is being displayed (List or Details) inherits its DataContext from its parent control (Tasks), so I do not have to explicitly set it. The List UserControl simply displays a RadGridView whose ItemsSource is bound to a property in the view model called Tasks, and its SelectedItem property is bound to a property in the view model called SelectedItem. The SelectedItem binding must be TwoWay so that the view is notified when the SelectedItem changes in the view model, and the view model is notified when something changes in the view (like when a user changes the Name and/or DueDate in the form). You'll also notice that the form's TextBox and RadDatePicker are also TwoWay bound to the SelectedItem property in the view model. You can experiment with the binding by removing TwoWay and seeing how changes in the form do not show up in the RadGridView. So here we have an example of two different views (List and Details) that are both bound to the same view model... and actually, so is the Tasks UserControl, so it is really three views.
    WCF RIA Services: By the way, I am using WCF RIA Services to retrieve data for the RadGridView and save the data when the user clicks the Save button in the form. I created a really simple ADO.NET Entity Data Model in WebinarProject.Web called DataModel.edmx. I also created a simple Domain Data Service called DataService that has methods for retrieving, inserting, updating and deleting data. However, I am only using the retrieval and update methods in this sample. Note that I do not currently have any validation in place on the form, as I wanted to keep the sample as simple as possible.
    Wrap-up: Technically, I should move the calls to WCF RIA Services out of the view model and put them into a separate layer, but this works for now, and that is a topic for another day!

    Read the article

  • Linux error when resume from RAM

    - by TuxPotato
    The last two times that I have resumed my laptop from sleep, it has hung and given me this set of errors:
        [drm:atom_op_jump] *ERROR* atombios stuck in loop for more than 1sec aborting
        [drm:atom_execute_table_locked] *ERROR* atombios stuck executing E692 (len 460, WS 0, PS 4) @0xE6D3
        hda_intel: azx_get_responce timeout, switching to single_cmd mode: last cmd=0x01170700
        ata6: softreset failed (device not ready)
        ata4: softreset failed (device not ready)
        [drm:atom_op_jump] *ERROR* atombios stuck in loop for more than 1sec aborting
        [drm:atom_execute_table_locked] *ERROR* atombios stuck executing E692 (len 460, WS 0, PS 4) @0xE6D3
    The last two messages repeat two more times. The first time this happened, Linux's Magic SysRq worked and did a soft reboot, and after that everything was fine till it went to sleep again. It wakes up and gives me this. Here are the laptop stats: Toshiba Satellite L455D-S5976, AMD Sempron SI-42 processor, 2GB DDR2 RAM, HD TruBrite display, ATI Radeon graphics (integrated), running Ubuntu 10.10 32-bit. I'm not sure about the hard drive, but it's a 250GB drive with one NTFS partition, two hidden NTFS, one Linux swap, and one ext4. Can someone tell me what's wrong with my laptop? NOTE: This only happens when I close the screen. My computer doesn't go to sleep with the screen open.

    Read the article

  • Backing up SQL NetApp Snapshots using TSM

    - by WerkkreW
    In our environment we have a 3-node SQL 2005 cluster which is on NetApp storage. We are currently using SMSQL (NetApp SnapManager for SQL) to take snapshot backups of the data. This works great, but due to some audit requirements we are also forced to maintain some copies on tape. We have used NDMP in other places across the enterprise, but we do not want to use it in this specific instance. Basically what I need to do is get the most recent snapshot copy of the databases onto tape, via Tivoli Storage Manager (TSM). What I have done is obtain a basic Windows Server 2003 VM with SnapDrive installed, which is SAN-attached and zoned to the NetApp, and I have written a batch file to do the following: Mount the latest __RECENT snapshot LUN to the host, using a specific drive letter; perform a TSM-based incremental backup; dismount the LUN. This seems to work fine, except sometimes the LUNs do not mount due to some sort of timeout. Also, due to my limited knowledge of Windows batch scripting, I have no way to monitor the success or failure of these backups, since I do not know how to send a valid return code back to the TSM scheduling service. Is there a more efficient/elegant way to accomplish this without NDMP?

    Read the article

  • Integrating Code Metrics in TFS 2010 Build

    - by Jakob Ehn
    The build process template and custom activity described in this post are available here: http://cid-ee034c9f620cd58d.office.live.com/self.aspx/BlogSamples/CodeMetricsSample.zip Running code metrics has been available since VS 2008, but only from inside the IDE. Yesterday Microsoft finally released the Visual Studio Code Metrics Power Tool 10.0, a command-line tool that lets you run code metrics on your applications. This means that it is now possible to perform code metrics analysis on the build server as part of your nightly/QA builds (for example). In this post I will show how you can run the metrics command-line tool, and also a custom activity that reads the output and appends the results to the build log, and also fails the build if the metric values exceed certain (configurable) threshold values. The code metrics tool analyzes all the methods in the assemblies, measuring cyclomatic complexity, class coupling, depth of inheritance and lines of code. Then it calculates a Maintainability Index from these values that is a measure of how maintainable each method is, between 0 (worst) and 100 (best). For information on how this value is calculated, see http://blogs.msdn.com/b/codeanalysis/archive/2007/11/20/maintainability-index-range-and-meaning.aspx. After this it aggregates the information and presents it at the class, namespace and module level as well. Running Metrics.exe in a build definition: Running the actual tool is easy: just use an InvokeProcess activity last in the Compile the Project sequence, reference the metrics.exe file and pass the correct arguments, and you will end up with a result XML file in the drop directory. Here is how it is done in the attached build process template: In the above sequence I first assign the path to the code metrics result file ([BinariesDirectory]\result.xml) to a variable called MetricsResultFile, which is then sent to the InvokeProcess activity in the Arguments property. Here are the arguments for the InvokeProcess activity: Note that we tell metrics.exe to analyze all assemblies located in the Binaries folder. You might want to do some more intelligent filtering here; you probably don't want to analyze all 3rd-party assemblies, for example. Note also the path to metrics.exe; this is the default location when you install the Code Metrics power tool. You must of course install the power tool on all build servers. Using the standard output logging (in the Handle Standard Output/Handle Error Output sections), we get the following output when running the build: Integrating Code Metrics into the build: Having the results available next to the build result is nice, but we want to have the results integrated in the build result itself, and also to affect the outcome of the build. The point of having QA builds that measure, for example, code metrics is to make it very clear how the code being built measures up to the standards of the project/company. Just having an XML file available in the drop location will not cause the developers to improve their code, but a (partially) failing build will! To do this, we need to write a custom activity that parses the metrics result file, logs it to the build log and fails the build if the values from the metrics are below/above some predefined threshold values. The custom activity performs the following steps: Parses the XML. I'm using Linq 2 XSD for this; since the XML schema for the result file is available, it is very easy to generate code that lets you query the structure using standard Linq operators.
    Runs through the metric result hierarchy and logs the metrics for each level, and also verifies the maintainability index and the cyclomatic complexity against the threshold values. The threshold values are defined in the build process template and are sent in as arguments to the custom activity. If the threshold values are exceeded, the activity either fails or partially fails the current build. For more information about the structure of the code metrics result file, read Cameron Skinner's post about it. It is very simple and easy to understand. I won't go through the code of the custom activity here, since there is nothing special about it and it is available for download so you can look at it and play with it yourself. The threshold values for Maintainability Index and Cyclomatic Complexity are defined in the build process template, and can be modified per build definition: I have taken the default values for these settings from my colleague Terje Sandström's post on Code Metrics - suggestions for appropriate limits. You'll notice that this is quite an improvement compared to using code metrics inside the IDE, where the Red/Yellow/Green limits are fixed (and the default values are somewhat strange; see Terje's post for a discussion on this). This is the first version of the code metrics integration with TFS 2010 Build. I will probably enhance the functionality and the logging (the "tree view" structure in the log becomes quite hard to read) soon. I will also consider adding it to the Community TFS Build Extensions site when it becomes a bit more mature. Another obvious improvement is to extend the data warehouse of TFS and push the metric results back to the warehouse, making them visible in the reports.
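    For reference, a hedged example of invoking the power tool by hand, outside of a build (the path and flags are from memory of the tool's command-line help; verify against metrics.exe /? locally):

        rem Hypothetical invocation; check metrics.exe /? for the real flags.
        "C:\Program Files\Microsoft Visual Studio 10.0\Team Tools\Static Analysis Tools\FxCop\metrics.exe" /f:MyApp.dll /o:result.xml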

    Read the article

  • Server taking too long to respond error

    - by DCJones
    This is my first post on ServerFault and my first entry into web server configuration. The hardware and software: CPU: GenuineIntel, Intel(R) Core(TM)2 Duo CPU E7500 @ 2.93GHz; OS: Linux 2.6.18-128.el5; Memory: 2Gb. Background: I am running a small database (MySQL), around 1000 records, with each record containing 44 fields. At the start of each day ("00:01") the tables are cleared and populated with fresh data. There are 10 remote PCs, all running Windows XP and the Firefox internet browser. All remote PCs are connected to the internet using a minimum 4Gb broadband connection. Each remote PC runs a URL which displays a dynamic page of data which is refreshed every 20 seconds. This is a continual process 24 hours a day. The problem I am having is that on odd occasions throughout the day the PC browsers error with "Server taking too long to respond". What I am trying to find out is whether I have the correct settings in the httpd.conf file on the server. Any help or advice anyone can provide would be very helpful. Best regards, Dereck. Server config file: httpd.conf
        ServerRoot "/etc/httpd"
        PidFile run/httpd.pid
        Timeout 120
        KeepAlive On
        MaxKeepAliveRequests 200
        KeepAliveTimeout 5
        <IfModule prefork.c>
        StartServers 8
        MinSpareServers 5
        MaxSpareServers 20
        ServerLimit 256
        MaxClients 254
        MaxRequestsPerChild 4000
        </IfModule>
        <IfModule worker.c>
        StartServers 2
        MaxClients 150
        MinSpareThreads 25
        MaxSpareThreads 150
        ThreadsPerChild 25
        MaxRequestsPerChild 0
        </IfModule>
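    As a diagnostic aside, one low-impact way to see where the time goes on a slow request is to time the phases from one of the clients (this uses curl's built-in timers and changes nothing on the server):

        curl -o /dev/null -s -w 'dns:%{time_namelookup}s connect:%{time_connect}s total:%{time_total}s\n' http://your-server/page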

    Read the article

  • sudo apt-get update error

    - by Kapil Anand
    I got the following error:
        Reading package lists... Done
        W: GPG error: http://extras.ubuntu.com oneiric Release: Unknown error executing gpgv executing gpgv ----
        W: GPG error: http://archive.ubuntu.com oneiric-updates Release: Unknown error executing gpgv
    Then after googling it I found and followed the following, but that caused one error:
        sudo -i
        apt-get clean
        cd /var/lib/apt
        mv lists lists.old
        mkdir -p lists/partial
        apt-get clean
        apt-get update
    While running it I got the error:
        kapil@ubuntu:/var/lib/apt$ sudo mv lists lists.old
        mv: cannot move `lists' to `lists.old/lists': Directory not empty
    So, once again running the update command, I got the same error. Please help me: what should I do?
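    For what it's worth, that mv error usually means lists.old was left over from an earlier attempt, so mv tried to move lists inside it. A sketch of the same sequence under that assumption (clear the old directory first):

        sudo rm -rf /var/lib/apt/lists.old
        cd /var/lib/apt
        sudo mv lists lists.old
        sudo mkdir -p lists/partial
        sudo apt-get clean
        sudo apt-get update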

    Read the article

  • Create Your Own Geeky LED Holiday Lights with Old Bottles

    - by YatriTrivedi
    Who needs to go buy store-bought lights? Here's a great geek project for the holidays that's fairly easy to put together with things most geeks already have. My friend, Chris "Groff" Groff, had the great idea to work up some holiday lights using stuff he had lying around, and a few hours later things turned out quite geeky indeed.

    Read the article

  • Redmine install not working and displaying directory contents - Ubuntu 10.04

    - by Casey Flynn
    I've gone through the steps to set up and install the redmine project tracking web app on my VPS with Apache2 but I'm running into a situation where instead of displaying the redmine app, I just see the directory contents: Does anyone know what could be the problem? I'm not sure what other files might be of use to diagnose what's going on. Thanks! # # Based upon the NCSA server configuration files originally by Rob McCool. # # This is the main Apache server configuration file. It contains the # configuration directives that give the server its instructions. # See http://httpd.apache.org/docs/2.2/ for detailed information about # the directives. # # Do NOT simply read the instructions in here without understanding # what they do. They're here only as hints or reminders. If you are unsure # consult the online docs. You have been warned. # # The configuration directives are grouped into three basic sections: # 1. Directives that control the operation of the Apache server process as a # whole (the 'global environment'). # 2. Directives that define the parameters of the 'main' or 'default' server, # which responds to requests that aren't handled by a virtual host. # These directives also provide default values for the settings # of all virtual hosts. # 3. Settings for virtual hosts, which allow Web requests to be sent to # different IP addresses or hostnames and have them handled by the # same Apache server process. # # Configuration and logfile names: If the filenames you specify for many # of the server's control files begin with "/" (or "drive:/" for Win32), the # server will use that explicit path. If the filenames do *not* begin # with "/", the value of ServerRoot is prepended -- so "/var/log/apache2/foo.log" # with ServerRoot set to "" will be interpreted by the # server as "//var/log/apache2/foo.log". # ### Section 1: Global Environment # # The directives in this section affect the overall operation of Apache, # such as the number of concurrent requests it can handle or where it # can find its configuration files. # # # ServerRoot: The top of the directory tree under which the server's # configuration, error, and log files are kept. # # NOTE! If you intend to place this on an NFS (or otherwise network) # mounted filesystem then please read the LockFile documentation (available # at <URL:http://httpd.apache.org/docs-2.1/mod/mpm_common.html#lockfile>); # you will save yourself a lot of trouble. # # Do NOT add a slash at the end of the directory path. # ServerRoot "/etc/apache2" # # The accept serialization lock file MUST BE STORED ON A LOCAL DISK. # #<IfModule !mpm_winnt.c> #<IfModule !mpm_netware.c> LockFile /var/lock/apache2/accept.lock #</IfModule> #</IfModule> # # PidFile: The file in which the server should record its process # identification number when it starts. # This needs to be set in /etc/apache2/envvars # PidFile ${APACHE_PID_FILE} # # Timeout: The number of seconds before receives and sends time out. # Timeout 300 # # KeepAlive: Whether or not to allow persistent connections (more than # one request per connection). Set to "Off" to deactivate. # KeepAlive On # # MaxKeepAliveRequests: The maximum number of requests to allow # during a persistent connection. Set to 0 to allow an unlimited amount. # We recommend you leave this number high, for maximum performance. # MaxKeepAliveRequests 100 # # KeepAliveTimeout: Number of seconds to wait for the next request from the # same client on the same connection. 
# KeepAliveTimeout 15 ## ## Server-Pool Size Regulation (MPM specific) ## # prefork MPM # StartServers: number of server processes to start # MinSpareServers: minimum number of server processes which are kept spare # MaxSpareServers: maximum number of server processes which are kept spare # MaxClients: maximum number of server processes allowed to start # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_prefork_module> StartServers 5 MinSpareServers 5 MaxSpareServers 10 MaxClients 150 MaxRequestsPerChild 0 </IfModule> # worker MPM # StartServers: initial number of server processes to start # MaxClients: maximum number of simultaneous client connections # MinSpareThreads: minimum number of worker threads which are kept spare # MaxSpareThreads: maximum number of worker threads which are kept spare # ThreadsPerChild: constant number of worker threads in each server process # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_worker_module> StartServers 2 MinSpareThreads 25 MaxSpareThreads 75 ThreadLimit 64 ThreadsPerChild 25 MaxClients 150 MaxRequestsPerChild 0 </IfModule> # event MPM # StartServers: initial number of server processes to start # MaxClients: maximum number of simultaneous client connections # MinSpareThreads: minimum number of worker threads which are kept spare # MaxSpareThreads: maximum number of worker threads which are kept spare # ThreadsPerChild: constant number of worker threads in each server process # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_event_module> StartServers 2 MaxClients 150 MinSpareThreads 25 MaxSpareThreads 75 ThreadLimit 64 ThreadsPerChild 25 MaxRequestsPerChild 0 </IfModule> # These need to be set in /etc/apache2/envvars User ${APACHE_RUN_USER} Group ${APACHE_RUN_GROUP} # # AccessFileName: The name of the file to look for in each directory # for additional configuration directives. See also the AllowOverride # directive. # AccessFileName .htaccess # # The following lines prevent .htaccess and .htpasswd files from being # viewed by Web clients. # <Files ~ "^\.ht"> Order allow,deny Deny from all Satisfy all </Files> # # DefaultType is the default MIME type the server will use for a document # if it cannot otherwise determine one, such as from filename extensions. # If your server contains mostly text or HTML documents, "text/plain" is # a good value. If most of your content is binary, such as applications # or images, you may want to use "application/octet-stream" instead to # keep browsers from trying to display binary files as though they are # text. # DefaultType text/plain # # HostnameLookups: Log the names of clients or just their IP addresses # e.g., www.apache.org (on) or 204.62.129.132 (off). # The default is off because it'd be overall better for the net if people # had to knowingly turn this feature on, since enabling it means that # each client request will result in AT LEAST one lookup request to the # nameserver. # HostnameLookups Off # ErrorLog: The location of the error log file. # If you do not specify an ErrorLog directive within a <VirtualHost> # container, error messages relating to that virtual host will be # logged here. If you *do* define an error logfile for a <VirtualHost> # container, that host's errors will be logged there and not here. # ErrorLog /var/log/apache2/error.log # # LogLevel: Control the number of messages logged to the error_log. 
# Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. # LogLevel warn # Include module configuration: Include /etc/apache2/mods-enabled/*.load Include /etc/apache2/mods-enabled/*.conf # Include all the user configurations: Include /etc/apache2/httpd.conf # Include ports listing Include /etc/apache2/ports.conf # # The following directives define some format nicknames for use with # a CustomLog directive (see below). # If you are behind a reverse proxy, you might want to change %h into %{X-Forwarded-For}i # LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %O" common LogFormat "%{Referer}i -> %U" referer LogFormat "%{User-agent}i" agent # # Define an access log for VirtualHosts that don't define their own logfile CustomLog /var/log/apache2/other_vhosts_access.log vhost_combined # Include of directories ignores editors' and dpkg's backup files, # see README.Debian for details. # Include generic snippets of statements Include /etc/apache2/conf.d/ # Include the virtual host configurations: Include /etc/apache2/sites-enabled/ # Enable fastcgi for .fcgi files # (If you're using a distro package for mod_fcgi, something like # this is probably already present) #<IfModule mod_fcgid.c> # AddHandler fastcgi-script .fcgi # FastCgiIpcDir /var/lib/apache2/fastcgi #</IfModule> LoadModule fcgid_module /usr/lib/apache2/modules/mod_fcgid.so LoadModule passenger_module /var/lib/gems/1.8/gems/passenger-3.0.7/ext/apache2/mod_passenger.so PassengerRoot /var/lib/gems/1.8/gems/passenger-3.0.7 PassengerRuby /usr/bin/ruby1.8 ServerName demo and my vhosts file #No DNS server, default ip address v-host #domain: none #public: /home/casey/public_html/app/ <VirtualHost *:80> ServerAdmin webmaster@localhost # ScriptAlias /redmine /home/casey/public_html/app/redmine/dispatch.fcgi DirectoryIndex index.html DocumentRoot /home/casey/public_html/app/public <Directory "/home/casey/trac/htdocs"> Order allow,deny Allow from all </Directory> <Directory /var/www/redmine> RailsBaseURI /redmine PassengerResolveSymlinksInDocumentRoot on </Directory> # <Directory /> # Options FollowSymLinks # AllowOverride None # </Directory> # <Directory /var/www/> # Options Indexes FollowSymLinks MultiViews # AllowOverride None # Order allow,deny # allow from all # </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog /home/casey/public_html/app/log/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel debug CustomLog /home/casey/public_html/app/log/access.log combined # Alias /doc/ "/usr/share/doc/" # <Directory "/usr/share/doc/"> # Options Indexes MultiViews FollowSymLinks # AllowOverride None # Order deny,allow # Deny from all # Allow from 127.0.0.0/255.0.0.0 ::1/128 # </Directory> </VirtualHost>
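    In case it helps, here is a minimal Passenger vhost sketch for a Rails app such as Redmine (the paths follow the question's layout and are assumptions, not a verified fix). A plain directory listing like the one described typically appears when Passenger is not picking the app up, so the key points are a DocumentRoot on the app's public/ directory and no handler overriding it:

        <VirtualHost *:80>
            ServerName redmine.example.com
            DocumentRoot /home/casey/public_html/app/public
            <Directory /home/casey/public_html/app/public>
                Options -MultiViews
                AllowOverride all
            </Directory>
        </VirtualHost>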

    Read the article

  • Stupid Geek Tricks: Use Google Chrome Drag/Drop to Upload Files Easier

    - by The Geek
    There's nothing more annoying than saving a file somewhere on your hard drive, and then having to browse for that file again when you're trying to upload it somewhere on the web. Thankfully Google Chrome makes this process much easier. Note: this might potentially work in Firefox 4, but we didn't take the time to test it out. It definitely doesn't work in Firefox 3.6 or Internet Explorer.

    Read the article

  • Solaris TCP stack tuning

    - by disserman
    We have a large web project (about 2-3k requests per second), using haproxy (http://haproxy.1wt.eu/) as a frontend and load balancer between the Java application servers. The frontend (haproxy) is running on Linux, but we are going to migrate it to Solaris 10, as all our other servers run under Solaris. After switching the traffic over I see two things: a) the web site loads more slowly (5-10 seconds with images, in comparison to 2-3 seconds on Linux); b) sometimes haproxy fails to perform a "lifecheck" (get a special web page and analyze the HTTP response code) due to a socket timeout. After switching traffic back to Linux everything is okay. I've tried to tune all the params I found in /dev/tcp, but no progress. I believe the problem is in some open socket limitation. If someone can point me to the answer, it would be greatly appreciated. p.s. haproxy is running under Xen DomU on Linux (kernel 2.6.18, Debian 5), and under a zone on Solaris (10 u8). The only thing we did on Linux was increase ip_conntrack_max (I believe the Solaris option tcp_conn_req_max_q is the equivalent).
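    As a side note for anyone trying the same tuning: Solaris 10 exposes its TCP parameters through the ndd utility rather than as files under /dev/tcp. A quick sketch (the values shown are illustrative, not recommendations for this workload):

        ndd /dev/tcp \?                              # list all TCP tunables
        ndd /dev/tcp tcp_conn_req_max_q              # read the listen backlog
        ndd -set /dev/tcp tcp_conn_req_max_q 1024
        ndd -set /dev/tcp tcp_time_wait_interval 60000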

    Read the article

  • iSCSI, failover and XenServer

    - by jemmille
    I have an iSCSI failover implementation set up, so if one of my storage units fails the other takes over immediately (it also runs the NFS shares). When failover occurs, volumes are exported, the IP is switched to the other machine and the targets are reconfigured. The failover of the storage system itself works just fine. I use NexentaStor for my filer. When I do a test (manual) failover of my storage the following occurs (note: I run the admin VMs on NFS and customer VMs on iSCSI): All NFS-based VMs remain up and working perfectly through the failover and after. All VMs running on iSCSI eventually report the following: an error about not being able to write to a particular block; an error about journaling not working; then the file system goes read-only. To get the VMs working again I have to do the following:
    1. Force shutdown of the "broken" VMs.
    2. Detach the iSCSI SR.
    3. Re-attach the iSCSI SR.
    4. Boot the VM on a different server (5 in my pool).
    If I don't boot on a different server I get this error: "Internal error: Failure("The VDI <uuid> is already attached in RW mode; it can't be attached in RO mode!")". The only way I have found to fix that error is to reboot the entire server it was running on previously, which is obviously a huge pain. Currently multipathing is NOT enabled (but it can be, and the same thing still occurs). I have edited much of the /etc/iscsid.conf file to work with the timeout settings, but to no avail. In short, my storage fails over properly but XenServer does not keep the connection alive. As a thought, the error that shows up in step 4 above might be the ultimate cause, and fixing that would fix everything? Any help would be appreciated more than you know.
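    For reference, these are the open-iscsi settings in /etc/iscsid.conf that usually govern how long the initiator blocks when a path disappears (the values are only examples, not a verified fix for this setup):

        node.session.timeo.replacement_timeout = 15
        node.conn[0].timeo.noop_out_interval = 5
        node.conn[0].timeo.noop_out_timeout = 5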

    Read the article

  • Can't get TRIM test to work

    - by Matthew Marcus
    So I'm attempting to enable TRIM using the walkthrough here: How to enable TRIM? But every time I attempt to run the hdparm command, I get the following when I try to run it with sda:
        reading sector 5805056: FAILED: Input/output error
    and I get this when running it with sda1:
        /dev/sda1: Device /dev/sda1 has non-zero LBA starting offset of 2048. Please use an absolute LBA with the /dev/ entry for the full device, rather than a partition name. /dev/sda1 is probably a partition of /dev/sda (?) The absolute LBA of sector 5807104 from /dev/sda1 should be 5809152 Aborting.
    I'm running Natty in a VirtualBox VM on Windows 7. Someone please help. I keep getting this "consistency check" message on boot of my machine, and I think it's because Ubuntu is writing to the same sectors on the VHD too much; I need to get TRIM working on this thing. Thanks.
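    For what it's worth, the second message is hdparm itself saying the sector must be addressed relative to the whole disk (partition offset 2048 + sector 5807104 = 5809152). A sketch of that adjusted read, using the absolute LBA it computed:

        sudo hdparm --read-sector 5809152 /dev/sda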

    Read the article

  • How to remove request blocking on apache reverse proxy after failure of backend before asking backend again

    - by matnagel
    I am working on an apache2 reverse proxy vhost. When the server behind Apache is down, the first request to Apache shows the error page, of course. But on subsequent requests it seems Apache delays for some time before asking the backend server again. During all this time (which is short, but in development I don't want a delay at all) only the Apache error page is shown to the browser, even though the backend server is already up. Where is this setting in Apache, what is this behaviour, and how can I set the delay time to zero? Edit: I am not trying to change the timeout for a single request. I want to change the blocking time. It is my experience that Apache blocks further requests for a certain time before asking a backend server again that has failed once. Edit 2: This is what Apache delivers:
        Service Temporarily Unavailable
        The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later.
        Apache/2.2.8 (Ubuntu) PHP/5.2.4-2ubuntu5.7 with Suhosin-Patch proxy_html/3.0.0 Server at localhost Port 80
    After hitting Ctrl-R in Firefox for 60 seconds, the page finally appears.
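    In case it helps, the behaviour described matches mod_proxy placing a failed worker in an error state and skipping it for a while (60 seconds by default, controlled by the worker's retry parameter). A sketch of disabling that window, assuming a plain ProxyPass setup:

        # retry=0 makes Apache retry the backend on every request instead
        # of blocking for the default 60-second error window.
        ProxyPass / http://backend.example.com/ retry=0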

    Read the article

  • How can I automatically restart Apache and Varnish if can't fetch a file?

    - by Tyler
    I need to restart Apache and Varnish and email some logs when the script can't fetch robots.txt, but I am getting an error: ./healthcheck: 43: [[: not found. My server is Ubuntu 12.04 64-bit.
        #!/bin/sh
        # Check if can fetch robots.txt if not then restart Apache and Varnish
        # Send last few lines of logs with date via email
        PATH=/bin:/usr/bin
        THEDIR=/tmp/web-server-health
        EMAIL=[email protected]
        mkdir -p $THEDIR
        if ( wget --timeout=30 -q -P $THEDIR http://website.com/robots.txt )
        then
            # we are up
            touch ~/.apache-was-up
        else
            # down! but if it was down already, don't keep spamming
            if [[ -f ~/.apache-was-up ]]
            then
                # write a nice e-mail
                echo -n "Web server down at " > $THEDIR/mail
                date >> $THEDIR/mail
                echo >> $THEDIR/mail
                echo "Apache Log:" >> $THEDIR/mail
                tail -n 30 /var/log/apache2/error.log >> $THEDIR/mail
                echo >> $THEDIR/mail
                echo "AUTH Log:" >> $THEDIR/mail
                tail -n 30 /var/log/auth.log >> $THEDIR/mail
                echo >> $THEDIR/mail
                # kick apache
                echo "Now kicking apache..." >> $THEDIR/mail
                /etc/init.d/varnish stop >> $THEDIR/mail 2>&1
                killall -9 varnishd >> $THEDIR/mail 2>&1
                /etc/init.d/varnish start >> $THEDIR/mail 2>&1
                /etc/init.d/apache2 stop >> $THEDIR/mail 2>&1
                killall -9 apache2 >> $THEDIR/mail 2>&1
                /etc/init.d/apache2 start >> $THEDIR/mail 2>&1
                # prepare the mail
                echo >> $THEDIR/mail
                echo "Good luck troubleshooting!" >> $THEDIR/mail
                # send the mail
                sendemail -o message-content-type=html -f [email protected] -t $EMAIL -u ALARM -m < $THEDIR/mail
                rm ~/.apache-was-up
            fi
        fi
        rm -rf $THEDIR
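    A note on that error: [[ is a bash/ksh construct, and on Ubuntu 12.04 /bin/sh is dash, which does not support it; hence "[[: not found" on the line that tests the flag file. Two usual fixes, assuming the script's intent is unchanged:

        # Option 1: run the script under bash by changing its first line to:
        #   #!/bin/bash
        # Option 2: stay with /bin/sh and use the portable single-bracket test:
        if [ -f ~/.apache-was-up ]
        then
            echo "flag file present"
        fi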

    Read the article

  • I cant add PPA repositories!

    - by Karthik Krishna
    For example, after running this command: sudo add-apt-repository ppa:tualatrix/ppa I get the following output:
        Traceback (most recent call last):
          File "/usr/bin/add-apt-repository", line 125, in <module>
            ppa_info = get_ppa_info_from_lp(user, ppa_name)
          File "/usr/lib/python2.7/dist-packages/softwareproperties/ppa.py", line 80, in get_ppa_info_from_lp
            curl.perform()
        pycurl.error: (6, "Couldn't resolve host 'launchpad.net'")
    Why is this so? I just installed Ubuntu 12.04 LTS, and it works fine. I have updated the system and installed all required packages. But as soon as I want to add more package sources, like PPAs, I am not able to. So far I have not been able to add any PPA. I am working behind a proxy.
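    Since the traceback shows launchpad.net failing to resolve, the proxy is the likely culprit: add-apt-repository runs under sudo, which strips the proxy environment variables by default. A commonly suggested workaround (the proxy address is a placeholder):

        export http_proxy=http://proxy.example.com:8080
        export https_proxy=$http_proxy
        sudo -E add-apt-repository ppa:tualatrix/ppa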

    Read the article

  • Month in Geek: January 2011 Edition

    - by Asian Angel
    With the end of the first month of 2011 upon us, it is time to look back at our best and brightest for the month. Join us as we present the ten hottest articles from January for your reading enjoyment.

    Read the article

  • NTFS Issues in Windows 7 and 2008 R2 - 'Is it a Bug?'

    - by renewieldraaijer
    I have been using the various versions of the Microsoft Windows product line since NT4, and I really thought I knew the ins and outs of the NTFS filesystem by now. There were always a few rules of thumb for understanding what happens if you move data around. These rules were: "If you copy data, the copied data will inherit the permissions of the location it is being copied to. The same goes for moving data between disk partitions. Only when you move data within the same partition are the permissions kept." Recently I was asked to assist in troubleshooting some NTFS-related issues. This forced me to have another good look at this theory. To my surprise I found out that this theory no longer completely holds. Apparently some things have changed since the release of Windows Vista / Windows 2008. Since the release of these operating systems, a move within the same disk partition results in the data inheriting the permissions of the location it is being copied into. A major change in the NTFS filesystem, you would think! Not quite! The above only applies when the move operation is performed using Windows Explorer. A move using the 'move' command from within a cmd prompt, for example, retains the NTFS permissions, just like before in Windows XP and older systems. Conclusion: the Windows Explorer is responsible for changing the ACLs of the moved data. This is a remarkable change, but if you follow this theory, the resulting ACL after a move operation is still predictable. We could say that since Windows Vista and Windows 2008, a new rule set applies: "If you copy data, the copied data will inherit the permissions of the location it is being copied to. The same goes for moving data between disk partitions and within disk partitions. Only when you move data within the same partition by using something other than Windows Explorer are the permissions kept." The above behavior should be unchanged in Windows 7 / Windows 2008 R2 compared to Windows Vista / 2008. But somehow the NTFS permissions are not so predictable in Windows 7 and Windows 2008 R2. Moving data within the same disk partition one time results in the permissions being kept, and the next time results in inherited permissions from the destination location. I will try to demonstrate this in a few examples:
    Example 1 (incorrect behavior): Consider two folders, 'Folder A' and 'Folder B', with the following permissions configured. Now we create the test file 'test file 1.txt' in 'Folder A' and afterwards move this file to 'Folder B' using Windows Explorer. According to the new theory, the file should inherit the permissions of 'Folder B', and therefore 'Group B' should appear in the ACL of 'test file 1.txt'. In the screenshot below the resulting permissions are displayed: the permissions from the originating location are kept, while the permissions of 'Folder B' should have been inherited.
    Example 2 (correct behavior): Again, consider the same two folders. This time we make a small modification to the ACL of 'Folder A': we add 'Group C' to the ACL, and again we create a file in 'Folder A', which we name 'test file 2.txt'. Next, we move 'test file 2.txt' to 'Folder B'. Again, we check the permissions of 'test file 2.txt' at the target location. We can now see that the permissions are inherited. This is what should be happening, and can be considered 'correct behavior' for Windows Vista / 2008 / 7 / 2008 R2.
    It remains uncertain why this behavior is so inconsistent. At this time, it is under investigation with Microsoft Support. The investigation has been going on for the last two weeks, and it is beginning to look like there is no rational reason for this, other than a bug in the Windows Explorer in Windows 7 and 2008 R2. As soon as there is any certainty on this, I will note it here in this blog. The examples above are harmless tests using my own laptop. If you create the same set of folders and groups, and configure exactly the same permissions, you will see exactly the same behavior. Be sure to use Windows 7 or Windows 2008 R2. Initially the problem arose at a customer site, where move operations on data on the file server by users would result in unpredictable results. This resulted in the wrong set of people having access permissions on data that they should not have had permissions to. Of course this is something we want to prevent at all costs. I have also done several tests with move operations by using the move command in a cmd prompt. This way the behavior is always consistent. The inconsistent behavior is only exposed when using the Windows Explorer to initiate the move operation, and only when using Windows 7 or Windows 2008 R2 systems. It is evident that this behavior changes when the ACL of a folder has been changed, for example by adding an extra entry. The reason for this remains uncertain, though. To be continued... A Dutch version of this post can be found at: http://blogs.platani.nl/?p=612
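    For readers who want to reproduce the comparison, the ACLs before and after a move can be inspected from an elevated command prompt (the paths are illustrative):

        rem Inspect the ACL, move via cmd (permissions should be retained),
        rem then inspect again at the destination:
        icacls "D:\Folder A\test file 1.txt"
        move "D:\Folder A\test file 1.txt" "D:\Folder B"
        icacls "D:\Folder B\test file 1.txt"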

    Read the article

  • How to force certain traffic through GRE tunnel?

    - by wew
    Here's what I do.
    Server (public internet IP is 222.x.x.x):
        echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf
        sysctl -p
        iptunnel add gre1 mode gre local 222.x.x.x remote 115.x.x.x ttl 255
        ip add add 192.168.168.1/30 dev gre1
        ip link set gre1 up
        iptables -t nat -A POSTROUTING -s 192.168.168.0/30 -j SNAT --to-source 222.x.x.x
        iptables -t nat -A PREROUTING -d 222.x.x.x -j DNAT --to-destination 192.168.168.2
    Client (public internet IP is 115.x.x.x):
        iptunnel add gre1 mode gre local 115.x.x.x remote 222.x.x.x ttl 255
        ip add add 192.168.168.2/30 dev gre1
        ip link set gre1 up
        echo '100 tunnel' >> /etc/iproute2/rt_tables
        ip rule add from 192.168.168.0/30 table tunnel
        ip route add default via 192.168.168.1 table tunnel
    Up to here, all seems to be going right. But then the first question: how do I use the GRE tunnel as the default route? The client computer is still using the 115.x.x.x interface as its default. Second question: how do I force only ICMP traffic to go through the tunnel, and everything else through the default interface? I tried doing this on the client computer:
        ip rule add fwmark 200 table tunnel
        iptables -t mangle -A OUTPUT -p udp -j MARK --set-mark 200
    But after doing this, my ping program times out (if I don't run the two commands above and use ping -I gre1 ip instead, it works). Later I want to do something else as well, like sending only UDP port 53 through the tunnel, etc. Third question: on the client computer, I force one MySQL program to listen on the gre1 interface 192.168.168.2. On the client computer there is also one more public interface (IP 114.x.x.x). How do I forward traffic properly using iptables and route so MySQL also responds to requests coming from this 114.x.x.x public interface?
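    On the second question, note that the mangle rule above marks UDP while the stated goal was ICMP. A sketch that matches the intent, reusing the mark value and routing table from the question:

        # Mark only ICMP and route marked packets through the tunnel table:
        iptables -t mangle -A OUTPUT -p icmp -j MARK --set-mark 200
        ip rule add fwmark 200 table tunnel
        # Later, to steer only DNS, match UDP port 53 instead:
        iptables -t mangle -A OUTPUT -p udp --dport 53 -j MARK --set-mark 200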

    Read the article
