Search Results

Search found 11674 results on 467 pages for 'adding'.

  • Perfectly reproducible SELECT statement default ordering issue...

    - by Dave
    Hi, I've recently been chasing an issue with a client's db... a solution has been found, but it has proved impossible to recreate. Essentially, we're doing a Select * from MyTable where ArbitraryColumn = 75, where MyTable has an identity column, called 'MyIdentityColumn', incremented by one on each insert. Naturally, I would assume that the rows would be returned in the order in which they were inserted (a bad assumption, but one forced onto me through an inherited application, which has since been patched). I restored the same backup to my local machine - same OS, same SQL server version (200 sp3), same collation - that was restored as the test DB on the client site. When I perform the above select locally, I get the rows in order of insert (i.e. identity column ordered ascending). On the client, the order seems random (but it is the same 'random' order each time)... A few other points: I have the same collation on my test server as the client; the same DB backup restored to a test DB only I can access; the same SQL server version and service pack; the same OS; and the test DB is a new DB with a new log file and MDF. I have the problem 'solved' by adding an explicit order by clause, but I want to understand the cause of the issue, given that my attempts to recreate it have been futile while it is perfectly reproducible on the client server... Thanks in advance, Dave
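
    A note on the workaround (general SQL Server behaviour, not specific to this database): without an ORDER BY, the engine may return rows in whatever order the chosen execution plan produces, so identical data can legitimately come back in different orders on different servers. A minimal sketch using Dave's own table and column names:

        -- Without ORDER BY, row order is an artifact of the execution plan;
        -- an explicit ORDER BY is the only guaranteed ordering.
        SELECT *
        FROM MyTable
        WHERE ArbitraryColumn = 75
        ORDER BY MyIdentityColumn ASC;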

  • SharePoint Central Administration stuck / high CPU usage

    - by johnnyb10
    I'm using WSS 3 and I recently added a new web application to my SharePoint server. After adding it, I wasn't able to open the Central Administration site. I also noticed that there was a w3wp.exe error (Event ID 1000) in the Event Viewer. The situation now is that the w3wp.exe process is hovering around 50% CPU usage continuously. I installed a program called IIS Peek, and it shows continuous GET requests on the Central Administration site; this happens even if I stop the Central Administration site in IIS. The IP address identified in the GET requests is my workstation's, which is what I used to attempt to access Central Administration after I created the new web application. Can someone explain what's going on and how I might fix it? It seems as if my computer tried to access Central Administration and then hung, but the page requests that were happening at the time are somehow continuing over and over again. So my two problems are the inability to access Central Administration and the CPU usage of w3wp.exe, which I'm assuming are two symptoms of the same problem. I'd like to know if there's anything I can do besides restarting IIS, because we have clients accessing other sites on this server. Thanks.

  • How can I keep SSH's known_hosts up to date (semi-securely)?

    - by Chas. Owens
    Just to get this out in front so I am not told not to do this: the machines in question are all on a local network with little to no internet access (they aren't even well connected to the corporate network); everyone who has the ability to set up a man-in-the-middle attack already has root on the machines; and the machines are reinstalled as part of QA procedures, so having new host keys is important (we need to see how the other machines react) - I am only trying to make my machine nicer to use. I do a lot of reinstalls on machines, which changes their host keys. This necessitates going into ~/.ssh/known_hosts on my machine, blowing away the old key, and adding the new key. This is a massive pain in the tuckus, so I have started considering ways to automate it. I don't want to just blindly accept any host key, so patching OpenSSH to ignore host keys is out. I have considered creating a wrapper around the ssh command that will detect the error coming back from ssh and present me with a prompt to delete the old key or quit. I have also considered creating a daemon that would fetch the latest host key from a machine on a whitelist (there are about twenty machines that are being constantly reinstalled) and replace the old host key in known_hosts. How would you automate this process?
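
    A minimal sketch of the whitelist idea (the whitelist file name is illustrative; ssh-keygen -R and ssh-keyscan are standard OpenSSH tools). The usual caveat is that ssh-keyscan trusts whatever answers, which is acceptable here only because of the threat model stated above:

        #!/bin/sh
        # Refresh known_hosts entries for the frequently reinstalled hosts.
        while read -r host; do
            ssh-keygen -R "$host" >/dev/null 2>&1                # drop the stale key
            ssh-keyscan -t rsa "$host" >> ~/.ssh/known_hosts     # record the new one
        done < ~/.ssh/reinstall-whitelist.txt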

  • HTTP header 304 and caching?

    - by Royi Namir
    Our company uses these settings (don't ask me why): for every request, they want a new request to the server. This is an intranet system which uses only IE. They defined this in the IE settings (shown in a screenshot that is not included here). We also have Windows authentication (NTLM) in IIS 7. I have 2 questions, please. Question #1) When the browser makes a request for a CSS file (leave the 401 response for now - this is how NTLM works), it requests it with an If-Modified-Since header. Why is it adding this header? How can I configure it? Why doesn't it use the settings from IE and try to download the file each time, as I showed in the first picture (also not included here)? Question #2) The response (after the NTLM negotiation) for that was Not Modified, which is a 304 status, and I assume it's because we sent the request with the If-Modified-Since header. But there is a problem: it actually tells the browser to load from its cache, yet I told it explicitly in the IE settings not to load from cache. What am I missing here? Thanks a lot.
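
    For reference, a sketch of how the server side can take this decision away from the browser for static content in IIS 7 (this uses the standard <clientCache> element; whether disabling caching is actually desirable is a separate question):

        <!-- web.config: send Cache-Control: no-cache for static files so the
             browser revalidates/re-downloads instead of trusting its cache -->
        <system.webServer>
          <staticContent>
            <clientCache cacheControlMode="DisableCache" />
          </staticContent>
        </system.webServer>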

  • How Does EoR Design Work with Multi-tiered Data Center Topology

    - by S.C.
    I just did a ton of reading about the different multi-tier network topology options as outlined by Cisco, and now that I'm looking at the physical options (End of Row (EoR) vs Top of Rack (ToR)), I find myself confused about how these fit into the logical constructs. With ToR the mapping is 1:1: at the top of each rack there are switches that essentially act as the access layer. They connect via fiber to other switches, maybe chassis-based, that act as the aggregation layer, which then connect to the core layer. With EoR it seems that the servers connect directly to the aggregation layer, skipping the access layer altogether, by plugging directly into what are typically chassis switches. With EoR, then, does the standard 3-tier model become a 2-tier model, where the servers go to the chassis switch, which goes straight to the core switch? The reason it matters to me is that my understanding was that the 3-tier model was more desirable due to less complexity: the aggregation switch pair acts as default gateway and does routing, and if you use up all of the ports in your aggregation-layer pair, it's much more complicated to add additional switches there than to simply add more switches at the access layer. Are there other downsides to this layout? Does the 3-tier architecture still apply in some way to EoR? Thanks.

  • Performance data collection for short-running, ephemeral servers

    - by ErikA
    We're building a medical image processing software stack, currently hosted on various AWS resources. As part of this application, we have a handful of long-running servers (database, load balancers, web application, etc.). Collecting performance data on those servers is quite simple - my go-to recipe of Nagios (for monitoring/notifications) and Munin (for collecting performance data and displaying trends) will work just fine. However, as part of this application, we are constantly starting up and terminating compute instances on EC2. In typical usage, these compute instances start up, configure themselves, receive a job from a message queue, and then get to work processing that job, which takes anywhere from 15 minutes to over 8 hours. After job completion, these instances get terminated, never to be heard from again. What is a decent strategy for collecting performance data on these short-lived instances? I don't necessarily need monitoring on them - if they fail for whatever reason, our application will detect this and handle re-starting the job on another instance or raising a flag so an administrator can take a look at things. However, it would still be useful to collect information like CPU (user, idle, iowait, etc.), memory usage, network traffic, disk read/write data, etc. In our internal database, we track the instance ID of the machine that runs each job, and it would be quite helpful to be able to look up performance data for a specific instance ID for troubleshooting and profiling. Munin doesn't seem like a great candidate, as it requires maintaining a list of Munin nodes in a text file - far from ideal for an environment with a high amount of churn - and for the short amount of time each node will be running, I'd rather keep the full-resolution data indefinitely than have RRD water the data down over time. In the end, my guess is that this will require a monitoring engine that uses a database (MySQL, SQLite, etc.) for configuration and data storage, and exposes an API for adding/removing hosts and services. Are there other things I should be thinking about when evaluating options? Perhaps I'm over-thinking this, though, and just ought to run sar at 1-minute intervals on these short-lived instances and collect the sar db files prior to termination.
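
    On the sar idea at the end, a minimal sketch of what that could look like (the bucket name, paths, and upload tool are illustrative assumptions; any copy-off-box mechanism would do):

        # crontab entry on each compute instance: one sample per minute
        # (sa1 ships with the sysstat package; its path varies by distro)
        * * * * * /usr/lib/sa/sa1 1 1

        # in the job-completion/shutdown hook: tag the binary sar files with the
        # EC2 instance ID so they can be matched to job records later
        INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
        aws s3 cp /var/log/sa/ "s3://example-perf-archive/$INSTANCE_ID/" --recursive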

  • GlassFish v2.1 -- getting Application Client and EclipseLink to work together?

    - by Nick
    We are trying to use EclipseLink 1.1 with GlassFish v2.1, following the instructions at: http://wiki.glassfish.java.net/Wiki.jsp?page=FaqEclipseLinkGlassFishV2 I adapted the instructions for the appclient script on Linux by adding the lines: APPCPATH=$APPCPATH:$AS_INSTALL/lib/eclipselink-1.1.1.jar export APPCPATH to the appclient shell script. This, however, is not working. On running the application client (using GlassFish's Web Start), I get the error: WARNING: "IOP00810257: (MARSHAL) Could not load class org.eclipse.persistence.indirection.IndirectList" Has anyone else succeeded in getting GlassFish v2.1 to work with EclipseLink? Or any ideas on what I might be doing wrong? I found this bug report: https://glassfish.dev.java.net/issues/show_bug.cgi?id=8204 where Tim Quinn (tjquinn) said: "App client container support for persistence is not yet in place". I think this refers only to GlassFish v3, and it should be working in GlassFish v2. Is this correct? I'm working on the assumption that this will work once the ACC knows where to find the EclipseLink jar. Thanks in advance, Nick.

  • Turn computer into DAS (Direct Attached Storage)

    - by Damon
    Can we build a direct attached storage unit by taking a computer/server, adding an HBA, and installing some appropriate software? We would use Debian as the host OS for both the DAS and the server. If so, what software do we use? And do we simply need an HBA for the DAS and the server, or do we need more hardware? The goal is to use an older server that does not have enough room for drives but does have ECC memory, server processors, redundant power supplies, dual NICs, etc. Then find any boxes, server or not - the key being enough room for 8-12 drives, fans, etc. - and turn them into DAS units; build two of these and have them connected to the server. Eventually we want to have two servers using DRBD and associated services like Heartbeat and Pacemaker to create an HA setup for our server(s), but that will take a long time to configure, since I have no experience with anything related to DRBD (yet) and have a learning curve to get past, not to mention the additional cost of more hardware (two servers vs one).

  • Any dangers in using DDR memory with a higher frequency than the FSB?

    - by raw_noob
    I'm looking to upgrade memory in an older motherboard. The processor is an AMD Sempron 2500+ with a maximum speed of 333/166MHz. The motherboard is an MSI MS-7061 (KV3M-V), which accepts up to 2GB of DDR memory (maximum PC2700) in 2 slots and has a maximum FSB of 333MHz. The board does not have dual-channel support. Existing memory includes a stick of 512MB PC3200, which seems to be running OK (presumably at PC2700) but is rated 200MHz, which is below the FSB speed. The other stick is 256MB PC2100/133MHz, again below the FSB speed. (All figures from CPU-Z.) I have a chance to acquire a single used stick of PC3200/400MHz memory very cheaply. Crucial's system scanner seems to suggest that this will be OK with my system, but other sites have suggested that running memory with a higher frequency than the FSB can cause instability. Is this true? Would I be better off waiting until I can buy the correct PC2700/333MHz stick? I'm assuming that the mixed memory I have at present is running as 768MB at 133MHz. Is this a reasonable assumption? If so, would you expect the performance difference between 768MB/133MHz and 1GB/333MHz to be very noticeable? If I install the new 1GB/400 or 333MHz stick in slot 1, am I right in thinking that adding back the existing 512MB/200MHz stick in slot 2 would pull the whole 1.5GB of system memory down to 200MHz? If so, which would be better: 1.5GB/200MHz, or the single 1GB stick at the full 333MHz that the FSB permits? Is more headroom more important than extra speed? Any help - or even opinions - gratefully received. I can't find reliable information, and I can't afford to make expensive mistakes.

  • Routing for remote gateway over VPN in Vista/7 broken?

    - by Raymond
    Hi, the situation is as follows. A home computer running Windows 7 sets up a VPN connection (L2TP + IPSec, "use remote gateway" disabled) to the office; the home subnet is 192.168.64.x. The office has a Draytek Vigor 2920 router; its subnet is 192.168.32.x. What happens? The VPN connection itself works fine, and I can ping any machine on the remote network. But when trying to open a webpage from a host on the remote network, the remote server logs the incoming request, yet the browser hangs on "waiting for..." and eventually times out. I have observed this problem on Windows Vista and Windows 7; on Windows XP, however, there is no such problem. The only clue I have is that there is a difference in the routing between XP and Vista/7. The output of "route print" on Windows XP looks like this: (see www.latunyi.com/routing_xp.png). So here the gateway for the 192.168.32.x subnet is the IP address that the local computer has in the remote network. The output of "route print" on Windows 7 (and Windows Vista) looks like this: (see www.latunyi.com/routing_win7.png). Now the gateway for the 192.168.32.x subnet is the IP address of the VPN router (32.1). I don't know if that causes this trouble, but it seems a bit strange. Enabling "use default gateway on remote network" doesn't make a difference. Using the new "disable class based route addition" option in Windows 7 only makes the route to the VPN router disappear. I am really puzzled here. I assume VPN routing can't be broken in both Vista and Windows 7, and this should just work without manually adding routes. I hope someone has a solution for this problem :-). Thanks!
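
    As a diagnostic step (not a confirmed fix), the class-based route can be replaced by hand on the Windows 7 client to mimic the XP table, i.e. pointing the remote subnet at the address this client receives in that subnet. The 192.168.32.50 below is an illustrative stand-in for that address; run in an elevated prompt after the VPN is up:

        rem replace the automatically added route with an XP-style one
        route delete 192.168.32.0
        route add 192.168.32.0 mask 255.255.255.0 192.168.32.50

    If web traffic works after this, the difference in the automatically chosen gateway is indeed the culprit.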

  • Windows-to-Linux: PuTTY with SSH and private/public key pair

    - by Johnny Kauffman
    I spent about 3 hours trying to figure out how to connect to a Linux box from my Windows machine using PuTTY without having to send the password. This is connecting to an Ubuntu server that is using OpenSSH. The private key is SSH-2 RSA, 1024 bits, and I am connecting using SSH-2. I have run into the more common problems already: PuTTY generated the public key in the "wrong format". I have corrected this (as seen on this blog post). However, since I am not yet connected, I cannot absolutely confirm that this file is in the correct format. The key is all on a single line now, and I have tried adding/removing line breaks at the end of the file. I've also tried the public-key doctoring process a few times to ensure that I haven't flubbed up the manual conversion. Even so, I have no way to verify accuracy here. The permissions were at one point wrong as well - specifically, the file had too many permissions. I had to solve this too, and I know it got past this because I no longer see a related error in /var/log/auth.log. I've tried both authorized_keys and authorized_keys2 in case the server has an old version of OpenSSH, but this changed nothing. I do have access as a user: after this keyfile stuff fails, I can enter my password instead. The only remaining nibble of information I have is that it claims I have the alleged password wrong: sshd[22288]: Failed password for zzzzzzz from zz.zz.zz.zz port 53620 ssh2 Even so, as far as I can tell, this is just a lazy try/catch somewhere, since I don't think there's a password involved at all. I see nothing else in any of the /var/log files of use. What else could be wrong?
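
    One more thing worth checking, since the key is being rejected silently before the password fallback (standard OpenSSH behaviour, not specific to this setup): with StrictModes enabled, sshd ignores authorized_keys if the home directory or ~/.ssh is group- or world-writable, not just if the key file itself is. A quick checklist on the server:

        # sshd refuses to use authorized_keys when any of these are too open
        chmod go-w ~
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys
        # then retry while watching the server's explanation in real time
        tail -f /var/log/auth.log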

  • Strange problem with Google Mail and IMAP on Outlook 2007

    - by Alex C.
    I work for a small non-profit organization. We have about 35 administrative employees who use e-mail. We're on a Windows network with a domain. Everyone is running XP Pro and Office 2007 with all updates/patches. We used to use POP3 mail through a local provider. However, we recently signed up for a free Google Apps account, and we switched to IMAP mail through Google. Everyone uses Outlook 2007 as the client. For about ten days, everything was working fine. Yesterday afternoon, we suddenly developed a strange and annoying problem: every time you send an e-mail message, a copy of your outgoing message shows up in your inbox. It's as if you're adding your own address to the CC: line of every message. Nothing has changed on our end. I was hoping that the problem was a temporary glitch that would resolve itself, but here we are, about 24 hours later, and it's still happening. I searched Twitter, and there were a handful of vague messages about issues with Google mail and IMAP, but I didn't see any references to this specific problem. Any thoughts on what's going on here and how to fix it?

  • How do I configure VMware View location-based printing to use Active Directory Groups?

    - by Jason Pearce
    I am attempting to configure VMware View 4.5's location-based printing, which leverages an included OEM version of ThinPrint, to assign printers to Active Directory groups. The location-based printing feature maps printers that are physically near client systems to VMware View desktops. I am using the Active Directory group policy setting AutoConnect Location-based Printing for VMware View, which is located in the Microsoft Group Policy Object Editor in the Software Settings folder under Computer Configuration. The AutoConnect Location-based Printing for VMware View setting appears to be just a name translation table. It permits me to assign a specific printer or printers to an IP range, client name, MAC address, user, or user group. I'm attempting to assign printers to Active Directory user groups. I have created a new Active Directory group for each printer that I intend to use in VMware View desktop pools. I will then assign Active Directory users to the groups that represent each network printer. Example: doej is a member of the PTR-FLOOR2-NORTH-ROOM255 Active Directory group. Using AutoConnect, I assigned the group to receive a network printer by adding PTR-FLOOR2-NORTH-ROOM255 in the User/Group column. Problem: when doej logs in to his VDI session, the printer is not present. However, if I use a wildcard "*" in the User/Group column instead of the specific PTR-FLOOR2-NORTH-ROOM255 group, the printer is present and functions as designed. Alternatives: I have tried assigning printers to Active Directory groups within AutoConnect in the following ways, all unsuccessful: PTR-FLOOR2-NORTH-ROOM255; domainexample\PTR-FLOOR2-NORTH-ROOM255; domainexample.local\PTR-FLOOR2-NORTH-ROOM255. Confirmation: the information used to map the printer to the VMware View desktop is stored in a registry entry on the View desktop in HKEY_LOCAL_MACHINE\SOFTWARE\Policies\thinprint\tpautoconnect. For each of these examples, I have reviewed the registry entry and can confirm that the desktop is receiving the information from the AutoConnect translation table. Summary: can anyone provide an example of how to configure VMware View 4.5's location-based printing so that I may assign network printers to Active Directory groups via the included AutoConnect tool? I would welcome a clear example of a working configuration. Thank you.

  • How to install/configure ffmpeg to compress mp4 videos for flash player delivery?

    - by Andrew Fulton
    We have a Flash web app that creates interactive video, and we are using ffmpeg to do some compression/resizing when a user "publishes" their project. The user can upload FLV files and MP4 files, both of which play fine in the Flash UI before publishing. After publishing, the FLV files work fine, but the MP4 files will not play in the Flash player: audio plays but video won't. The MP4 files play fine if I download them and open them in the QuickTime player, but if I open them in the Adobe Media Player it reports "The media file does not contain a supported video track". If I open the movie inspector in QuickTime, it tells me that the original file is an "h264" video and the ffmpeg-processed ones are "mpeg-4". I have tried forcing it to H.264 by adding flags like -f h264 and -vcodec h264, but I get a screenful of errors (no frame, illegal POC type, sps_id out of range) ending with: Could not find codec parameters (Video: h264) h264 shows up if I run ffmpeg -formats and ffmpeg -codecs, and as I said the files play fine in QuickTime. Is there anything else I need to do to convince the Flash player to play them? Is there anything else I need to tell you about the server that will help?
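
    A likely explanation (an inference from the error messages, not something confirmed in the question): in ffmpeg, h264 is the name of the decoder, while the H.264 encoder is libx264, so forcing -vcodec h264 on an output file fails in roughly the way described. A hedged sketch of a Flash-compatible re-encode, copying the audio track since it already plays (file names are illustrative):

        # re-encode the video track to H.264 with libx264; leave the working audio untouched
        ffmpeg -i input.mp4 -vcodec libx264 -acodec copy output.mp4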

  • Unable to connect to second name of Windows Server 2008 R2 machine from XP

    - by Tumba
    I used the command netdom computername /add:newname.domainname.com to add a second name to a server running Windows Server 2008 R2. After restarting the server, I had DNS "A" entries for both names. In addition, the second name was added to HKLM\SYSTEM\CurrentControlSet\services\LanmanServer\Parameters\OptionalNames, which I believe should have taken care of any NetBIOS resolution. From my Windows 7 workstation, I can ping both names, and running net view on both names reveals the same list of resources. From Windows XP, I can ping both names, but net view only works on the first name. Running net view on the second name returns: System error 52 has occurred. You were not connected because a duplicate name exists on the network. Go to System in Control Panel to change the computer name and try again. What do I need to do to make the second name usable from XP clients? Update: I was able to resolve the problem by adding the REG_DWORD value DisableStrictNameChecking = 1 to HKLM\SYSTEM\CurrentControlSet\services\LanmanServer\Parameters, then restarting the Server service. However, I do not understand why this was necessary.
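
    For anyone hitting the same System error 52, a sketch of the fix from the update expressed as commands (standard reg.exe/net.exe usage; DisableStrictNameChecking tells the SMB server to accept sessions addressed to names other than its primary computer name, which is why the extra name is refused until it is set):

        reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v DisableStrictNameChecking /t REG_DWORD /d 1
        rem restart the Server service (this also stops services that depend on it)
        net stop server /y && net start server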

  • Backup files from Linux client to Windows Server

    - by Andrew
    I'm trying to back up my files from my Linux box to my Windows Server 2008 machine as a push, such that when I delete them from my Linux box, they remain on my Windows Server. I've found lots of sources that are similar, but most results cover Windows to Linux. I managed to find slightly more similar cases like "Using rsync and cygwin to Sync Files from a Linux Server to a Windows Notebook PC" and "rsync from Windows PC to remote Linux server", with the most similar being a backup from Linux to Windows Server, but through a pull from the Windows Server. Initially, I used Unison because I thought having the 2-way capability would come in handy, and I would just have to set some configuration to make it 1-way. Unfortunately, I couldn't find the right configuration and only managed to synchronize using the command unison "profile" -ui text -auto -silent. When I deleted the files on my Linux box, the files on the server got deleted too, which, of course, isn't what I want. When I looked for Unison options, I only discovered the -force option, which didn't help, since what I wanted was an incremental update to the server. I found out I could achieve this using rsync and the -a (archive) option, which would keep adding files even if I deleted them from my Linux box. I installed Cygwin on my Windows Server and configured an SSH daemon, but I can't seem to get it working. I've also already configured Windows Firewall to open port 22 (both inbound and outbound). I used the following command from my Linux box: rsync -avrzn /folder/to/be/backed/up/ [email protected]:/cygdrive/c/place/to/store/backed/up/files (a - archive, v - verbose, r - recurse into subdirectories, z - compress, n - dry run) but it just won't work. Can anyone help me out? I don't mind using either Unison or rsync, as long as it achieves what I want.
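
    One thing to rule out first, just from the flags as listed: -n is rsync's dry-run switch, so the command as written only reports what it would transfer and never copies anything; -r is also redundant, since -a implies it. A sketch of the same push without them (host and paths are the question's own placeholders):

        rsync -avz /folder/to/be/backed/up/ \
            [email protected]:/cygdrive/c/place/to/store/backed/up/files

    Since --delete is deliberately omitted, files removed on the Linux side stay on the server, which matches the stated goal.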

  • Removing Paths/Landing Pages from SharePoint Search Results

    - by j.strugnell
    Hi there, we've been asked by a client to remove a number of pages from showing up in their public website's search results page. I've been into the SSP and created crawl rules to remove these pages. All seemed to work OK, but we have an issue in that landing pages still show up in their "www.domain.com/sitearea/" form, though not in their "www.domain.com/sitearea/pages/default.aspx" form. For each page of this type we have created one rule to "Exclude" the "aspx" path and another rule to include the "/" path but to "Follow links on the URL without crawling the URL itself". We tried adding rules to exclude the "/" form, but that only resulted in everything underneath it being excluded. Does anybody know how to remove both the "area/pages/default.aspx" and the "area/" paths from search results? I'm not sure if it's the done thing to ask 2 questions in one, but this is in a similar vein, so it should be OK: I was wondering if anyone knew of a tool (or if it is possible) to allow site admins to exclude pages from search results (not via SSP/crawl rules). I know they can do it at the site level, but I was wondering if anything out there enables this at the page level, through either Page or Site Settings?

  • Dell PowerEdge T710: how to add a new hard disk?

    - by user1340802
    I need to add a new hard disk to a PowerEdge T710 running VMware ESXi 4. The disk is a "normal" 1TB desktop hard disk - that is, it did not come from Dell, and I have no caddy for mounting it in any of the front bays. I would like to add this disk, as easily as possible, for a virtual machine that needs the space. I have found that there is an unused SATA cable with its power connector, so may I just plug the disk into those and use the empty 5.25" slot available under the CD drive (with a 5.25"-to-3.5" bay adapter)? (Even though this way I seem to bypass the RAID controller that owns the front bays.) That route looks easier than adding the disk to the already-defined RAID array; besides, I am not sure how to do that, and I would not want to risk breaking things that already work. What other steps would I have to take so that the disk is seen and can be initialized? (Sorry, I am a real beginner at VMware ESXi and PowerEdge management :/ I have seen that there is some management from the BIOS (Ctrl+R at start-up), but I am really not sure of the steps needed...) Thank you, best.

  • VCL - configuration for Magento and Varnish 3.0.2

    - by Tomas
    I would like to kindly ask if there's someone who can help me configure Varnish for Magento to reach far more hits. My current ratio from varnishstat is cache_hit=271 to cache_miss=926. I'm asking because I've googled almost every site related to this topic, but 99.9% of the configurations don't work, because of outdated code. Details of my set-up: I use Varnish on port 80, Apache on port 81, PageCache as the Magento Varnish module, APC for PHP speed, and Memcached for dynamic caching. Load time is about 1.5s on the home page from the USA and 2.5s from Europe (Pingdom.com average results); the servers are located in Toronto, Canada. EDIT: This is my full VCL configuration: http://pastebin.com/885BzHCs (I just use xxx.xxx.xxx.xxx in place of my IPs). This is the output of the command varnishtop -i TxHeader -I Cookie: TxHeader Cookie: frontend=965b5...(*lots of numbers); adminhtml=3ae65...(*lots of numbers); EXTERNAL_NO_CACHE=1 - where "(*lots of numbers)" is just my annotation. Any idea how to stop these cookies from making Varnish miss (if I have understood correctly that the cookies are the reason the home page is not cached)? Thank you for any help!
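
    A common starting point for Magento behind Varnish 3 (a sketch to adapt, not a drop-in fix; the cookie names are the ones visible in the varnishtop output above): session cookies make every request look unique to Varnish, so strip them where they cannot matter, such as static assets:

        sub vcl_recv {
            # static files never depend on the frontend/adminhtml session cookies,
            # so dropping the Cookie header lets Varnish cache them
            if (req.url ~ "\.(css|js|png|jpg|gif|ico|swf)(\?.*)?$") {
                unset req.http.Cookie;
            }
        }

    Whether the home page itself can be cached depends on the PageCache module's hole-punching for dynamic blocks, which is a bigger change than a VCL tweak.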

  • grub2 error: out of disk

    - by Nostradamnit
    Hi everyone, I recently installed ArchBang on a machine with Ubuntu and XP. I ran update-grub from Ubuntu and it found the new install and created an entry. However, when I try to boot it, I get:

        error: out of disk
        error: you need to load kernel first

    I've tried several things, including adding a new entry in 40_custom, but nothing changes. Here are the entries I have. The default entry found by update-grub:

        ### BEGIN /etc/grub.d/30_os-prober ###
        menuentry "ArchBang Linux (on /dev/sda4)" {
            insmod part_msdos
            insmod ext2
            set root='(hd0,msdos4)'
            search --no-floppy --fs-uuid --set 75f96b44-3a8f-4727-9959-d669b9244f2a
            linux /boot/vmlinuz26 root=/dev/sda4 rootfstype=ext4 ro xorg=vesa quiet nomodeset swapon
            initrd /boot/kernel26.img
        }
        menuentry "ArchBang Linux Fallback (on /dev/sda4)" {
            insmod part_msdos
            insmod ext2
            set root='(hd0,msdos4)'
            search --no-floppy --fs-uuid --set 75f96b44-3a8f-4727-9959-d669b9244f2a
            linux /boot/vmlinuz26 root=/dev/sda4 rootfstype=ext4 ro xorg=vesa quiet nomodeset swapon
            initrd /boot/kernel26-fallback.img
        }
        ### END /etc/grub.d/30_os-prober ###

    And my custom entry in 40_custom, based on various ideas found on the internet:

        menuentry "ArchBang Linux (on /dev/sda4)" {
            insmod part_msdos
            insmod ext2
            set root='(hd0,msdos4)'
            search --no-floppy --fs-uuid --set 75f96b44-3a8f-4727-9959-d669b9244f2a
            linux /boot/vmlinuz26 root=/dev/disk/by-uuid/75f96b44-3a8f-4727-9959-d669b9244f2a rootfstype=ext4 ro xorg=vesa quiet nomodeset swapon
            initrd /boot/kernel26.img
        }

    I think the problem has something to do with sda4 not being mounted at boot time... Thanks in advance for your help, Sam

  • What's wrong with my .htaccess? Trying to simplify my current code

    - by AlexV
    This is my current .htaccess:

        #If the requested URI does not end with an extension
        RewriteCond %{REQUEST_URI} !\.(.*)
        #If the requested URI is not in an excluded location
        RewriteCond %{REQUEST_URI} !^/(excluded1|excluded2)/
        #Then serve the URI via the mapper
        RewriteRule .* /seo-urls/seo-urls-mapper.php?uri=%{REQUEST_URI} [L,QSA]

        #If the requested URI ends with .php*
        RewriteCond %{REQUEST_URI} \.php.*$ [NC]
        #If the requested file is not seo-urls-mapper.php (avoid .htaccess loop)
        RewriteCond %{REQUEST_FILENAME} (?<!seo-urls-mapper)\.php.*$
        #Then serve the URI via the mapper
        RewriteRule .* /seo-urls/seo-urls-mapper.php?uri=%{REQUEST_URI} [L,QSA]

    Since all the conditions are compatible except the first ones (the no-extension test and the *.php* test), all I should have to do is add [OR] to those two lines, but when I add it, it doesn't work (my no-extension rule no longer works). This is my new (not working) code:

        #If the requested URI does not end with an extension OR the URI ends with .php*
        RewriteCond %{REQUEST_URI} !\.(.*) [OR]
        RewriteCond %{REQUEST_URI} \.php.*$ [NC]
        #If the requested file is not seo-urls-mapper.php (avoid .htaccess loop)
        RewriteCond %{REQUEST_FILENAME} (?<!seo-urls-mapper)\.php.*$
        #If the requested URI is not in an excluded location
        RewriteCond %{REQUEST_URI} !^/(excluded1|excluded2)/
        #Then serve the URI via the mapper
        RewriteRule .* /seo-urls/seo-urls-mapper.php?uri=%{REQUEST_URI} [L,QSA]

    Hopefully someone will be able to clarify this issue... I guess I don't fully understand the use of [OR]. Thanks!
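
    For context on the [OR] semantics (standard mod_rewrite behaviour): [OR] only joins the two adjacent conditions, so the merged block evaluates as (C1 OR C2) AND C3 AND C4. An extensionless URI passes the first pair but is then rejected by the %{REQUEST_FILENAME} (?<!seo-urls-mapper)\.php.*$ condition, which only makes sense for the .php branch - which is why the no-extension rule stopped working. A sketch of one way to merge the blocks that keeps the mapper exclusion inside the .php branch (reusing the lookbehind pattern the original already relies on):

        #(no extension) OR (.php* request that is not the mapper itself)
        RewriteCond %{REQUEST_URI} !\.(.*) [OR]
        RewriteCond %{REQUEST_FILENAME} (?<!seo-urls-mapper)\.php.*$ [NC]
        #AND not in an excluded location
        RewriteCond %{REQUEST_URI} !^/(excluded1|excluded2)/
        RewriteRule .* /seo-urls/seo-urls-mapper.php?uri=%{REQUEST_URI} [L,QSA]

    Note this applies the exclusion list to .php requests as well, which the merged attempt above did anyway.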

  • daily rsync backups with hard links, checksums, and a new computer

    - by user75058
    I back up my laptop to a Fedora desktop daily using rsync with hard links. This has worked great for almost a year. I recently purchased a new computer and transferred over my data, and I would like to continue backing up this computer daily. However, due to the data transfer from the old laptop to the new laptop, the timestamps have obviously changed, which will cause my daily rsync backup to re-transfer all of the data. I thought that by adding the -c (checksum) switch to my rsync backup it would match files based on checksum, instead of timestamp and size, and only transfer those files that are different or not present. This appeared to work, but upon examining the new backup, hard links are not being created; it appears the files that should be hard-linked are simply being copied to the new backup directory from the previous backup directory on the backup server. This is very peculiar behavior to me, and I am having trouble figuring out why it is occurring. Checksums match for files that I think should be hard-linked. I have looked through the rsync man page and Googled around a bit, and I have been unable to find anything to help me better understand this behavior.
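
    For reference, a sketch of the usual hard-link rotation this describes (assuming --link-dest, which "rsync with hard links" normally implies; hosts, paths, and dates are illustrative):

        # nightly: files unchanged since yesterday become hard links, not copies
        TODAY=$(date +%F); YESTERDAY=$(date -d yesterday +%F)
        rsync -a --link-dest=/backups/$YESTERDAY/ laptop:/home/user/ /backups/$TODAY/

    The observed behaviour is consistent with how --link-dest works: a hard link shares one inode, timestamps included, so rsync declines to link a file whose mtime differs from the reference copy even when -c proves the content identical, and quietly makes a local copy instead. If that is the cause, the copies are a one-time cost - the next daily run can hard-link against the new snapshot's timestamps, and -c can be dropped again.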

  • Random server lag, no CPU/mem/pagefile usage

    - by Kev
    We have a fairly new server running Windows 2003 SP2, and the past few days we've noticed random slowdowns. When I'm logged into the server over remote desktop while this is happening, or if I'm physically sitting at the server logged in, suddenly everything becomes extremely laggy. Any UI element I try to interact with takes upwards of ten seconds to react, and then responds very slowly. Then a minute later everything is quite snappy again. During this, I have Task Manager minimized to the tray, and there's no CPU usage. I open it up right after this happens, and there's very little CPU usage on the graph, and no memory or pagefile usage above normal. (Normal being 1.5 GB free in the case of memory.) This is what I see logged into the server, and then users start calling saying things are slow, timing out, and failing--anything to do with our server. No events in the Event Viewer around the times this happens. The context I'm working in (last thing I clicked, etc.) seems different every time--different programs active, different combinations of programs open. Never anything particularly stressful (like adding an event entry to a Cobian Backup configuration, or editing text in TextPad, which has been exceptionally stable in my extensive usage of it.) I would've thought it was just the server, but a family member's home PC (entirely separate) running WinXPSP3 had the same thing happen to it last night a few times. Is this some new behaviour introduced by the latest Windows Updates? Either way, where do I even start to look when nothing seems to be chewing up resources?

  • Configure Samba server for a Unix group

    - by Bird Jaguar IV
    I'm trying to set up a Samba server with access for users in the Linux (RHEL 6) "wheel" group. I am basing smb.conf on the example here, where it goes through the [accounting] example. In my smb.conf I have [tmp] comment = temporary files path = /var/share valid users = @wheel read only = No create mask = 0664 directory mask = 02777 max connections = 0 (the rest of the output from $ testparm /etc/samba/smb.conf is here). And groups `whoami` returns user01 : wheel. When I use the following command from another machine (Mac OS) as the Linux user (user01): $ smbclient -L NETBIOSNAME/tmp it asks for a password; I hit return without a password and get: Enter user01's password: Anonymous login successful Domain=[DOMAIN] OS=[Unix] Server=[Samba 3.6.9-151.el6_4.1] Sharename Type Comment --------- ---- ------- tmp Disk temporary files IPC$ IPC IPC Service (Samba Server Version 3.6.9-151.el6_4.1) But when I try $ smbclient //NETBIOSNAME/tmp and enter the password I use for the Linux login, I get a bunch of stuff logged, including: check_sam_security: Couldn't find user 'user01' in passdb. ... session setup failed: NT_STATUS_LOGON_FAILURE (I can give more logging information if it would be helpful.) I can't find any reference in that resource to additional steps needed to give group users access. Should I be manually adding Samba users from the group somehow? Thank you
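
    On the last question: very likely yes - the "Couldn't find user 'user01' in passdb" log line points that way. With the default passdb backend, Samba keeps its own password database separate from /etc/passwd, so each Unix user who will authenticate (rather than browse anonymously) needs a Samba password entry. A minimal sketch on the server:

        # create a Samba password entry for the existing Unix account
        smbpasswd -a user01
        # confirm the entry landed in the passdb
        pdbedit -L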

  • Removing trailing slashes in WordPress blog hosted on IIS

    - by Zishan
    I have a WordPress blog hosted in my IIS virtual directory that has all URLs ending with a forward slash. For example: http://www.example.com/blog/ I have the following rules defined in my web.config:

        <rule name="wordpress" patternSyntax="Wildcard">
          <match url="*" />
          <conditions>
            <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
            <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
          </conditions>
          <action type="Rewrite" url="index.php" />
        </rule>
        <rule name="Redirect-domain-to-www" patternSyntax="Wildcard" stopProcessing="true">
          <match url="*" />
          <conditions>
            <add input="{HTTP_HOST}" pattern="example.com" />
          </conditions>
          <action type="Redirect" url="http://www.example.com/blog/{R:0}" />
        </rule>

    In addition, I tried adding the following rule for removing trailing slashes:

        <rule name="Remove trailing slash" stopProcessing="true">
          <match url="(.*)/$" />
          <conditions>
            <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
            <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
          </conditions>
          <action type="Redirect" redirectType="Permanent" url="{R:1}" />
        </rule>

    It seems that the last rule doesn't work at all. Anyone around here who has attempted to remove trailing slashes from WordPress blogs hosted on IIS?
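
    One detail worth checking (an educated guess from URL Rewrite's documented semantics, not a tested fix): rules run top to bottom, and a Rewrite action without stopProcessing="true" hands the rewritten URL to the rules below it. So by the time "Remove trailing slash" runs, the wordpress rule has already turned a slash-terminated URL into index.php, which has no trailing slash to match. Placing the slash rule first should avoid that; note also that WordPress's own canonical redirect re-adds slashes unless the permalink structure is changed to omit them:

        <rules>
          <!-- 1: strip the slash before anything rewrites the URL -->
          <rule name="Remove trailing slash" stopProcessing="true">
            <match url="(.*)/$" />
            <conditions>
              <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
              <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
            </conditions>
            <action type="Redirect" redirectType="Permanent" url="{R:1}" />
          </rule>
          <!-- 2: then the existing Redirect-domain-to-www and wordpress rules, unchanged -->
        </rules>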
