Search Results

Search found 13645 results on 546 pages for 'bugz r us'.

  • Trouble getting latest version of Git

    - by TheMethod
    I am using Ubuntu 10.04 LTS. I'm looking at using Git as source control for personal projects, with GitHub as a remote repository. I was having trouble pushing a commit to my remote GitHub repo, getting the following error message:

        The requested URL returned error: 403 while accessing https://github.com/Jstall/helloworld.git/info/refs

    When I did some digging I found that the problem could be that I don't have the latest version of Git. When I did a --version I found that I have version 1.7.0.4 locally. So I tried to update Git using:

        sudo apt-get install git

    but got the following error:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Package git is not available, but is referred to by another package.
        This may mean that the package is missing, has been obsoleted, or is
        only available from another source
        E: Package git has no installation candidate

    I've tried running sudo apt-get update and trying again, but it didn't seem to make a difference. I'm not sure if it's relevant, but I'm also getting a couple of 404s when I run update:

        Err http://wine.budgetdedicated.com edgy/main Packages  404 Not Found
        Fetched 4,117B in 0s (5,142B/s)
        W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/edgy/universe/binary-i386/Packages.gz  404 Not Found [IP: 91.189.91.15 80]
        W: Failed to fetch http://wine.budgetdedicated.com/apt/dists/edgy/main/binary-i386/Packages.gz  404 Not Found

    I'm not sure what to try next. Could anyone suggest a course of action to get this resolved? Any advice would be appreciated. Thanks much!
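
    Editor's sketch (not part of the original question): the 404s point at stale Ubuntu "edgy" entries in the APT sources, and on 10.04 the Git package is named git-core rather than git. Assuming that diagnosis, removing the dead entries and installing from the Git maintainers' PPA would look like:

        grep -rn edgy /etc/apt/sources.list /etc/apt/sources.list.d/   # locate the stale edgy lines, then delete them
        sudo apt-get install python-software-properties               # provides add-apt-repository on 10.04
        sudo add-apt-repository ppa:git-core/ppa
        sudo apt-get update
        sudo apt-get install git-core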

  • grep command does not match the complete pattern

    - by Sumit Vedi
    I am facing a problem while using the grep command in a shell script. I have one file (PCF_STARHUB_20130625_1) which contains the records below:

        SH_5.55916.00.00.100029_20130601_0001_NUC.csv.gz|438|3556691115
        SH_5.55916.00.00.100029_20130601_0001_Summary.csv.gz|275|3919504621
        SH_5.55916.00.00.100029_20130601_0001_UI.csv.gz|226|593316831
        SH_5.55916.00.00.100029_20130601_0001_US.csv.gz|349|1700116234
        SH_5.55916.00.00.100038_20130601_0001_NUC.csv.gz|368|3553014997
        SH_5.55916.00.00.100038_20130601_0001_Summary.csv.gz|276|2625719449
        SH_5.55916.00.00.100038_20130601_0001_UI.csv.gz|226|3825232121
        SH_5.55916.00.00.100038_20130601_0001_US.csv.gz|199|2099616349
        SH_5.75470.00.00.100015_20130601_0001_NUC.csv.gz|425|1627227450

    I have a pattern stored in one variable (INPUT_FILE_T) and want to search for it in the file (PCF_STARHUB_20130625_1). For that I have used the command below:

        INPUT_FILE_T="SH?*???????????????US.*"
        grep ${INPUT_FILE_T} PCF_STARHUB_20130625_1

    The output of the above command comes out as:

        PCF_STARHUB_20130625_1:SH_5.55916.00.00.100029_20130601_0001_US.csv.gz|349|1700116234

    I have two problems with this output: first, only one entry is shown (it should contain two), and second, the output contains the "PCF_STARHUB_20130625_1:" prefix, which should not appear. The output should look like this:

        SH_5.55916.00.00.100029_20130601_0001_US.csv.gz|349|1700116234
        SH_5.55916.00.00.100038_20130601_0001_US.csv.gz|199|2099616349

    If there is any technique besides grep, please let me know. Please help me with this issue.
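
    Editor's note (not part of the original post): the pattern above is shell glob syntax, and because the variable is unquoted, the shell expands it against matching filenames in the current directory before grep ever runs; the first matching filename then becomes the search pattern and the rest become extra input files, which would explain both symptoms (a single over-specific match, plus the filename prefix grep adds when given multiple files). Assuming the goal is the two *_US.* records, quoting the variable and using a real regex would be one fix:

        INPUT_FILE_T='^SH.*_US\.'          # a regex, not a glob
        grep "${INPUT_FILE_T}" PCF_STARHUB_20130625_1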

  • Website has become slower on a VPS, was much faster on a shared host. What's wrong?

    - by Arpit Tambi
    My shared host suspended my website citing system overload, so I moved my website to a VPS which has 4GB RAM. But for some reason the website has become very slow. This is the vmstat output:

        procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
         r  b   swpd    free  buff  cache   si   so    bi    bo   in   cs us sy id wa st
         1  0      0 3050500     0      0    0    0     0     1    0    0  0  0 100  0  0

    Here's the Apache Benchmark output for a STATIC html page, run on the server itself:

        Benchmarking www.ask-oracle.com (be patient)...
        apr_poll: The timeout specified has expired (70007)
        Total of 20 requests completed

    Update, server config: CentOS 5.6, 4 CPU cores, 4 GB RAM, LAMP stack with APC, WordPress, only one website. It takes almost double the time to load now; the same website was much faster on shared hosting. I know I need to tweak some settings but have no clue where to start. I have already tried to optimize Apache, MySQL, etc.

    Update 2: CPU usage is low; see the uptime output:

        11:09:02 up 7 days, 21:26, 1 user, load average: 0.09, 0.11, 0.09

    Update 3: When I load any webpage, the browser shows "Waiting" for a long time and then the page loads quickly. So I suspect the server can accept only a limited number of connections and holds extra connections in a waiting state. How can I check this?

    Update 4: Following is the output of netperf:

        TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost.localdomain (127.0.0.1) port 0 AF_INET
        Recv   Send    Send
        Socket Socket  Message  Elapsed
        Size   Size    Size     Time     Throughput
        bytes  bytes   bytes    secs.    10^6bits/sec
        87380  16384   16384    10.00    9615.40

    Here are the Apache MPM settings from httpd.conf; do they look okay?

        <IfModule worker.c>
            StartServers          5
            MaxClients          100
            MinSpareThreads      50
            MaxSpareThreads     250
            ThreadsPerChild     125
            MaxRequestsPerChild 10000
            ServerLimit         100
        </IfModule>
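
    Editor's sketch of first diagnostics (not from the original post): ab timing out on a static page while browsers sit in "Waiting" suggests connection handling rather than raw throughput, and worker-style settings in httpd.conf are silently ignored if the build actually runs the prefork MPM. A few checks, assuming stock CentOS paths:

        httpd -V | grep -i mpm                                    # which MPM is this build actually using?
        netstat -ant | awk '/:80 /{print $6}' | sort | uniq -c    # connection states on port 80
        tail -f /var/log/httpd/error_log                          # look for MaxClients / server reached limit warnings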

  • Unattended Kickstart Install

    - by Eric
    I've looked around quite a bit and have seen similar setups and questions, but none seem to work for me. I'm using the following command to create a custom ISO:

        /usr/bin/livecd-creator --config=/usr/share/livecd-tools/test.ks --fslabel=TestAppliance --cache=/var/cache/live

    This works great and creates the ISO with all of the packages and configs I want on it. My issue is that I want the install to be unattended. However, every time I boot the CD, it asks for all of the info such as keyboard, time zone, root password, etc. These are the basic settings I have in my kickstart script prior to the packages section:

        cdrom
        install
        autopart
        autostep
        xconfig --startxonboot
        rootpw testpassword
        lang en_US.UTF-8
        keyboard us
        timezone --utc America/New_York
        auth --useshadow --enablemd5
        selinux --disabled
        services --enabled=iptables,rsyslog,sshd,ntpd,NetworkManager,network --disabled=sendmail,cups,firstboot,ip6tables
        clearpart --all

    After looking around, I was told that I need to modify my isolinux.cfg file to use either "ks=http://X.X.X.X/location/to/test.ks" or "ks=cdrom:/test.ks". I've tried both methods and it still forces me to go through the install process. When I tail the Apache logs on the server, I see that the ISO never even tries to get the file. Below is the exact syntax I'm trying in my isolinux.cfg file:

        label http
          menu label HTTP
          kernel vmlinuz0
          append initrd=initrd0.img ks=http://192.168.56.101/files/test.ks ksdevice=eth0
        label localks
          menu label LocalKS
          kernel vmlinuz0
          append initrd=initrd0.img ks=cdrom:/test.ks
        label install0
          menu label Install
          kernel vmlinuz0
          append initrd=initrd0.img root=live:CDLABEL=PerimeterAppliance rootfstype=auto ro liveimg liveinst noswap rd_NO_LUKS rd_NO_MD rd_NO_DM
          menu default
        EOF_boot_menu

    The first two give me a "dracut: fatal: no or empty root=" error until I give them a root= option, and then they just skip the kickstart completely. The last one is my default option that works fine, but it requires a lot of user input. Any help would be greatly appreciated.
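
    Editor's sketch (not from the original question): the dracut error suggests the ks= entries are simply missing the live-image root= arguments that the working install0 entry has, so one thing to try is a single stanza combining both sets of options. Note the CDLABEL would have to match the --fslabel passed to livecd-creator:

        label autoks
          menu label AutoKS
          kernel vmlinuz0
          append initrd=initrd0.img root=live:CDLABEL=TestAppliance rootfstype=auto ro liveimg liveinst noswap ks=cdrom:/test.ks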

  • Why isn't 'Low Fragmentation Heap' LFH enabled by default on Windows Server 2003?

    - by James Wiseman
    I've been investigating an issue with a production Classic ASP website running on IIS6 which seems indicative of memory fragmentation. One suggestion of how to ameliorate this came from Stack Overflow: How can I find why some classic asp pages randomly take a real long time to execute?. It suggested flipping a setting in the site's global.asa file to 'turn on' the Low Fragmentation Heap (LFH). The following code (with a registered version of the accompanying DLL) did the trick:

        Set LFHObj = CreateObject("TURNONLFH.ObjTurnOnLFH")
        LFHObj.TurnOnLFH()
        application("TurnOnLFHResult") = CStr(LFHObj.TurnOnLFHResult)

    (Really, the code isn't that important to the question.) An author of a linked post reported a seemingly magic resolution to this issue, and, reading around a little more, I discovered that this setting is enabled by default on Windows Server 2008. So, naturally, this left me a little concerned: why is this setting not enabled by default on 2003, or, if it works in 2008, why has Microsoft not issued a patch to enable it by default on 2003? I suspect the answer to both is the same (if there is one). Obviously, we're testing it in a non-production environment, and doing an array of metrics and comparisons to judge whether it does help us. But aside from this, I'm really just trying to understand if there's any technical reason why we should do this, or if there are any gotchas that we need to be aware of.

  • PTR record not valid for all domains

    - by charnley
    We have an issue sending emails to certain domains, namely Time Warner and Cox. Last week we decommissioned our Exchange 2003 server, and now our Exchange 2010 server is doing all of the transport for our domain. We run our own authoritative name servers, so we are in charge of the DNS and have modified our PTR record to reflect the new server. All mail flow is working except to these two domains. When I telnet on port 25 to the mail servers for Cox and Time Warner, I receive errors. For Cox the error is:

        554... rejected - no rDNS

    And when I telnet to port 25 to the Time Warner mail server we get this:

        554 5.7.1 - Connection refused. IP name lookup failed for x.x.x.x

    I have run the outbound SMTP test on the Microsoft Remote Connectivity Analyzer and get 100% successful results. MXToolbox comes up with all successful SMTP tests as well, showing a correct reverse banner check and no blacklisting. DNSQueries.com also shows a valid reverse DNS entry for us. Outbound emails to these two domains continue to sit in the queue. Any ideas or advice would be greatly appreciated. Thanks!
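
    Editor's suggestion (not from the original post): large cable ISPs typically require forward-confirmed reverse DNS, i.e. the PTR must resolve to a name whose A record points back at the sending IP, and a recently changed PTR can also linger in remote caches. Assuming the server sends from x.x.x.x, this can be checked against a public resolver rather than your own:

        dig +short -x x.x.x.x @8.8.8.8                            # the PTR the rest of the world sees
        dig +short A $(dig +short -x x.x.x.x @8.8.8.8) @8.8.8.8   # does that name point back to x.x.x.x?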

  • Good visuals supporting adopting Macintosh in a Windows company

    - by jdmuys
    I work in a Windows-only software service company, which just put up an internal contest for innovative ideas for the company. The idea I submitted is to let employees use a Mac instead of the mandatory PC if they wish to. My idea has been selected (among a few others) to reach the next stage of the contest. One of the items requested for the next stage is ONE visual that best illustrates the idea. While my pitch is rather good (I think), I have a hard time coming up with ONE visual that would be suggestive enough and not too fanboy-ish, or too restricted. That's why I am requesting suggestions. For reference, some of the points I intend to develop are (not in order):

    - de facto safety (little or no malware)
    - Apple as a company reached its leading position through innovation
    - (bio)diversity is a source of value for a service company, expanding its reach
    - it makes financial sense
    - the Mac is the most compatible machine, making it a lot easier to test our software (especially web sites)
    - some OS X technologies can be valuable to a software service company (e.g. AppleScript)
    - some Apple tools can help us improve (e.g. Keynote)
    - it's good citizenship for our company, as Apple is now best in class according to Greenpeace

    I realize this question may be off topic here; I'd be happy to have suggestions on where to post it. Please do not argue why OS X might be better or worse than Windows; my question is very narrow. Thanks.

  • Exclude list of specific files in wget

    - by nanker
    I am trying to download a lot of pages from a website on dial-up, and it can be brutally slow. I have almost got the perfect wget command, but because I'm downloading pages from the same site, wget wastes time downloading the same standard images for each page. If I know the names of the default page images, is there any way to have wget ignore them, and thus avoid downloading them for each and every page? Here is an example of one of the wget commands that my shell script generates into another shell script to download all of the pages:

        mkdir candy-canes-on-the-flannel-board-in-preschool
        cd candy-canes-on-the-flannel-board-in-preschool
        wget -p -nd -A jpg,html -k http://www.teachpreschool.org/2011/12/candy-canes-on-the-flannel-board-in-preschool/
        wget -c --random-wait --timeout=30 --user-agent="Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.3) Gecko/2008092416 Firefox/3.0.3" http://www.teachpreschool.org/2011/12/candy-canes-on-the-flannel-board-in-preschool/ -O "candy-canes-on-the-flannel-board-in-preschool"
        rm Baby-and-Toddler.jpg Childrens-Books.jpg Creative-Art.jpg Felt-Fun.jpg Happy_Rainbow-e1338766526528.jpg index.html Language-and-Literacy.jpg Light-table-Button.jpg Math.jpg Outdoor-Play.jpg outer-jacket1-300x153.jpg preschoolspot-button-small.jpg robots.txt Science-and-Nature.jpg Signature-2.jpg Story-Telling.jpg Tags-on-Preschool.jpg Teaching-Two-and-Three-Year-olds.jpg
        cd ../

    Now, I realize the script is not as savvy as it could be, but it is doing what I need at the moment, except that, as you can see from the rm command, I would like to prevent wget from downloading those files in the first place if possible. I almost forgot to mention: there are two wget commands because the first one downloads the page as index.html, which for some reason does not open in my browser. However, when I open it and look at it in vim, all of the page's content is there, so I am not sure why it does not open. If I just issue the second wget command on its own, then that page (really the same file with an alternate name) opens up fine. Fixing that would also help to streamline the process.
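
    Editor's sketch (not from the original post): wget's -R/--reject option takes a comma-separated list of file names, suffixes, or patterns, so the known boilerplate images can be excluded up front instead of deleted afterwards (list abbreviated here):

        wget -p -nd -A jpg,html -k \
             -R "Baby-and-Toddler.jpg,Childrens-Books.jpg,Creative-Art.jpg,Felt-Fun.jpg,Math.jpg" \
             http://www.teachpreschool.org/2011/12/candy-canes-on-the-flannel-board-in-preschool/

    One hedged caveat: some wget versions still fetch an HTML file that matches the reject list and delete it after scanning it for links, but plain images on the reject list are simply skipped.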

  • Private IP getting routed over Internet

    - by WernerCD
    We are setting up an internal program, on an internal server, that uses the private 172.30.x.x subnet. When we ping the address 172.30.138.2, it routes across the internet:

        C:\>tracert 172.30.138.2

        Tracing route to 172.30.138.2 over a maximum of 30 hops
          1     6 ms     1 ms     1 ms  xxxx.xxxxxxxxxxxxxxx.org [192.168.28.1]
          2     *        *        *     Request timed out.
          3    12 ms    13 ms     9 ms  xxxxxxxxxxx.xxxxxx.xx.xxx.xxxxxxx.net [68.85.xx.xx]
          4    15 ms    11 ms    55 ms  te-7-3-ar01.salisbury.md.bad.comcast.net [68.87.xx.xx]
          5    13 ms    14 ms    18 ms  xe-11-0-3-0-ar04.capitolhghts.md.bad.comcast.net [68.85.xx.xx]
          6    19 ms    18 ms    14 ms  te-1-0-0-4-cr01.denver.co.ibone.comcast.net [68.86.xx.xx]
          7    28 ms    30 ms    30 ms  pos-4-12-0-0-cr01.atlanta.ga.ibone.comcast.net [68.86.xx.xx]
          8    30 ms    43 ms    30 ms  68.86.xx.xx
          9    30 ms    29 ms    31 ms  172.30.138.2

        Trace complete.

    This has a number of us confused. If we had a VPN set up, it wouldn't show up as being routed across the internet, and if it hit an internet server, private IPs (such as 192.168.x.x) shouldn't get routed. What would let a private IP address get routed across these servers? Would the fact that it's all Comcast mean that they have their routers set up wrong?
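
    Editor's note (not from the original post): nothing in IP itself prevents RFC 1918 addresses from being forwarded; the packets get through as long as every hop happens to hold a route for them, which a single provider's internal network can. A first check is whether your own gateway knows a route for 172.30.0.0/16 or is just handing the packet to its default route:

        C:\>route print 172.30.*

    If only the default 0.0.0.0 entry matches, the ping leaves your LAN, and the trace then shows Comcast's internal routers carrying the private range between their own hops rather than filtering it.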

  • How to debug silent hang on shutdown of Solaris 10?

    - by jblaine
    We're experiencing a mysterious hang on shutdown of a newly-imaged Oracle/Sun Solaris 10 SPARC box. It is repeatable (in the same spot, from what we can tell). We let it try to work itself out multiple times for 5-10 minutes and it never progressed. I've never seen this happen before. The last thing displayed on the console is that syslogd was sent signal 15. Prior to us disabling snmpdx on the box, the last thing on the console was that snmpdx was sent signal 15 (after syslogd was sent signal 15). While very rare to find, in Solaris days past I'd have a better idea from experience where the problem might be, and could then narrow things down further with silly (but effective) debugging echo statements in /etc/*.d scripts. With SMF in the picture, I'm not really quite sure where to start. We forced a crash dump via sync at the {ok} prompt for later analysis, and then let the box come up, because it's a production server and our scheduled outage window was closing. /var/adm/messages shows nothing of use. How would you debug this situation? mdb ::ps on the savecore shows the following processes were running at hang time (afsd is the OpenAFS client and that many are expected):

        > ::ps
        S    PID   PPID   PGID    SID    UID      FLAGS             ADDR NAME
        R      0      0      0      0      0 0x00000001 00000000018387c0 sched
        R    108      0      0      0      0 0x00020001 00000600110fe010 zpool-silmaril-p
        R      3      0      0      0      0 0x00020001 0000060010b29848 fsflush
        R      2      0      0      0      0 0x00020001 0000060010b2a468 pageout
        R      1      0      0      0      0 0x4a024000 0000060010b2b088 init
        R   1327      1   1327    329      0 0x4a024002 00000600176ab0c0 reboot
        R    747      1      7      7      0 0x42020001 0000060017f9d0e0 afsd
        R    749      1      7      7      0 0x42020001 00000600180104d0 afsd
        R    752      1      7      7      0 0x42020001 0000060017cb44b8 afsd
        R    754      1      7      7      0 0x42020001 0000060017fc8068 afsd
        R    756      1      7      7      0 0x42020001 0000060017fcb0e8 afsd
        R    760      1      7      7      0 0x42020001 00000600177f4048 afsd
        R    762      1      7      7      0 0x42020001 000006001800f8b0 afsd
        R    764      1      7      7      0 0x42020001 000006001800ec90 afsd
        R    378      1    378    378      0 0x42020000 0000060013aee480 inetd
        R      7      1      7      7      0 0x42020000 0000060010b28008 svc.startd
        R    329      7    329    329      0 0x4a024000 00000600110ff850 sh
        Z    317      7    317    317      0 0x4a014002 0000060013b3a490 sac
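
    Editor's sketch of a starting point (not from the original post): the SMF-era equivalents of those debugging echo statements are svcs and the per-service logs under /var/svc/log, plus re-querying the saved dump at leisure:

        svcs -x                                   # services stuck in a transitional state before shutdown
        tail /var/svc/log/*startd*.log            # svc.startd's record of the stop sequence
        cd /var/crash/`hostname` && echo "::ps" | mdb unix.0 vmcore.0   # rerun ::ps against the savecore files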

  • Trying to connect simple VB6 ADO to SQL Server 2008

    - by Henry
    We have a VB6 app that, for current purposes, uses very basic ADO: Dim a new ADODB.Recordset, set some basic properties like CursorLocation and LockType, set a connection string and a .Source like "Select * from CustomerMaster", and .Open; nothing fancy here! Yet, on a new SBS installation with SQL Server 2008 across two servers (one for apps, the other dedicated to SQL!), it dies/hangs/crashes if you try to run such a query from anything but the SQL Server box itself. Initially we were using the SQLOLEDB.1 driver, which would crash/hang the entire SQL Server after about four such queries (I built a simple six-line app just for this purpose). Then we switched to the native SQL driver, which did allow us unlimited, happy queries, until you did the first change/update: THEN it would corrupt the SQL Server if you exited and tried to go back in. All this 'corruption' is happening from the app server of the SBS pair, and I presume that the app server (also installed in tandem with the SQL Server this week) has the latest MDACs, etc. And running it from a lowly XP workstation is (obviously) no better. ANY ideas??? -Henry

  • How to detect/list rogue computers connected to a WIFI network without access to the Wifi Router interface?

    - by JJarava
    This is what I believe to be an interesting challenge :) A relative (who lives a bit too far away to go there in person) is complaining that her WIFI/Internet network performance has gone down abysmally lately. She'd like to know if some of the neighbors are using her wifi network to access the internet, but she's not too technically savvy. I know that the best way to prevent issues would be to change the router password, but it's a bit of a PITA having to re-configure all the wifi devices... and if the uninvited guest broke the password once, they can do it again... Her wifi router/internet connection is provided by the telco and remotely managed, so she can log on to her telco account's page and remotely change the router's wifi password, but she doesn't have access to the router status page/config/etc. unless she opts out of the telco's remote support and maintenance service. So, how could she check if there are guests on the wifi, with these restrictions and in the most point-and-click way? In this case I'd probably use nmap to look for other devices on the network, but I'm not sure if that's the easiest way to do it. I'm not a wifi expert, so I don't know if there are any wifi-scanning utils that can tell us who's talking to the router... Lastly, she's a Windows user, and I guess that'll influence the choice of tools available. Any suggestions more than welcome. Regards!
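
    Editor's sketch (not from the original post): nmap does run on Windows (with the Zenmap GUI for the point-and-click part), and a host-discovery sweep of the router's subnet from her PC lists every live device; anything beyond her own gear is a guest. Assuming the common 192.168.1.0/24 home range:

        nmap -sn 192.168.1.0/24      # host discovery only (-sP in older versions); shows IP plus MAC/vendor when run as administrator

    Comparing the MAC vendor strings against her known devices (laptop, phone, set-top box) is usually the quickest way to spot an intruder.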

  • Squid3 not caching simple request and response

    - by Nick Spacek
    Hi folks, I've pared down my squid.conf to try to figure this out:

        http_port 80 accel defaultsite=host.to.cache
        cache_peer ip.to.cache parent 80 0 no-query originserver
        acl our_sites dstdomain host.to.cache
        http_access allow our_sites
        refresh_pattern . 1 20% 4320

    Requests are being proxied correctly, so that's a start. Here's a request:

        GET http://host.to.cache/path?some_param=true
        Accept: */*
        Accept-Charset: ISO-8859-1,utf-8
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-US,en
        Connection: keep-alive
        Host: host.to.cache
        User-Agent: myuseragent

    And the response:

        Connection: keep-alive
        Content-Length: 585
        Content-Type: application/xml
        Date: Thu, 06 Jan 2011 18:33:11 GMT
        Via: 1.0 localhost (squid/3.0.STABLE19)
        X-Cache: MISS from localhost
        X-Cache-Lookup: MISS from localhost:80

    The response has no caching-related headers, but I thought that refresh_pattern would set a default behavior for responses without caching-related headers. For my test, I wanted to cache everything for one minute at minimum. Am I missing something obvious? I did take a peek at this question: "Squid isn't caching", and ran through the page at http://www.mnot.net/cache_docs/ briefly, but didn't see anything relevant (not to say that there isn't; I could have missed something). Thanks for any help.
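
    Editor's guess (not from the original post, and hedged): the test URL carries a query string, and stock Squid 3.0 configurations ship a refresh_pattern rule that marks '?' URLs uncacheable; refresh_pattern rules are consulted top-down with the first match winning, so if that rule survives anywhere ahead of the catch-all, query-string URLs will always MISS. A sketch of the fix under that assumption:

        # stock rule that forces query-string URLs to MISS:
        #     refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
        # replace it with a nonzero minimum, kept ahead of the catch-all:
        refresh_pattern -i (/cgi-bin/|\?) 1 20% 4320
        refresh_pattern . 1 20% 4320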

  • Juniper’s Network Connect ncsvc on Linux: “host checker failed, error 10”

    - by hfs
    I'm trying to log in to a Juniper VPN with Network Connect from a headless Linux client. I followed the instructions and used the script from http://mad-scientist.us/juniper.html. When running the script with the --nogui switch, the command that finally gets executed is:

        $HOME/.juniper_networks/network_connect/ncsvc -h HOST -u USER -r REALM -f $HOME/.vpn.default.crt

    I get asked for the password, a line "Connecting to..." is printed, but then the program silently stops. When adding -L 5 (most verbose logging) to the command line, these are the last messages printed to the log:

        dsclient.info state: kStateCacheCleaner (dsclient.cpp:280)
        dsclient.info --> POST /dana-na/cc/ccupdate.cgi (authenticate.cpp:162)
        http_connection.para Entering state_start_connection (http_connection.cpp:282)
        http_connection.para Entering state_continue_connection (http_connection.cpp:299)
        http_connection.para Entering state_ssl_connect (http_connection.cpp:468)
        dsssl.para SSL connect ssl=0x833e568/sd=4 connection using cipher RC4-MD5 (DSSSLSock.cpp:656)
        http_connection.para Returning DSHTTP_COMPLETE from state_ssl_connect (http_connection.cpp:476)
        DSHttp.debug state_reading_response_body - copying 0 buffered bytes (http_requester.cpp:800)
        DSHttp.debug state_reading_response_body - recv'd 0 bytes data (http_requester.cpp:833)
        dsclient.info <-- 200 (authenticate.cpp:194)
        dsclient.error state host checker failed, error 10 (dsclient.cpp:282)
        ncapp.error Failed to authenticate with IVE. Error 10 (ncsvc.cpp:197)
        dsncuiapi.para DsNcUiApi::~DsNcUiApi (dsncuiapi.cpp:72)

    What does "host checker failed" mean? How can I find out what it tried to check and what failed? The Host Checker Configuration Guide mentions that a $HOME/.juniper_networks/tncc.jar gets installed on Linux, but my installation contains no such file. From that I concluded that Host Checker is disabled for my VPN on Linux? Are the POST to /dana-na/cc/ccupdate.cgi and "host checker failed" connected, or independent? By running the connection over an SSL proxy I found out that the POST data is status=NOTOK. (Funny side note: the client of the oh-so-secure VPN does not validate the server's SSL certificate, so it is wide open to MITM attacks...) So it seems that it's the client that closes the connection, and not the server.

  • Comprehensive solution for managing patches, event viewing, change management, inventory, etc

    - by Holocryptic
    I'm looking for a solution that incorporates most or all of the following:

    - patch management
    - server event viewing/tracking
    - AD change management
    - ticketing and internal/external KB
    - remote access (ability to shadow user sessions or create new ones)
    - imaging
    - inventory

    Our environment contains Windows servers and ESXi hosts (we're not completely virtual, but we're moving in that direction), plus various Cisco and Linksys switches and firewalls. This is a tall order, and I don't know if it can be done on a reasonable budget. I've looked and found some questions on SF that deal with parts of this:

    - http://serverfault.com/questions/72015/active-directory-management-tools-for-medium-sized-forest-less-than-1000-users
    - http://serverfault.com/questions/4021/are-there-any-tools-to-do-change-management-with-active-directory-group-policy
    - http://serverfault.com/questions/21752/what-is-a-good-patch-update-management-server

    What I'm ideally looking for is a reasonably cheap solution that integrates these features into a central interface. We're a non-profit, so money is a limiting factor (the cheaper, the better, with a max of $15k). What we are trying to avoid is having to deal with multiple vendors, while maintaining scalability (we're creating more sites that we'll have to manage). Is this possible, or will we have to cobble something together to make it work for us?

  • Generating new SID for Windows 7 cloned partition in Linux?

    - by Jack
    So I've read that the proper way to clone a Windows 7 partition is to run Sysprep after the clone is complete. For MANY reasons, this is not possible the way we are cloning these drives (long story short: the drive should be fully up and running after we clone it, with all the settings already there and requiring no user intervention; and no, not even an answer file would work, because the way we customize all the Win7 settings is complex and we do not want the user touching them). I understand Microsoft will not support Windows 7 clones that have not been sysprepped, and that is fine for us. Acronis recovery tools get around this by ticking an option called "Create new NT signature", which resets the SID and GUID on any restore. Symantec has a tool called Ghostwalker which does the same thing. However, we are looking for a way to do this in Linux, because we want to use open-source tools to do the imaging (fsarchiver, partclone, etc.; basically the same tools Clonezilla uses internally to clone NTFS partitions). The question is: if we clone using these tools in Linux, how would we generate a new SID afterwards (without the use of Sysprep)? Is there any way to do it within a Linux environment? The whole imaging process is automated, so if it is a simple command that I can just throw into my shell script, that would be even better. Of course, it would be nice to know if this is even possible. Any ideas? EDIT: Forgot to mention that the target machines we are restoring the image onto are EXACTLY the same.
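
    Editor's note (not from the original question, and hedged): the "NT signature" those tools reset is the 4-byte disk signature stored in the MBR at offset 440 (0x1B8), which is distinct from the machine SID and can be rewritten from Linux. A sketch, assuming the restored disk is /dev/sda:

        # show the current disk signature (4 bytes at offset 440)
        dd if=/dev/sda bs=1 skip=440 count=4 2>/dev/null | xxd
        # stamp a new random signature so Windows treats the disk as distinct
        dd if=/dev/urandom of=/dev/sda bs=1 seek=440 count=4 conv=notrunc

    Rewriting the machine SID itself without Sysprep is another matter; there is no Microsoft-supported way to do that, from Linux or otherwise.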

  • Is there a way in Windows 7 to disable "journaling"?

    - by Psycogeek
    C:\$Extend\$UsnJrnl:$J:$DATA

    Here is a picture, finally. The large strip in the center of the top band is the largest chunk; the grey areas elsewhere are the various clusters that go with it. On the right, the big long grey line is $LogFile (not paging), and it is 63 MB. Paging (500 MB) is the dark cyan chunk, next to the yellow MFT-reserved area in the inner rings. The disk was defragged so the pieces could be seen more easily. Not all clusters of this type of file are tagged, but the idea is there. The disk uses 4k clusters and is now about 12 GB in size. Each little block in the picture is 0.81 MB and represents 207 clusters. The dark green section is mostly the whole WinSxS pile; also interesting, when they keep telling us it doesn't take much disk space.

    Wikipedia suggests that in previous NT systems, USN journaling would be turned on when enabled (which assumes it could also be turned off?). What aspect, service, or program is putting that stuff all over the disk in clusters tagged $Jrnl$, even if it is not actual USN journaling? Is it possible in a Windows 7 system to completely disable the journaling, and what would be the ramifications of doing so? On a Windows XP NTFS system I do not recall seeing this quantity of disk clusters used by these $Jrnl$ names, so I do not recall this being necessary in this quantity for the NTFS file system itself? I understand that it would not be there if it did not have a useful function :-) Information about how wonderful it is, is fine, if that information helps track down what parts of the system create and use it. The "Change Journals" documentation states that change journals are also needed to recover file system indexing. Hmm, that might explain some of them, or why it was left on the disk. A crash while background indexing?
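
    Editor's addition (not in the original post): the change journal can be inspected and removed with the built-in fsutil tool from an elevated prompt:

        REM show the current journal's size and allocation delta
        fsutil usn queryjournal C:
        REM delete the change journal on C:
        fsutil usn deletejournal /D C:

    Hedged ramifications: Windows Search indexing, backup, and replication software rely on the journal, and NTFS (or those services) will typically just recreate it later, so the space reclaimed tends to be temporary.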

  • SVN hangs on commit - any suggestions for troubleshooting?

    - by Richard Beier
    We're having a problem with SVN: Subversion clients such as TortoiseSVN hang when we commit any more than a few files at a time to our server. Everything appears to actually be committed successfully to the repository, but the client hangs after all the data has been transmitted. We're using version 1.4.4 of the SVN server, and we use the svn:// protocol rather than http to connect. We've reproduced this problem with several clients: TortoiseSVN (1.6.10), AnkhSVN (2.1), and the Slik command-line client (1.6.12). This is happening for everyone on the team, though some people seem to be more affected than others. If someone commits only a few files, it often works, but with more than half a dozen files it usually hangs. Does anyone have troubleshooting suggestions? This has been happening sporadically for a while, but it's become pretty consistent lately. We've been working around the issue by killing the hung SVN client, doing "svn cleanup", and then doing "svn up", but sometimes that causes tree conflicts. Another workaround is to blow away the workspace and check it out again after every commit, but of course that's pretty annoying. Are there any diagnostics that could help us troubleshoot this? We're considering upgrading to an SVN 1.6 server and installing the server on a new machine, but we're wondering if there's an easier solution. Thanks for your help, Richard
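
    Editor's diagnostic sketch (not part of the original question): since the data clearly arrives and only the final step hangs, a packet capture on the svnserve port would show which side goes quiet, and post-commit hooks are a classic culprit because the client waits for them to finish ("/path/to/repo" is a placeholder):

        # on the server, during a hanging commit (svnserve defaults to port 3690)
        tcpdump -i any -w svn-hang.pcap port 3690
        # check whether a post-commit hook is installed and slow
        ls -l /path/to/repo/hooks/post-commit

    If a post-commit hook (mail notification, backup, CI trigger) blocks, the commit succeeds but the client appears hung, which matches the symptom exactly.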

  • Windows authentication to SQL Server via IIS and PHP

    - by Jeff
    We're running a PHP 5.4 application on Server 2008 R2. We would like to connect to a SQL Server 2008 database, on a separate server, using Windows authentication (it must be Windows authentication; the DB admins won't let us connect any other way). I have downloaded the SQL Server drivers for PHP and installed them. IIS is configured for Windows authentication, and anonymous authentication has been disabled. $_SERVER['AUTH_USER'] reports our currently logged-on Windows account. In php.ini we have set fastcgi.impersonate = 1. We set up a connection using the following code from Microsoft:

        $serverName = "sqlserver\sqlserver";
        $connectionInfo = array( "Database"=>"some_db");
        /* Connect using Windows Authentication. */
        $conn = sqlsrv_connect( $serverName, $connectionInfo);
        if( $conn === false ) {
            echo "Unable to connect.</br>";
            die( print_r( sqlsrv_errors(), true));
        }

    We are presented with the following error message:

        Unable to connect.
        Array ( [0] => Array (
            [0] => 28000 [SQLSTATE] => 28000
            [1] => 18456 [code] => 18456
            [2] => [Microsoft][SQL Server Native Client 11.0][SQL Server]Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'.
            [message] => [Microsoft][SQL Server Native Client 11.0][SQL Server]Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'. ) )

    Is it possible to connect to SQL Server 2008 via PHP using Windows authentication? Are there any additional required settings we need to make on IIS, SQL Server, or any other component (like a domain controller)?
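
    Editor's pointer (not from the original post): 'NT AUTHORITY\ANONYMOUS LOGON' failing on a second machine is the classic Kerberos "double hop" symptom; the web server can impersonate the browser user locally but cannot forward those credentials to SQL Server unless delegation is configured. Verifying SPNs is the usual first step (account names below are placeholders):

        REM SPNs registered for the web server's machine account
        setspn -L WEBSRV$
        REM the SQL Server service account should hold MSSQLSvc/sqlhost:1433
        setspn -L sqlsvcaccount

    Beyond that, the web server (or its app-pool account) generally needs to be trusted for delegation to the MSSQLSvc service in Active Directory.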

  • How to connect a Win XP PC and Win 7 PC with a Linksys WRT54G Router to a PPPOE connection ?

    - by ChristianM
    I have two PCs and a WRT54G router. My provider uses a PPPoE connection (username and password). I can connect the Windows 7 PC easily, whether I choose Auto or PPPoE in the router's configuration; maybe because this one had the connection in the first place, with the username and password set up. But I can't connect the Windows XP PC. Actually it does connect in a way, but it receives and sends just a few packets. I can see in My Network Places how many packets the gateway handles, and it's a big difference. And it won't open any page. How should I set up the router so that it gives enough bandwidth to the Windows XP PC?

    EDIT: I've noticed another problem that I forgot to mention: the router itself can't connect with the PPPoE settings. I don't know why, but with the Auto DHCP setting, and my Windows 7 PC connected with its own PPPoE settings, it works.

    EDIT 2: I've installed Win 7 on the other PC so I won't have any compatibility problems between them. But I still can't connect the router to the internet with the PPPoE settings. It looks like this: http://img641.imageshack.us/img641/9780/capturefba.png If you need any other screens or info, please tell me.

  • Outlook VBA script - find and replace text with image

    - by user2530616
    I have an e-commerce store. When I get a sale, I receive an order confirmation email which contains the name of the product sold. When the email comes through, I would like to run a script that replaces the product name, e.g. "red widget", with a picture of that product. Is that possible? I have found similar code that replaces text (a set of numbers, in this case) with a link, but I need it to replace the text with a picture instead.

        Option Explicit
        Sub InsertHyperLink(MyMail As MailItem)
            Dim body As String, re As Object, match As Variant
            body = MyMail.body
            Set re = CreateObject("vbscript.regexp")
            re.Pattern = "#[0-9][0-9][0-9][0-9][0-9][0-9]"
            For Each match In re.Execute(body)
                body = Replace(body, match.Value, "http://example.com/bug.html?id=" & Right(match.Value, 6), 1, -1, vbTextCompare)
            Next
            MyMail.body = body
            MyMail.Save
        End Sub

    Example mail:

        Order Confirmation
        Thanks for shopping with us today!
        ------------------------------------------------------
        Order Number: 2209
        Date Ordered: Friday 28 June, 2013
        Products
        ------------------------------------------------------
        1 x red widget = $5.00
        ------------------------------------------------------
        Total: $0.00
        Delivery Address xxx

    Search text: "red widget". Replacement picture: redwidget.jpg
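
    Editor's sketch (not tested against the original setup): inline images in Outlook require an HTML body plus an attachment whose Content-ID the <img> tag references; PropertyAccessor needs Outlook 2007 or later, and the image path and product name below are assumptions:

        Sub InsertProductImage(MyMail As MailItem)
            ' attach the product image and give it a Content-ID we can reference
            Dim att As Attachment
            Set att = MyMail.Attachments.Add("C:\ProductImages\redwidget.jpg")
            att.PropertyAccessor.SetProperty _
                "http://schemas.microsoft.com/mapi/proptag/0x3712001F", "redwidget"
            ' swap the product name for an inline <img> in the HTML body
            MyMail.BodyFormat = olFormatHTML
            MyMail.HTMLBody = Replace(MyMail.HTMLBody, "red widget", _
                "<img src=""cid:redwidget"">", 1, -1, vbTextCompare)
            MyMail.Save
        End Sub

    A lookup table mapping product names to image files would generalize this beyond one product.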

  • Can't authorize a server for Amazon RDS

    - by Parris
    We are attempting to slowly migrate a website over to AWS, among other things. We decided the first thing to move was the database. We have a dedicated server with a different hosting provider, and we only have one IP. I am having trouble authorizing that IP so the old server can connect to RDS. It simply hangs for a while when using the mysql CLI, then responds:

        ERROR 2003 (HY000): Can't connect to MySQL server on 'db.address.us-east-1.rds.amazonaws.com' (110)

    It did work from my laptop, though. I am not quite sure what is wrong; I have a feeling I don't quite understand CIDR notation. I simply took the IP address and tacked /32 onto the end. Then I gleaned some information that it also has to do with the subnet mask? ifconfig reports 255.255.255.0. I found a calculator, and the IP changed a bit and had /24 at the end; that still didn't work. One other note: perhaps I don't know enough about the differences between OSes. The hosting provider is using CentOS, while our development machines are all Ubuntu. Any insight would be extremely helpful! THANKS :)
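
    Editor's hint (not from the original post): a /32 is correct for a single host, but the address that matters is the server's public egress IP, which on a NATed or multi-homed box is not always the one ifconfig shows. Checking from the dedicated server itself:

        curl -s http://checkip.amazonaws.com     # the source address RDS actually sees
        # then authorize exactly that address in the DB security group, e.g. 203.0.113.45/32 (example address)

    The local LAN's subnet mask is irrelevant to the RDS security group; it only wants the public source address in CIDR form.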

  • VMWare Setup with 2 Servers and a DAS (DELL MD3220)

    - by Kumala
    I am planning a VMware-based setup consisting of two VMware servers (2 CPUs, 256 GB memory) and a DAS (Dell MD3220 with 24 x 900 GB disks). Half of the virtual machines will run MS SQL databases (application, SharePoint, BI); the other half will be file services and IIS. To expand the capacity of the storage, we'll be adding an MD1220 enclosure with another 24 x 900 GB disks to the MD3220. Both DAS units will have two controllers. Our current measured load is 1,000 IOPS average, 7,000 IOPS peak (the peaks happen maybe twice per hour).

    We are in the planning phase now and are looking at the proper setup of the disks. The intention is to set up one of the DAS units with RAID 10 only and the other with RAID 5. That will allow us to put each application on the DAS that best supports its performance needs. The question is how best to partition the two DAS units to get the best possible IOPS/MBps, given that each DAS will also need two hot spares.

    For the RAID 5 setup: generally speaking, would it be better to have one single disk group across all 22 disks (24 minus 2 hot spares) with both controllers assigned to that one disk group, or to have two disk groups of 11 disks each, one assigned to each controller?

    Same question for the RAID 10 setup. The plan is: 2 disks for logs (RAID 1), 2 hot spares, and 20 disks for RAID 10.

    - Option 1: 5 groups of 4 disks (RAID 10), with two groups assigned to one controller and three groups to the other.
    - Option 2: one large RAID 10 across all the disks, with both controllers assigned to the same group.

    I assume there is no right or wrong here and that it all depends very much on the specific application behaviour, so I am looking for general ideas on the pros and cons of the different options. If there are other meaningful options, feel free to propose them.
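
    Back-of-the-envelope math from the editor (assumptions: 10k-rpm SAS at roughly 140 IOPS per disk and a 70/30 read/write mix; neither figure is from the original post):

        raw: 20 data disks x 140 IOPS                    = 2,800 IOPS
        RAID 10, write penalty 2: 2800 / (0.7 + 0.3*2)   ≈ 2,150 effective IOPS
        RAID 5,  write penalty 4: 2800 / (0.7 + 0.3*4)   ≈ 1,470 effective IOPS

    On those assumptions either layout covers the 1,000 IOPS average, but the 7,000 IOPS peaks would have to be absorbed by controller cache in both cases, which is one argument for keeping the write-heavy SQL volumes on the RAID 10 side.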

  • How to limit SMTP delivery to hourly batches

    - by Jeremy W
    Moved over from Stack Overflow; sorry if you saw it there first. In an effort to keep us from being labeled spammers by major ISPs (in addition to SPF records, privacy policies, CAN-SPAM compliance and the like), I want to limit the amount of mail we send out per hour. Is this possible in the Windows Server 2003 SMTP server? I was looking at the outbound connection properties in the SMTP virtual server config screens, but it's just not clear whether tinkering with those settings will do what I want. In a nutshell, I'd love mail sent by this server to queue up and go out at, for example, 5,000 messages every 10 minutes or so. Mail is being sent via ASP.NET. Also, I wouldn't be sending a million a day; probably 30,000 tops, and only a few times a month. I'm just trying to avoid a tidal wave of 30k going out in one minute and setting off every network spam-monitoring alarm in North America. I know I could do it with a combination console app / scheduled job. My question is whether there is an easier way to accomplish this with the virtual SMTP server settings on Win2k3. Is this possible?
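
    Editor's sketch of the scheduled-job fallback (not from the original question; paths assume the default IIS SMTP mailroot, and the holding folder is hypothetical): have ASP.NET drop .eml files into a holding folder, then let a scheduled batch move a capped number into the SMTP Pickup directory every 10 minutes:

        @echo off
        REM move at most 5000 queued messages into the SMTP pickup folder
        setlocal enabledelayedexpansion
        set count=0
        for %%f in (C:\mailhold\*.eml) do (
            move "%%f" "C:\inetpub\mailroot\Pickup\" >nul
            set /a count+=1
            if !count! geq 5000 goto :done
        )
        :done

    The SMTP service sends anything placed in Pickup automatically, so the batch job's schedule becomes the effective send rate (note the built-in outbound connection limits throttle concurrency, not volume per hour).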

  • Syncing Multiple Google Calendars with Outlook and Android

    - by Fred Thomas
    Perhaps this is a multipart question, but I deal with a lot of calendars in my life and want to know if there is some way to sync them all together while maintaining appropriate privacy. I have a family calendar that my ex and I maintain for kid events, a personal calendar for my own life, and an Outlook work calendar for work. Ideally I'd look at all of them on my Android phone. Is it possible to sync them all together? Is it possible to have one calendar to rule them all on my phone, but have the other calendars show events from elsewhere only as blocked-out time, without the details? (I don't want my date with Miss Hottie to appear that way on the family calendar, and I probably don't want my visit to the proctologist to appear on the corporate Exchange server.) Are there tools available to do this? Bonus question: can I do the same with my to-do lists? Double bonus question: how can I solve world hunger and help us all live together in peace? :-)
