Search Results

Search found 2252 results on 91 pages for 'facing'.

Page 64/91

  • Finding useful crash information in Windows 8 Consumer Preview

    - by Lukas Knuth
    I'm currently diving into C# and wanted to play around with the new Metro-styled applications introduced with Windows 8, so I upgraded my Windows 7 installation to the Windows 8 Consumer Preview. The problem I'm facing right now is that the system freezes after 3-5 minutes. It does not take any input from the keyboard or mouse, and it does not recover (at least not in less than 10 minutes). Since I have a background in Linux, I'd like to find some information about the cause of the freeze, but I have no idea where to search. I checked the system logs (under "System Control" - "Management"), but they only record that the system was shut down unexpectedly (due to the fact that I held down the power button to reboot the PC). There is no useful crash information in there. I don't want to spend hours randomly reinstalling drivers and doing things that "might help". Isn't there any place I can find some useful information about the freeze? Before you ask: I installed Windows 8 as an upgrade over my old Windows 7 installation (which worked fine, by the way). My hardware meets the minimum requirements (specs can be found here; it's the MacMini 3,1 model with the 2 GHz processor). I have updated the graphics card drivers to the newest Windows 8 drivers from nVidia.
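    One place that usually survives a hard hang is the System event log; a full freeze that forces a power-button reset typically leaves behind Event ID 41 (Kernel-Power) plus whatever errors preceded it. A minimal PowerShell sketch to pull the recent critical and error entries (log name and level values as on a stock Windows 8 install):

        # List the 20 most recent Critical (1) and Error (2) events from the System log
        Get-WinEvent -FilterHashtable @{ LogName = 'System'; Level = 1,2 } -MaxEvents 20 |
            Format-Table TimeCreated, Id, ProviderName, Message -AutoSize

    For a freeze rather than a bugcheck there may be nothing logged at all; in that case, enabling a kernel memory dump (System Properties, "Startup and Recovery") is the usual next step.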

    Read the article

  • grep command is not searching for the complete pattern

    - by Sumit Vedi
    I am facing a problem while using the grep command in a shell script. I have one file (PCF_STARHUB_20130625_1) which contains the records below:

        SH_5.55916.00.00.100029_20130601_0001_NUC.csv.gz|438|3556691115
        SH_5.55916.00.00.100029_20130601_0001_Summary.csv.gz|275|3919504621
        SH_5.55916.00.00.100029_20130601_0001_UI.csv.gz|226|593316831
        SH_5.55916.00.00.100029_20130601_0001_US.csv.gz|349|1700116234
        SH_5.55916.00.00.100038_20130601_0001_NUC.csv.gz|368|3553014997
        SH_5.55916.00.00.100038_20130601_0001_Summary.csv.gz|276|2625719449
        SH_5.55916.00.00.100038_20130601_0001_UI.csv.gz|226|3825232121
        SH_5.55916.00.00.100038_20130601_0001_US.csv.gz|199|2099616349
        SH_5.75470.00.00.100015_20130601_0001_NUC.csv.gz|425|1627227450

    I have a pattern stored in a variable (INPUT_FILE_T), and I want to search for it in the file (PCF_STARHUB_20130625_1). For that I have used the command below:

        INPUT_FILE_T="SH?*???????????????US.*"
        grep ${INPUT_FILE_T} PCF_STARHUB_20130625_1

    The output of the above command comes out as:

        PCF_STARHUB_20130625_1:SH_5.55916.00.00.100029_20130601_0001_US.csv.gz|349|1700116234

    I have two problems with this output. First, only one entry is shown (it should contain two entries), and second, the output contains the "PCF_STARHUB_20130625_1:" prefix, which should not be there. The output should look like this:

        SH_5.55916.00.00.100029_20130601_0001_US.csv.gz|349|1700116234
        SH_5.55916.00.00.100038_20130601_0001_US.csv.gz|199|2099616349

    If there is any technique other than grep, please let me know. Please help me with this issue.
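    A plausible diagnosis (a sketch, not verified against the original box): INPUT_FILE_T holds a shell glob, and because ${INPUT_FILE_T} is expanded unquoted, the shell glob-expands it against filenames in the current directory before grep ever runs, so grep ends up searching for one expanded filename across several files - hence the single hit and the filename prefix. Quoting the variable and using a real regular expression avoids both problems:

        # Match the *_US.* records; quotes stop the shell from glob-expanding the pattern
        INPUT_FILE_T='SH_.*_US\.'
        grep -h "${INPUT_FILE_T}" PCF_STARHUB_20130625_1

    The -h flag suppresses the filename prefix whenever grep is handed more than one file.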

    Read the article

  • Log Shipping breaking daily on SQL Server 2005

    - by IT2
    I am facing a somewhat serious problem with log shipping on SQL Server 2005, and I am having trouble correcting it, so I will ask SF's experts for help. I have a Windows 2003 Server (PROD) that ships transaction log backups to two other servers:

    - STAND1: Windows 2003 Server with SQL Server 2005.
    - STAND2: Windows 2008 R2 Server with SQL Server 2005.

    The problem is that log shipping to STAND2 breaks for ~90 minutes at certain times of day and then recovers without intervention. The breaks occur at times when the backup file is larger (after reindexing, etc.). I can see the message below logged by the COPY job:

        *** Error: The specified network name is no longer available

    The copy agent used to break dozens of times a day, but only to the STAND2 server; after the changes below it "only" breaks about twice a day:

    - The frequency of the backup job was changed from 5 minutes to 10 minutes.
    - Instead of backing up the 4 databases to the same folder, the log backups are now saved in separate folders for each database.
    - The backup job no longer runs 24 hours a day, only for the 14 hours a day when people are working on the database.
    - I configured the SQL Server instances on the three servers to limit their memory, leaving more memory to the OS.

    Now I don't know what to do. Any help will be much appreciated! Thanks!
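    To gather more detail than the job history GUI shows, the failed copy-job steps can be pulled out of msdb on STAND2. A sketch (the LSCopy% prefix is SQL Server's default naming for log shipping copy jobs - adjust if yours were renamed):

        -- Recent failed steps of the log shipping copy job (run on STAND2)
        SELECT j.name AS job_name, h.step_name, h.run_date, h.run_time, h.message
        FROM msdb.dbo.sysjobhistory AS h
        JOIN msdb.dbo.sysjobs AS j ON j.job_id = h.job_id
        WHERE j.name LIKE 'LSCopy%'
          AND h.run_status = 0        -- 0 = failed
        ORDER BY h.instance_id DESC;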

    Read the article

  • How to make Virtualbox, OpenVPN, and Win2008 Web R2 like one another?

    - by Aquitaine
    Back with web developer guy wearing net admin hat. Hopefully this is an easy one. We have two servers on a public network at a hosted facility. Server A is our public-facing web server and server B is our database server. Both are running Windows 2008 Server R2 Web Edition. We want Server B isolated from everything except Server A, such that anyone who has to connect to server B goes through the VPN on Server A. It's not perfect since we have no access to do this on the router side, but it's what we've got. We've set up VirtualBox and OpenVPN Access Server on Server A. It has one network interface set to 'NAT' mode, such that OpenVPN gets its IP at 10.0.2.x, and to connect to the OpenVPN interface, I go to the local IP for the Virtualbox network adapter, 192.168.56.x, which works as I configured the appropriate ports using VBoxManage. My question is, do I need to be using Bridged Networking and give the VPN server its own IP, or is there some way to tell the server (either Windows or the Virtualbox OpenVPN) that 'any public connection on the real external IP on port X should be directed to this internal LAN address of 192.168.1.x on port Y'? OpenVPN itself doesn't seem to be aware of the server's real external IP unless we put it in Bridged networking mode; is that necessary or advisable? We're without RRAS since this is Web edition, but I feel like what we're going for is pretty simple. Thanks! Aq
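    For what it's worth, NAT mode can already do the "public port X to guest port Y" part by itself: VirtualBox port-forwarding rules bind on the host (on all addresses, including the real external IP, when the host-IP field is left empty) and forward to the guest's 10.0.2.x address, so neither bridging nor telling OpenVPN its external IP is strictly required. A sketch, with the VM name, guest IP, and port as illustrative assumptions:

        REM Forward UDP 1194 on every host address to the OpenVPN guest
        REM (run with the VM powered off)
        VBoxManage modifyvm "openvpn-as" --natpf1 "vpn,udp,,1194,10.0.2.15,1194"

    OpenVPN Access Server also listens on TCP ports for its web UI, so the same pattern would be repeated for each port it needs.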

    Read the article

  • How do I load balance between two Linux machines?

    - by William Hilsum
    Inspired by the Stack Overflow network, I am now obsessed with HAProxy and am trying to use it myself. At the moment, each HAProxy box has two network cards (well, two configured; I can have a maximum of 4 and wasn't sure if they needed their own card for management between the boxes). On both machines, the backend card (eth1) has a private IP that goes to a switch connected to the webservers, and the front-facing card (eth0) has a public internet IP that is routed straight through. In addition, I have created an additional virtual IP for eth0, called eth0:0, which has a third public IP address. I more or less understand how to use HAProxy to load balance between the multiple web servers behind it, but I am failing to load balance between the two HAProxy boxes themselves - they appear to fight for the virtual IP, and this does not seem to be a smart solution. Now, by using the virtual shared IP address, the setup appears to work and does seem to give me maximum uptime, but is this the correct way to do it, or is there a smarter way? I have been looking at other Linux packages such as keepalived, but I have only been using Linux (server) for a week and am at the limits of my understanding. Has anyone done this before, and can you advise anything for maximum uptime?
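    keepalived is indeed the usual companion to HAProxy here: it runs VRRP between the two boxes, so the shared IP is claimed by exactly one machine at a time and moves automatically on failure, rather than being fought over. A minimal sketch of /etc/keepalived/keepalived.conf (the interface name, router ID, password, and the 203.0.113.10 virtual IP are placeholder assumptions):

        vrrp_instance VI_1 {
            state MASTER                 # BACKUP on the second box
            interface eth0
            virtual_router_id 51
            priority 101                 # lower value (e.g. 100) on the second box
            advert_int 1
            authentication {
                auth_type PASS
                auth_pass s3cret
            }
            virtual_ipaddress {
                203.0.113.10             # the shared public IP
            }
        }

    The box with the higher priority holds the IP; when it dies, the other takes over within a few advert intervals.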

    Read the article

  • DNS Replication on Server 2008 R2

    - by Aaron
    Hi there, I have been trying out public-facing-only DNS servers with Server 2008 R2 Web - I want to set up at least two in a master/slave replication. Using Microsoft DNS, I am able to add the domains into the primary zone on the master DNS server (ns1), add the records OK, and have them visible publicly. On ns2 I can then add the same domain, but as a secondary zone, and get it to replicate / zone-transfer fine. Is there a way inside Windows to have the slave(s) automatically synchronise all changes from the master? For example, it's OK if I have to manually add the domains onto each of the NSes, but if I add a new zone on the master I have to add it on the slave before it replicates. I installed Simple DNS, and it has a 'Super Master/Slave' feature which takes care of exactly this: if you add a new domain into the primary zone, it is automatically created and kept in sync on ns2. But I would have to buy a licence. All this is non-Active-Directory, if that helps. Can anyone advise whether it is possible to do this using Microsoft DNS? Many thanks in advance!
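    There is no built-in "supermaster" behaviour in Microsoft DNS for non-AD zones, but the zone list can be kept in sync with a small script around dnscmd. The two building blocks are shown below; the server names and master IP are assumptions:

        REM List the primary zones on the master
        dnscmd ns1 /enumzones /primary
        REM Create a matching secondary on the slave, pulling from the master's IP
        dnscmd ns2 /zoneadd example.com /secondary 192.0.2.10

    Wrapping the first command in a scheduled task that parses its output (it prints a header and footer that need filtering) and issues /zoneadd for anything missing would approximate Simple DNS's Super Master/Slave.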

    Read the article

  • How to change Axis server-config.wsdd so that we don't expect a SOAPAction

    - by GKForcare
    The problem I'm facing is that the client of my service will never send me a SOAPAction header. How can I tell Axis to still map the incoming call to my service implementation anyway? I did bump into tricks like adding a Handler like this:

        <handler name="ReportMapper" type="java:com.mycompany.project.ReportMapper"/>
        <transport name="http">
          <requestFlow>
            <handler type="ReportMapper"/>
            <handler type="URLMapper"/>
            <handler type="java:org.apache.axis.handlers.http.HTTPAuthHandler"/>
          </requestFlow>
          <parameter name="qs:list" value="org.apache.axis.transport.http.QSListHandler"/>
          <parameter name="qs:wsdl" value="org.apache.axis.transport.http.QSWSDLHandler"/>
          <parameter name="qs.list" value="org.apache.axis.transport.http.QSListHandler"/>
          <parameter name="qs.method" value="org.apache.axis.transport.http.QSMethodHandler"/>
          <parameter name="qs:method" value="org.apache.axis.transport.http.QSMethodHandler"/>
          <parameter name="qs.wsdl" value="org.apache.axis.transport.http.QSWSDLHandler"/>
        </transport>

    Read the article

  • I/O Error on LG GSA-H12N DVD drive on Windows 7

    - by Ashwin
    I am facing an I/O error when I try to burn DVD data discs on my LG GSA-H12N DVD drive under Windows 7. Note that I was able to do this same operation on the same hardware just a day earlier without any problems, but under Windows XP. The only change (AFAIK) has been the installation of Windows 7 to replace Windows XP on this PC. Here is the error I get when I try to burn a DVD data disc using CDBurnerXP 4.2.7.1801:

        Burning error occured
        An error occured while burning the disc. Most likely the disc is not usable.
        Usually these errors happen if the inserted media is not compatible to the drive or of poor quality. (devNTSPTI_IO_Error)
        Could not write to Disc (LBA: 52864 Length: 32). SCSI Pass-through Interface I/O Error. - 0xFF045D

    Note that there can be no problem with the discs, since I have been using the same discs (from the same box) just before the Windows 7 installation with no problems. The only change has been Windows 7. I tried InfraRecorder v0.5 and ImgBurn v2.5 and got similar I/O errors. Note that Windows 7 lists the LG GSA-H21N drive as being compatible (see this link). I also checked the LG drivers website and, using the firmware update from there, updated the drive firmware from version UL01 to UL02. But even this has not helped. The drive reads DVDs without any problem, but continues to produce coasters. Could someone help me figure out what the problem is? Thanks :)

    Read the article

  • Firefox will not remember local site cookie

    - by Campo
    This is a weird one. We have a production server (Server 2008) and two staging servers (Server 2008 and Server 2003). I have sites on all of these, and they all use cookies. On the production server, when browsing to our site www.supernovainteractive.com, there is a cookie that detects when you visited the site, and it will not replay the logo animation (top left-hand side) when you click through to another page. This works in all browsers on the production server. I'm not sure what's going on, but for some reason cookies are not working on one site on the 2008 staging server only, and only when browsing with Firefox (3.6.3); they work fine in all other browsers (IE, Chrome, Safari, Opera). In addition, the 2003 staging server works fine. You can test on the Supernova Interactive site by watching the logo in the top left corner. It uses a cookie to detect if you've already seen the animation. Once you've seen it once, it doesn't animate again until tomorrow. Currently, it's animating every time. I have opened an outside-facing port so others can see the issue: http://exchange.supernova.com:10009 Any ideas on this one? Firewalls are off on the server. Notice you do not get a cookie from exchange.supernova.com.

    Read the article

  • The great Vanishing Act of INetMgr.exe on my Windows 7 x64 system

    - by marc_s
    I'm facing an odd issue with the IIS Manager on Windows 7 (x64). At home, I have Win7 Professional, and when I check my IIS Manager icon in the start menu, I see it links to %windir%\system32\inetsrv\InetMgr.exe. When I launch this from the command line, it works like a charm. At work, however, I have Windows 7 Enterprise (x64), and when I check the link in the start menu, the entry is exactly the same. If I click on it, it works like a charm. But if I'd like to launch it from the command line (cmd.exe or TakeCommand), the file just isn't there - a DIR %windir%\system32\inetsrv\*.exe shows a number of files, including an "inetmgr6.exe", but no "inetmgr.exe" - and of course I can't launch it either :-( Strangely enough, when I look at the directory %windir%\system32\inetsrv in Windows Explorer or Windows PowerShell, I SEE the INetMgr.exe file and I can launch it - no problem. What the **** is going on here? How can I find INetMgr.exe from my classic command line and launch it from there? UPDATE: ok, some updates. On my work laptop, the INetMgr.exe file appears to really be located in a directory called c:\windows\syswow64\inetsrv (I'm recalling from memory, so don't quote me on the directory name - something like that). I can see this if I search for it in e.g. PowerShell or the Windows 7 Explorer. However, from a "classic" command line like cmd.exe, it appears to be in c:\windows\system32\inetsrv... hmmm... trouble is, even though I now know where the file really is, I cannot access that directory from my classic command line - not even if I'm running cmd.exe as admin with elevated privileges... so I know where the file is, but that still doesn't solve my problem :-(
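    A hedged guess at the mechanism: on x64 Windows, a 32-bit process that opens %windir%\System32 is silently redirected to %windir%\SysWOW64 by the WOW64 file system redirector, which would produce exactly this split view if the "classic" console is a 32-bit binary while Explorer and PowerShell are 64-bit. A 32-bit process can reach the real System32 through the virtual Sysnative alias:

        REM From a 32-bit console on 64-bit Windows, bypass WOW64 redirection
        dir %windir%\Sysnative\inetsrv\InetMgr.exe
        %windir%\Sysnative\inetsrv\InetMgr.exe

    Sysnative only exists for 32-bit processes; a 64-bit cmd.exe will report it as not found, which is itself a quick way to tell which kind of console is running.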

    Read the article

  • Adventures in Drupal multisite config with mod_rewrite and clean urls

    - by moexu
    The university where I work is planning to offer Drupal hosting to staff/faculty who want a Drupal site. We've set up Drupal multisite with clean URLs and it's mostly working, except for some weird redirects. If you have two sites where one is a substring of the other, then you'll randomly be redirected to the other site. I tracked the problem to how mod_rewrite does path matching, so with a config file like this:

        RewriteCond %{REQUEST_URI} ^/drupal
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /drupal/index.php?q=$1 [last,qsappend]

        RewriteCond %{REQUEST_URI} ^/drupaltest
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /drupaltest/index.php?q=$1 [last,qsappend]

    /drupaltest will match the /drupal line, and all of the links on the /drupaltest page will be rewritten to point to /drupal. If you put the end-of-string character ($) at the end of each rewrite condition, then it will always match the correct site and the links will always be rewritten correctly. That breaks down as soon as a user logs in, though, because the query string is appended to the URL, so the bare base URL no longer matches. You can also fix the problem by ordering the sites in the config file so that the smallest substring always comes last. I suggested storing all of the sites in a table and then querying, sorting, and rewriting the config file every time a Drupal site is requested, so that we could guarantee the order. The system administrator thought that was kludgy and didn't address the root problem. Disabling clean URLs should also fix the problem, but the users really want them, so I'd prefer to keep them if possible. I think we could also fix it by using an .htaccess file in each site to handle the clean URL rewriting, but that also seems suboptimal, since it will generate a higher load on the server, and the server is intended to host the majority of the university's external-facing web content. Is there some magic I can do with mod_rewrite to get it to work? Would another solution be better? Am I doing something the wrong way to begin with?
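    For the record, there is a mod_rewrite idiom for exactly this substring problem: require the prefix to be followed by a slash or the end of the path. A sketch (one site shown; untested against this particular multisite layout):

        RewriteCond %{REQUEST_URI} ^/drupal(/|$)
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /drupal/index.php?q=$1 [last,qsappend]

    Unlike anchoring with a bare $, this still matches /drupal/node/1 and login URLs, because %{REQUEST_URI} contains only the path (never the query string), and the order of the sites in the config file stops mattering.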

    Read the article

  • E-mail duplication problem

    - by Gavin Osborn
    I have taken out a hosting agreement with a well-respected hosting provider for a couple of internet-facing servers. We have deployed several applications to these servers which send various e-mails back to us for reporting purposes.

    Context: Each server runs Windows Server 2003 R2 with the IIS 6.0 SMTP service installed. Each application is configured to use the local instance of IIS to send e-mails. The external IP address of each server is mapped to a particular domain, e.g. server1.mydomain.com and server2.mydomain.com. These e-mails are sent from a company domain name, not the domain name of the hosted servers (e.g. [email protected]).

    Symptoms: A small number (<1%) of e-mails sent from these applications appear to be duplicated. These are exact duplicates in terms of both content and message headers.

    The Fix: I contacted my hosting provider and they told me this was a common problem and instructed me to:

    1. Change the HELO response of your mail server service to an FQDN (server1.mydomain.com and server2.mydomain.com).
    2. Create a DNS A record that resolves the FQDN of your mail server to the primary IP address of your sending mail server.
    3. Create a PTR record that resolves your primary IP address back to your mail server's FQDN.
    4. In the sending domain's (mycompanydomain.com) DNS zone file, add the appropriate SPF record for your hosted servers, e.g.: v=spf1 a mx include:mydomain -all

    The Problem Continues: I made all of the changes as prescribed above. I was a little hesitant, because these steps seemed aimed more at stopping messages from being blocked than at stopping them from being duplicated - but I am certainly no expert in these matters. It has been 5 days since I applied this fix and the problem still persists. I am certain these problems are not a bug in the software, because they occur in 4 different applications installed on 2 different servers, all of which exhibit this strange behaviour. The behaviour has also not been seen in our UAT environment. Were my hosts correct to suggest this fix? If not, does anyone know what could be the cause of this problem? Many thanks

    Read the article

  • TFTP Timing Out on Ubuntu VM

    - by valsidalv
    I'm running a Windows 7 PC with VMware installed, which hosts my Ubuntu (10.04 Lucid Lynx) VM. I recently installed a DHCP server and TFTP (xinetd tftpd) using these instructions. I've mapped a network drive so that Windows has access to all the files in my VM through a 192.x.x.x IP address. I'm trying to flash some custom firmware onto a router. The router has its own built-in TFTP utility that will download the image. It successfully manages to do everything, but it is slow because it writes the image to flash memory. There is another method that is much quicker because it writes to RAM directly, but it must use the TFTP server in Ubuntu. The issue I'm facing is that the Ubuntu TFTP transfer seems to be timing out: the transfer starts but never gets past ~60%. Here's my /etc/xinetd.d/tftp file (similar to a known working config):

        service tftp
        {
            protocol    = udp
            port        = 69
            socket_type = dgram
            wait        = yes
            user        = nobody
            server      = /usr/sbin/in.tftpd
            server_args = -s /home/user/tftp/
            disable     = no
            cps         = 300 2
            per_source  = 60
        }

    I've done some searching but can't find any parameters for this file to control the timeout or the number of retries. The last two arguments (cps, per_source) are completely alien to me (can anyone explain them?). I have a few possible solutions, but the easiest would be to get this TFTP server working. Can anyone help? Either with a timeout configuration, or maybe even a recommendation for a different TFTP server? Thanks!
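    For what it's worth, the two mystery settings are xinetd rate limits rather than tftpd options: cps = 300 2 caps incoming connections at 300 per second and disables the service for 2 seconds when that rate is exceeded, and per_source = 60 caps simultaneous connections from a single source IP. The timeout/retry knobs live on in.tftpd itself; assuming the Ubuntu package is tftpd-hpa (the usual /usr/sbin/in.tftpd), a sketch of a more forgiving server_args line:

        # /etc/xinetd.d/tftp - tftpd-hpa flags (assumed):
        #   -v           log each request and error to syslog, to see where the transfer stalls
        #   -T 5000000   retransmission timeout in microseconds (here 5 s, default 1 s)
        server_args = -s /home/user/tftp/ -v -T 5000000

    Restart xinetd after the change and watch /var/log/syslog during a transfer.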

    Read the article

  • Restrict SSH user to connection from one machine

    - by Jonathan
    During set-up of a home server (running Kubuntu 10.04), I created an admin user for performing administrative tasks that may require an unmounted home. This user has a home directory on the root partition of the box. The machine has an internet-facing SSH server, and I have restricted the set of users that can connect via SSH, but I would like to restrict it further by making admin accessible only from my laptop (or perhaps only from the local 192.168.1.0/24 range). I currently have only an AllowGroups ssh-users line, with myself and admin as members of the ssh-users group. What I want is something that works the way you might expect this setup to work (but it doesn't):

        $ groups jonathan
        ... ssh-users
        $ groups admin
        ... ssh-restricted-users
        $ cat /etc/ssh/sshd_config
        ...
        AllowGroups ssh-users [email protected].*
        ...

    Is there a way to do this? I have also tried the following, but it did not work (admin could still log in remotely):

        AllowUsers [email protected].* *
        AllowGroups ssh-users

    with admin a member of ssh-users. I would also be fine with only allowing admin to log in with a key and disallowing password logins, but I could find no general setting for sshd; there is a setting that requires root logins to use a key, but nothing for ordinary users.
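    For reference, AllowUsers does accept user@host patterns, but a bare * in the same list re-admits everyone, which would explain the second attempt failing; and key-only login for a single user can be done with a Match block in the OpenSSH shipped with Kubuntu 10.04. A sketch, replacing the AllowGroups line (when both AllowUsers and AllowGroups are present, a connection must satisfy both):

        # /etc/ssh/sshd_config (sketch)
        # jonathan from anywhere; admin only from the local subnet
        AllowUsers jonathan [email protected].*

        # additionally: admin may never use a password, keys only
        Match User admin
            PasswordAuthentication no

    Both directives are documented in sshd_config(5); restart sshd after editing.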

    Read the article

  • How to change Axis server-config.wsdd so that we don't expect a SOAPAction

    - by GKForcare
    The problem I'm facing is that the client of my service will never send me a SOAPAction header. How can I tell Axis to still map the incoming call to my service implementation anyway? I did bump into tricks like adding a Handler like this:

        <handler name="ReportMapper" type="java:com.mycompany.project.ReportMapper"/>
        <transport name="http">
          <requestFlow>
            <handler type="ReportMapper"/>
            <handler type="URLMapper"/>
            <handler type="java:org.apache.axis.handlers.http.HTTPAuthHandler"/>
          </requestFlow>
          <parameter name="qs:list" value="org.apache.axis.transport.http.QSListHandler"/>
          <parameter name="qs:wsdl" value="org.apache.axis.transport.http.QSWSDLHandler"/>
          <parameter name="qs.list" value="org.apache.axis.transport.http.QSListHandler"/>
          <parameter name="qs.method" value="org.apache.axis.transport.http.QSMethodHandler"/>
          <parameter name="qs:method" value="org.apache.axis.transport.http.QSMethodHandler"/>
          <parameter name="qs.wsdl" value="org.apache.axis.transport.http.QSWSDLHandler"/>
        </transport>

    but that did not help. The mapper is found during the creation of the WSDL, but when the service is called, the handler's invoke is not used. I should note that when I simulate the SOAP call using curl and I do add the SOAPAction header, the invoke is called. Any help would be most appreciated.

    Read the article

  • Can a folder on a NAS be made available as a physical drive in VMWare?

    - by asbjornu
    We are currently in the process of moving from a single web server to two load-balanced web servers, and are facing some challenges we don't quite know how to fix. One of these is that the current single server hosts applications that write stuff to disk. The applications running on the server expect that when something is written to disk, it will in fact exist later, so it's important that this premise is fulfilled with the dual-server architecture as well. The dual-server setup is a couple of VMware instances with Windows Server 2008 R2 as the guest operating system. Out of the box, these instances do not share any kind of file system, so just moving the applications over would break them, since one instance would write something to the file system that doesn't exist on the other. Thus we need to share a file system between the two virtual servers. Our host has proposed creating a network share on a SAN and mapping this share individually on each virtual machine. This doesn't work too well due to NTFS permissions, etc., because the share needs to be accessed by several independent web applications that won't even be in the same application pool. The only solution that kind of works is to hard-code an "identity" for each web application into its web.config file, but this means passwords in clear text, which doesn't sit well with me. Since the servers are virtual, I'm thinking: wouldn't it be possible to make a NAS area available as a physical disk in the guest operating system somehow? Since VMware has full control of the virtual hardware, you'd think it would be able to "fake" a local hard drive in the virtual machine that in reality is a folder on a NAS, but so far I haven't found anything that states how and if this is possible. So I have to ask the wonderful Server Fault community: can a folder on a NAS be made available as a physical drive (typically D:) in both of the virtual machines?

    Read the article

  • heavy load on mysql

    - by payal
    I have a dedicated server with a very good configuration (16 GB RAM, etc.), but I am facing heavy load from MySQL, even though only one database and 5-10 pages are running. The WHM load is always less than one, but when I click on WHM load it shows 20% CPU usage by MySQL, and after some time it starts saying "Can't connect to MySQL" / "MySQL server has gone away":

        1691 (Trace) (Kill) mysql 0 19.2 2.7 /usr/sbin/mysqld --basedir=/ --datadir=/var/lib/mysql --user=mysql --log-error=/var/lib/mysql/server.xyz.com.err --pid-file=/var/lib/mysql/server.xyz.com.pid

    I have tested static pages, and they come up blazing fast, but all dynamic pages that use MySQL are terribly slow - they take ages to open. I have checked the error log file and it says nothing. I have also increased max connections to 1000, but the same problem is still there. If I disconnect that one database, just by changing the name of the database, within half an hour the load of the server and MySQL goes down to negligible. I have tested everything. If there are types of queries that can cause heavy load on a server, please list them - though for 5-10 pages it should never cause this much load. I have seen servers with 500 websites that work just fine.

    Read the article

  • heavy load on mysql

    - by payal
    I have a dedicated server with a very good configuration (16 GB RAM, etc.), but I am facing heavy load from MySQL. I am running a music website; only one database and 5-10 pages are running, and when I click on WHM's "show processlist" it shows only 2-3 processes. The WHM load is always less than one, but when I click on WHM load it shows 20% CPU usage by MySQL, and after some time it starts saying "Can't connect to MySQL" / "MySQL server has gone away":

        1691 (Trace) (Kill) mysql 0 19.2 2.7 /usr/sbin/mysqld --basedir=/ --datadir=/var/lib/mysql --user=mysql --log-error=/var/lib/mysql/server.xyz.com.err --pid-file=/var/lib/mysql/server.xyz.com.pid

    I have tested static pages, and they come up blazing fast, but all dynamic pages that use MySQL are terribly slow - they take ages to open. My my.cnf file is:

        [mysqld]
        key_buffer = 1536M
        max_allowed_packet = 1M
        max_connections = 250
        max_user_connections = 15
        wait_timeout = 40
        connect_timeout = 10
        table_cache = 512
        sort_buffer_size = 2M
        read_buffer_size = 2M
        read_rnd_buffer_size = 8M
        myisam_sort_buffer_size = 64M
        thread_cache_size = 8
        query_cache_size = 32M
        server-id = 14
        old-passwords = 1

        [mysqldump]
        quick
        max_allowed_packet = 16M

        [mysql]
        no-auto-rehash

        [myisamchk]
        key_buffer = 256M
        sort_buffer_size = 256M
        read_buffer = 2M
        write_buffer = 2M

        [mysqlhotcopy]
        interactive-timeout

    I have checked the error log file and it says nothing. I have also increased max connections to 1000, but the same problem is still there. If I disconnect that one database, just by changing the name of the database, within half an hour the load of the server and MySQL goes down to negligible. I have tested everything. If there are types of queries that can cause heavy load on a server, please list them - though for 5-10 pages it should never cause this much load. I have seen servers with 500 websites that work just fine.
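    A hedged first diagnostic for "one database melts the server": turn on the slow query log and see what piles up. The option names below are for the MySQL 5.0/5.1 series typical of cPanel/WHM servers of this vintage; the log path is an assumption:

        [mysqld]
        # log statements slower than 2 seconds, and those doing full table scans
        log-slow-queries = /var/lib/mysql/slow.log
        long_query_time  = 2
        log-queries-not-using-indexes

    After some traffic, mysqldumpslow /var/lib/mysql/slow.log summarises the worst offenders. With 16 GB of RAM, the buffer sizes above are unlikely to be the bottleneck, so an unindexed query is the more plausible culprit.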

    Read the article

  • CD/DVD drive not detecting locally burned CDs/DVDs, but works fine with genuine discs

    - by Rahul
    I have been using a Dell Inspiron 1420 (32-bit, Windows Vista) for 2.5 years. I'm facing a strange problem with my CD/DVD drive: I cannot run/play CDs/DVDs that my friends burn for me, but when I insert a genuine CD, I'm able to play/run it. And when I try to install the Vista package that came with my notebook, the CD/DVD loads fine. If I insert a CD/DVD that I got from a friend, the disc doesn't load and the system hangs. But all these CDs/DVDs work on other systems - I've tested them on many of my friends' PCs. So, at the moment I am able to run only genuine CDs and a few genuine DVDs. My experience/experiments:

    - I tried to install Windows Vista using a genuine DVD - it worked.
    - I tried to install Ubuntu from a disc shipped by Canonical Ltd. - it worked.
    - I tried to install openSUSE from an .iso file burned to a DVD on my friend's PC - it didn't work for me (but works perfectly fine on my friends' PCs; tested on 4 other PCs).
    - I tried to play a DVD containing movies, burned on my friend's PC - it didn't work for me (but works perfectly fine on my friends' PCs).

    Any help/suggestions would be appreciated. Thanks.

    Read the article

  • IIS 7.5 truncating POST body containing JSON data with ASP.NET MVC 3

    - by Guneet Sahai
    I'm facing a problem which I hope is a configuration issue with IIS, but it is causing a lot of trouble right now. Basically, I have a controller that accepts a JSON payload and does some processing. It generally works fine, but every now and then, when the system is under some load, I get an error. After some painful debugging, we figured out that the incoming JSON gets truncated, which causes the deserializer to fail. To narrow down the problem, we wrote a simple controller that accepts a JSON payload and tries to deserialize it; in case it fails, it just logs the error. This works fine, but when I hit it with a load-testing tool (JMeter) it throws the same error (truncation) for a few requests. The number of failures increases as I increase the number of parallel connections; it starts showing up at 150 concurrent requests. We are running IIS 7 on Windows Server 2008 with ASP.NET MVC 3 and a more or less default IIS configuration. More information is available in my question here: http://stackoverflow.com/questions/12662282/content-length-of-http-request-body-size
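    One IIS knob that has been associated with this symptom (an assumption worth testing, not a confirmed diagnosis): uploadReadAheadSize in system.webServer/serverRuntime controls how much of the request entity body IIS reads ahead for the handler, and the default is only 48 KB. It can be raised per site with appcmd; the site name and the 1 MB value are illustrative:

        REM Raise the entity-body read-ahead buffer to 1 MB for one site
        %windir%\system32\inetsrv\appcmd.exe set config "Default Web Site" ^
            -section:system.webServer/serverRuntime /uploadReadAheadSize:1048576 ^
            /commit:apphost

    If the failures stop scaling with concurrency after the change, that points at buffering; if not, failed request tracing (FREB) on the failing requests is the next place to look.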

    Read the article

  • Lync 2010 meeting dial-in DTMF issue

    - by user140116
    We are facing an issue with dialing in to a meeting in Lync 2010. We redirect an Asterisk number to Lync, which connects successfully via Lync's dial-in plan. After calling the given number from an external network, we hear Lync answering and prompting us to enter the PIN followed by the hash key. I should mention that all other calls from Asterisk to Lync and vice versa are routed successfully, and all DTMF we send to Asterisk from the phone (IVR, extension, PIN, etc.) is also routed fine. We then press the appropriate PIN followed by the hash key, and we get "Sorry, I can't find a meeting with that number". Some pros mentioned that it might be dtmfmode=RFC2833 or dtmfmode=auto in Asterisk (all checked and tried). Some pros mentioned that there is a general problem with Lync and DTMF (even with Cisco Call Manager). Other pros mentioned that the 'Enable refer support' checkbox in Lync's Voice Routing\Trunk Configuration has to be unchecked (also tested). The problem still remains, and there is no way to enter a meeting room by dial-in. Any idea would be appreciated!

    Read the article

  • CC.NET + SVN : Server certificate issue

    - by MSI
    I am trying to set up Continuous Integration in our office. Being a puny little developer, I am facing this supposedly infamous problem:

        Source control operation failed: svn: OPTIONS of 'https://trunkURL':
        Server certificate verification failed: issuer is not trusted

    So I tried the following solution: run the CC.NET service (server running as a Windows service) under a domain account (rather than the default LOCAL SYSTEM) and accept the certificate permanently from a command prompt under that user, by running svn log/list against the repo. It doesn't help :(. I am getting the following in my artifact/log files (and on the dashboard):

        ThoughtWorks.CruiseControl.Core.CruiseControlException: Source control operation failed:
        svn: OPTIONS of 'https://TrunkURL': Server certificate verification failed:
        issuer is not trusted (https://ServerAdd).
        Process command: E:\(svn.exe Path) log https://TrunkURL -r "{2010-11-08T02:12:20Z}:{2010-11-08T02:13:21Z}" --verbose --xml --no-auth-cache --non-interactive
           at ThoughtWorks.CruiseControl.Core.Sourcecontrol.ProcessSourceControl.Execute(ProcessInfo processInfo)
           at ThoughtWorks.CruiseControl.Core.Sourcecontrol.Svn.GetModifications(IIntegrationResult from, IIntegrationResult to)
           at ThoughtWorks.CruiseControl.Core.Sourcecontrol.QuietPeriod.GetModificationsWithLogging(ISourceControl sc, IIntegrationResult from, IIntegrationResult to)
           at ThoughtWorks.CruiseControl.Core.Sourcecontrol.QuietPeriod.GetModifications(ISourceControl sourceControl, IIntegrationResult lastBuild, IIntegrationResult thisBuild)
           at ThoughtWorks.CruiseControl.Core.IntegrationRunner.GetModifications(IIntegrationResult from, IIntegrationResult to)
           at ThoughtWorks.CruiseControl.Core.IntegrationRunner.Integrate(IntegrationRequest request)

    We are using VisualSVN Server and CC.NET for this adventure. Tips and suggestions will be highly appreciated. Thanks
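    Another route worth trying (the certificate path below is an assumption): export the VisualSVN root certificate to a PEM file and point Subversion's per-user runtime configuration at it, in the profile of the account the CC.NET service runs as - that removes the need for an interactively cached acceptance entirely:

        # %APPDATA%\Subversion\servers for the CC.NET service account
        [global]
        ssl-authority-files = C:\certs\visualsvn-root.pem

    Subversion also honours --config-dir on the command line, which can be handy for verifying that the service account sees the same configuration the build uses.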

    Read the article

  • Can a website company that builds 4-5 websites a year afford dedicated hosting?

    - by Petras
    We manage about 30 websites that use shared ASP.NET SQL Server web hosting. These are typical small/medium business websites, and they perform fine in this environment. Recently I was looking at VPS hosting in this thread: http://serverfault.com/questions/128329/how-do-you-host-multiple-public-facing-websites-on-a-vps After contacting a provider from one of the replies, I was told that VPS hosting is not recommended for 30 sites, even if they are small; the resource requirements might be too great even for a VPS, so I should turn to dedicated hosting. The lowest-cost dedicated hosting is $219 per month (see http://www.serverintellect.com/dedicated/pentiumdservers.aspx), but this is only for a single processor, which seems too light for a machine running both IIS and SQL. In our office all the developers work on quad cores, so I assume I'd really need the quad processor option. However, that starts at $599 monthly. Now, I won't be able to transfer all of our 30 sites to this machine; I'd only be able to transfer say 5 or 6. However, moving forward, I'd be able to host all future sites on this machine, which amounts to 4-5 per year. Let's look at the economics. Shared hosting typically costs $16.95 monthly (see http://www.crystaltech.com/dotnet.aspx). So here's the dilemma:

        First month's costs: $599
        First month's revenue: 6 x $16.95 = $101.70
        Loss in first month: $497.30

        First year's costs: $599 x 12 = $7188
        First year's revenue: 6 x $16.95 x 12 + 5 x $16.95 x 6 (averaged) = $1728.90
        Loss in first year: $5459.10

    Clearly it is going to take years for this server to pay for itself. It just doesn't seem economical! Am I missing something here, or is dedicated not the way to go with the number of sites we build?

    Read the article

  • How to make an EC2 AMI work on Xen on Debian

    - by user67103
    Hi. Here is the scenario: we are creating a cloud app. We have a Debian box on which we create a customized image and upload it to Amazon EC2. After uploading it to the cloud, we made some more customizations and are now trying to rebundle it, and we are facing some issues doing so. We would like to know if we could do something like this:

    a. Create an AMI image on Debian
    b. Load it onto a Xen hypervisor running on top of the Debian box
    c. Customize the image
    d. Save the customized image
    e. Upload it to EC2

    The issue is that I am unable to find a proper solution for how to install Xen on Debian, and whether an AMI customized on Xen will work on EC2. Any suggestions/help would be greatly appreciated. Thanks, Krishna

    Read the article

  • overriding default scheduler for blkio requests in cgroups

    - by Aamir Mushtaq
    I am trying to optimize a set of servers that have to reside on a single machine, i.e. I can have multiple application servers, a DB server, and of course a Samba server as well in the same instance. I have been looking into the various optimization options available to me. In my quest, I did my tuning of the network stack; coming to the CPU, memory, and blkio tweaks, I am using cgroups. The problem I am facing is that the CFQ scheduler implemented for the blkio subsystem is not optimal for the applications I am running; I was looking for a deadline scheduler instead, because that would serve my purpose well. My question is whether it is possible to change the scheduler for blkio to deadline in the kernel compilation itself, and whether that will be reflected in my usage of cgroup hierarchies. Since running the cgconf service mounts a new fs, I don't want it to revert to the CFQ scheduler. I also welcome any suggestions that would give me more control over my resources.
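    For reference, the elevator is selectable per block device at runtime or via a boot parameter, with no kernel recompile needed; a sketch:

        # show the available schedulers for sda (the active one is bracketed)
        cat /sys/block/sda/queue/scheduler
        # switch sda to deadline on the fly
        echo deadline > /sys/block/sda/queue/scheduler
        # or make it the global default from the kernel command line in GRUB:
        #   elevator=deadline

    One trade-off to note: the blkio controller's proportional-weight policy is implemented by CFQ, so under deadline the per-cgroup weights stop applying and only throttling-style limits remain; the cgconf remount does not itself change the elevator either way.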

    Read the article
