Search Results

Search found 52182 results on 2088 pages for 'something for the weekend'.

Page 303 of 2088

  • System failure - need diagnostic recommendation

    - by Ladislav Mrnka
    I have a big problem with my computer. The configuration is:
      Intel i7 + 6x2 GB OCZ DDR3
      Motherboard: Asus P6T Deluxe V2, HDD controller configured to AHCI
      Main drive: OCZ Vertex 2 (SSD) - contains the system and all installed programs
      Second drive: Samsung SpinPoint - contains user profiles, ProgramData, virtual machines and databases
      Third drive: Samsung SpinPoint - data drive + backups
      OS: Windows 7 Ultimate x64

    I have never had any problem with this computer until now. During the weekend my computer completely crashed without any apparent reason. Each time I tried to boot into Windows I got a BSOD with the message BAD_SYSTEM_CONFIG_INFO and an automatic restart (I didn't install any new software or hardware). After the restart the main OCZ drive was disconnected (not detected by the BIOS). When I turned the computer off and on, the drive was connected again. The same thing happened every single time I tried to repair the installation: it ended with some error, and after the restart the drive was disconnected. The only thing that worked was a format + fresh install.

    After installing almost everything, I wanted to install Visual Studio 2010 Ultimate (complete installation without SQL Server Express). During the installation of VS itself I always get a BSOD - it disappears too fast for me to read the description. After the restart the BIOS searches all disk drives for a really long time, and sometimes it changes the boot drive so the system is not able to start - Bootmgr not found. After reconfiguring the BIOS the system starts. There is no event describing the failure in Event Viewer.

    Installing VS 2010 is absolutely necessary for me, so I need help with diagnostics. I need to find where the problem is - I suspect the OCZ drive or the HDD controller on the motherboard, but I don't know how to confirm it. All components still have a valid warranty. Can you recommend an approach or tools to find the problem?

    Edit: I'm still looking for the source of the problem. New information: Windows is not able to complete a check disk (chkdsk) on the SSD system drive. After a restart it starts checking the drive, and in the phase where files are checked it fails with a BSOD - BAD_SYSTEM_CONFIG_INFO. After the next restart, if I skip the check disk, the system runs.
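
    Since the question explicitly asks for a diagnostic approach, here is a minimal, assumption-laden sketch of the built-in Windows checks one could run from an elevated Command Prompt before invoking the warranty (it assumes the SSD is C:; reading the drive's SMART error counters still needs a vendor tool such as OCZ's own utility):

      rem file-system and bad-sector check on the SSD (scheduled for the next boot)
      chkdsk C: /f /r
      rem verify protected system files
      sfc /scannow
      rem quick SMART pass/fail summary for every disk
      wmic diskdrive get model,status
      rem schedule the built-in memory test for the next reboot
      mdsched.exe

    If chkdsk itself reliably triggers the BAD_SYSTEM_CONFIG_INFO crash, as the edit describes, that points more at the drive or controller than at the Windows installation.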

    Read the article

  • I have been told to accept one error with Memtest86+

    - by DustByte
    I bought a new computer back in August with 4x4 GB of RAM and had problems with the RAM. They sent me four new sticks, which also generated errors, so I singled out four sticks (from the eight I now had) that didn't generate any errors. By coincidence I discovered a new RAM error last week (this time no BSOD) and contacted the company. According to them there have been issues with a bad batch from last summer, so they sent me two tested 8 GB sticks.

    I have been running Memtest86+ over the weekend. After 20 hours I got an error (see attached photo). The test has now been running for 37 hours, but so far this is the only error. I contacted the company where I bought the computer. They wrote back:

    "I wouldn't worry about that one fail. We have had similar situations here whereby it passes numerous times but then fails once. We think it's an issue with Memtest; after all, memory is faulty or it isn't, so you can't really have it pass a few times, fail the next time around and then pass again! Please trust me on this and continue with the memory we sent you, and if your problems continue we'll look at getting it replaced again."

    I gather from other forum posts that many people do not accept a single error. What could this single error signify: faulty RAM, a glitch in the Memtest program, or something else?

    Update: From the helpful comments below I conclude that an occasional (and rare) "random" error could occur and be acceptable, but repeated errors at the same address would indicate a malfunction. Memtest has now run for 45 hours and I still have only the one error. For everyone's information, I will keep running the test. In less than two days I am going away for a month and will most likely leave Memtest running. As I do not have a UPS, there is a risk that a power outage will ruin the experiment. The computer is a desktop, so I cannot bring it with me (which would curiously have exposed it to more cosmic rays, as I will be flying ;)).

    Read the article

  • Recovering from backup without original install media

    - by KGendron
    A machine from my old job had a complete hard drive failure. I have backups, but I'm running into severe problems restoring from them. The only install media was a secondary restore partition on the system's hard drive (I hate whoever came up with that idea more than I can possibly express with words). I spent several days trying to recover the disk - it is pretty well shot, and none of my best tricks could even get it to show up in the BIOS. The machine that broke is an HP with XP Media Center Edition on it (I don't know why either). The backups were created using the default Windows backup tool - I have a .bkf file on an external hard drive that I am trying to restore from. I've replaced the hard drive.

    My home machine is running Windows 7 64-bit and I'm trying to use it as a platform to restore to the other disk. I downloaded the Windows 7 NT Backup restore utility; however, no matter what I do it restores to my C drive rather than the specified drive. Fortunately Windows 7 security settings prevented it from being a complete disaster - but still not a happy thing. I tried firing up an XP virtual machine: I can browse to the backups, but it says they are invalid and refuses to let me view them or continue with the restore. I tried installing XP to an extra hard drive on my machine - however, it bluescreens on me during the install process, and I cry. I tried installing XP Pro to the new drive and attempted to restore over it; it of course blackscreened on me, as that was a stupid idea. I made two partitions on the new hard drive (apparently the BIOS on this accursed piece of junk doesn't allow partitions larger than 200 GB anyway), and the install fails 40 minutes in with an ever-descriptive "Disk Read Error". Guess how I spent last weekend?

    My last idea was to install XP Pro to the second partition and then use it to restore from backup to the first. After the first restart it gives me the error "Windows could not start because of a computer disk hardware configuration problem. Could not read from the selected boot disk. Check boot path and disk hardware". My brain made one of those bad-hard-drive clicky noises. I've tried several boot disks but they don't seem to work; if anyone has a link to a good one it would be greatly appreciated. Anyone have any more ideas? I really hate asking about what seems like such a simple issue, but I am quite literally at my wit's end. Thanks - and sorry for the really long post.

    Read the article

  • Windows 2003 Server Caching

    - by pablomedok
    We're experiencing almost daily table index corruption on Windows Server 2003. We are running an old application which uses DBF/CDX tables. Everything was fine for ages, but six months after we installed Advantage Database Server (which gives our website access to some of the tables) we started to get index corruption problems, and we don't know what to blame.

    We've tried to exclude all possible causes of this corruption. All users now work in terminal mode, so network problems can't cause it, and OpLocks can't be the reason either. We changed hardware, network cards and switches, reinstalled the server, and even moved to a new dedicated server. The only thing we can't exclude is ADS - because it has to keep working.

    Is it possible that local read/write caching causes the problem? E.g. one user or process uses cached data, later another user/process changes it, and later the first user changes it again without knowing about the intervening change. Is that possible in theory? Is it possible that this problem is caused by improper file server or caching settings? Is it possible that normal users use non-cached data while ADS uses cached data, or vice versa? Is it possible that each terminal user has their own cache? Or maybe the problem is RAID caching somehow interfering with Windows Server caching? Or maybe there are special Windows Server settings for working with DBF tables that are written simultaneously by several terminal users? Maybe there is a way to turn off caching for certain files so we can check?

    Sometimes we get an index crash twice a day; sometimes everything is fine for five days in a row. Today only one user was working with the database in the evening (usually there are 30-50 users working simultaneously during working hours), so the load on the server was almost zero. Synchronization with the website is performed every 5 minutes during work hours and every 15 minutes in the evening and on the weekend. We've done file access auditing and it shows that during website synchronizations the ADS server opens the table and index files for ReadEA and WriteEA even though it performs only SELECT queries. ADS does run UPDATE/INSERT queries, but less frequently - not during the regular synchronizations, only when an order is placed by a website visitor.

    Please help me. We have been struggling with this problem for almost a year and still can't find any pattern or any clue. Here is my previous question about this issue on DBA: http://dba.stackexchange.com/questions/8646/foxpro-dbf-index-corruption

    Read the article

  • Postgres - could not create any TCP/IP sockets

    - by Jacka
    I'm running a Rails app in development with PostgreSQL 9.3. When I tried to start the Passenger server today, I got:

      PG::ConnectionBad - could not connect to server: Connection refused
      Is the server running on host "localhost" (217.74.65.145) and accepting TCP/IP connections on port 5432?

    No big deal, I thought; that has happened before, and restarting Postgres always solved the problem. So I ran sudo service postgresql restart and got:

      * Restarting PostgreSQL 9.3 database server
      * The PostgreSQL server failed to start. Please check the log output:
      2014-06-11 10:32:41 CEST LOG: could not bind IPv4 socket: Cannot assign requested address
      2014-06-11 10:32:41 CEST HINT: Is another postmaster already running on port 5432? If not, wait a few seconds and retry.
      2014-06-11 10:32:41 CEST WARNING: could not create listen socket for "localhost"
      2014-06-11 10:32:41 CEST FATAL: could not create any TCP/IP sockets
      ...fail!

    My postgresql.conf uses the defaults: localhost and port 5432. I tried changing the port, but the error message is the same (apart from the port change). Both ps aux | grep postgresql and ps aux | grep postmaster return nothing.

    EDIT: In postgresql.conf I changed listen_addresses to 127.0.0.1 instead of localhost and that did the trick; the server restarted. I also had to edit my applications' db config to point to 127.0.0.1 instead of localhost. However, the question now is: why is localhost considered to be 217.74.65.145 and not 127.0.0.1? This is my /etc/hosts:

      127.0.0.1 local
      127.0.1.1 jacek-X501A1
      127.0.0.1 something.name.non.example.com
      127.0.0.1 company.something.name.non.example.com
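
    A hedged observation rather than a definitive answer: the /etc/hosts shown above has no entry for the name "localhost" itself (only "local" and the host name), so the resolver most likely falls through to DNS, where an ISP or wildcard record returns 217.74.65.145. A minimal sketch of the conventional first lines, assuming nothing else on the machine depends on the "local" alias, would be:

      127.0.0.1   localhost local
      127.0.1.1   jacek-X501A1
      # project-specific aliases unchanged below
      127.0.0.1   something.name.non.example.com
      127.0.0.1   company.something.name.non.example.com

    With a localhost line in place, listen_addresses = 'localhost' in postgresql.conf should resolve to 127.0.0.1 again.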

    Read the article

  • Windows telling me the Local Security Authority is internally inconsistent when mounting a network drive

    - by acme
    For ages I've had a network share (via Samba to a Linux machine) mapped to a drive letter in Windows 7 so I can access it easily. This worked flawlessly so far. Until now. Suddenly I couldn't access the drive anymore: Windows was telling me the network name (I don't remember the exact term) was already in use. So I disconnected and tried to connect again:

      net use Y: \\10.10.10.208\work

    After a long time I get a message saying "The Local Security Authority (LSA) database contains an internal inconsistency". A restart didn't help. The mapped share itself is accessible (it works from other machines on the same network), so obviously something strange is going on on my machine. Can anyone tell me how I can fix this inconsistency?

    Update: All machines that have the login information saved fail with this error, so it must be something to do with the authentication. When I use

      net use Y: \\10.10.10.208\work /user:raphael

    it prompts me for the password and then returns the same error message.
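
    Since the update points at saved credentials, one hedged first step (an assumption on my part, not a confirmed fix for an LSA database error) is to inspect and clear the cached entry for that server with the built-in Credential Manager command-line tool, then map the drive again:

      rem list all saved credentials and look for an entry for 10.10.10.208
      cmdkey /list
      rem remove the stale entry (it may also appear as a TERMSRV/ or Domain: target)
      cmdkey /delete:10.10.10.208
      rem drop the half-mapped drive and reconnect with explicit credentials
      net use Y: /delete
      net use Y: \\10.10.10.208\work /user:raphael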

    Read the article

  • PowerPoint missing from DCOM config

    - by Paul Prewett
    I have an application that automates the creation of PowerPoint files in an ASP.NET environment. This requires that I install PowerPoint on the server and also set permissions in the DCOM configuration snap-in (dcomcnfg) to give the launching user ([DOMAIN]\ASPNET in this case) permission to run the application. I have this set up and running successfully on several Win2k3 machines.

    I am configuring my first Win2k8 machine, and after installing PowerPoint on the server, the "Microsoft Powerpoint Presentation" node in DCOM config is not showing up. Other installed Office apps are showing (Excel, Graph, etc.), just not PowerPoint. So when I attempt to run the application, I get an "Access denied" error, which is exactly what I would expect: the user doesn't have permission, therefore access is denied. The specific error log entry is:

      The machine-default permission settings do not grant Local Activation permission for the COM Server application with CLSID {91493441-5A91-11CF-8700-00AA0060263B} to the user [DOMAIN]\ASPNET

    I searched the entire list for the CLSID too, thinking maybe the name wasn't loading properly. No dice. I also re-ran the Office setup program, thinking maybe there was some option I had unchecked in the custom setup, but I saw nothing that looked helpful. I'm flummoxed. Can anyone out there suggest something to help me get PowerPoint to show up in the list of DCOM applications? Many thanks.
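
    One possibility worth ruling out (an assumption based on the symptoms, not something confirmed in the question): if the Office installation is 32-bit and the Win2k8 box is 64-bit, 32-bit COM applications do not appear in the default 64-bit DCOM Config view at all. Launching the 32-bit Component Services console should reveal the node so its launch/activation permissions can be granted:

      rem open the 32-bit Component Services / DCOM Config view on a 64-bit server
      mmc comexp.msc /32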

    Read the article

  • Dell VRTX - slow cluster shared storage

    - by NorbyTheGeek
    I have a brand new Dell VRTX box set up as a failover cluster running HA Hyper-V virtual machines. This is my first time setting up clustering, and my first time with one of these boxes, so I'm sure I've missed something. The virtual machines are experiencing high disk latency and bad performance when accessing their VHD(X) files located on a Cluster Shared Volume.

    The VRTX has 10 x 900 GB 10K SAS drives in a RAID 6 configuration and the redundant Shared PERC 8 controllers. Both blades have full access to the virtual disks. There are two M520 blades installed, each with 128 GB RAM. MPIO is configured for the PERC 8 controllers. The operating system on the blades is Server 2012 (NOT R2). The RAID 6 array is split into a small (8 GB) volume for the cluster quorum witness and a large (6.5 TB) volume for a Cluster Shared Volume (mounted on the nodes as C:\ClusterStorage\Volume1).

    An example of slow disk access: logging into a Server 2012 VM and having Server Manager come up automatically. Disk access goes to 100%, with write speeds of 20 MB/s or so, read speeds of 500 KB/s or so, and an Average Response Time of over 1000 ms, sometimes spiking at 4000-5000 ms. It's the latency that really worries me. Is there something specific I should look at in my configuration? It doesn't seem to matter whether I use VHD or VHDX, dynamic or static.

    Read the article

  • How can I set IIS Application Pool recycle times without resorting to the ugly syntax of Add-WebConfiguration?

    - by ObligatoryMoniker
    I have been scripting the configuration of our IIS 7.5 instance, and through bits and pieces of other people's scripts I have come up with a syntax that I like:

      $WebAppPoolUserName = "domain\user"
      $WebAppPoolPassword = "password"
      $WebAppPoolNames = @("Test","Test2")
      ForEach ($WebAppPoolName in $WebAppPoolNames) {
          $WebAppPool = New-WebAppPool -Name $WebAppPoolName
          $WebAppPool.processModel.identityType = "SpecificUser"
          $WebAppPool.processModel.username = $WebAppPoolUserName
          $WebAppPool.processModel.password = $WebAppPoolPassword
          $WebAppPool.managedPipelineMode = "Classic"
          $WebAppPool.managedRuntimeVersion = "v4.0"
          $WebAppPool | Set-Item
      }

    I have seen this done a number of different ways that are less terse, and I like the way this syntax of setting object properties looks compared to something like what I see on TechNet:

      Set-ItemProperty 'IIS:\AppPools\DemoPool' -Name recycling.periodicRestart.requests -Value 100000

    One thing I haven't been able to figure out, though, is how to set up recycle schedules using this syntax. This command sets ApplicationPoolDefaults but is ugly:

      add-webconfiguration system.applicationHost/applicationPools/applicationPoolDefaults/recycling/periodicRestart/schedule -value (New-TimeSpan -h 1 -m 30)

    I have done this in the past through appcmd using something like the following, but I would really like to do all of this through PowerShell:

      %appcmd% set apppool "BusinessUserApps" /+recycling.periodicRestart.schedule.[value='01:00:00']

    I have tried:

      $WebAppPool.recycling.periodicRestart.schedule = (New-TimeSpan -h 1 -m 30)

    This has the odd effect of turning the .schedule property into a TimeSpan until I use $WebAppPool = Get-Item IIS:\AppPools\AppPoolName to refresh the variable. There is also $WebAppPool.recycling.periodicRestart.schedule.Collection, but there is no Add() function on the collection and I haven't found any other way to modify it. Does anyone know of a way I can set scheduled recycle times using syntax consistent with the code I have written above?
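
    For what it's worth, a hedged sketch that stays close to the provider-style syntax above: the schedule collection can reportedly be appended to by writing an item property whose value is a hashtable with a 'value' key. I haven't verified this on every IIS 7.5 build, so treat the property path and the hashtable form as assumptions to test:

      # append scheduled recycle times to existing pools (sketch, not verified on all builds)
      Import-Module WebAdministration
      $WebAppPoolNames = @("Test","Test2")
      ForEach ($WebAppPoolName in $WebAppPoolNames) {
          # each call adds an <add value="hh:mm:ss" /> entry to recycling/periodicRestart/schedule
          New-ItemProperty -Path "IIS:\AppPools\$WebAppPoolName" `
              -Name recycling.periodicRestart.schedule -Value @{value = '01:30:00'}
      }

    Clear-ItemProperty with the same -Path and -Name is the commonly reported way to empty the collection before re-adding entries.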

    Read the article

  • Configuring external SMTP server on Azure VM - messages staying in queue

    - by Steph Locke
    I have an external SMTP provider, auth.smtp.1and1.co.uk, and I am trying to send SQL Server Reporting Services emails via it on a Windows 2012 Azure VM. It is configured correctly enough for emails to be generated, but I've either not configured or mis-configured something, because the emails then stay in the queue.

    Setup details - the SMTP virtual server is configured as follows:
      General: IP Address: fixed value
      Access: Access Control: Authentication: ticked Anonymous access
      Access: Connection Control: All except the list below (which is empty)
      Access: Relay restrictions: Only the list below (which contains 127.0.0.1), ticked 'allow all...' option
      Delivery: Outbound Security: Basic Authentication with username and password completed, ticked TLS encryption
      Delivery: Outbound connections: TCP port = 587
      Delivery: Advanced: FQDN = ServerName, smart host = auth.smtp.1and1.co.uk

    I then set the following SSRS rsreportserver.config values:

      <SMTPServer>100.92.192.3</SMTPServer>
      <SendUsing>2</SendUsing>
      <SMTPServerPickupDirectory>c:\inetpub\mailroot\pickup</SMTPServerPickupDirectory>
      <From>[email protected]</From>

    Tried so far:
    1) Turning the SMTP service off and on again (just in case)
    2) Running SMTPDiag, with no errors (also no emails)
    3) Turning off the firewall for the ports (and more generally, to see if it made a difference)
    4) Generating a message from PowerShell, which also ended up in the queue
    5) Adding 25 and 857 as endpoints
    6) Perusing the event log, where I found some warnings that appear to be about the recipient:
      Message delivery to the remote domain 'gmail.com' failed for the following reason: Unable to bind to the destination server in DNS.
      Message delivery to the host '212.227.15.179' failed while delivering to the remote domain 'gmail.com' for the following reason: The remote server did not respond to a connection attempt.
    7) Pinging, but this appears to be blocked on Azure
    8) More PowerShell sends against different server names (localhost, the box name, the internal IP used in the SMTP properties, 127.0.0.1) - none successful
    9) Adding a remote domain - no change

    Could anyone recommend what step 10 should be in fixing this issue, please?
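
    As a narrowing-down step (a sketch only - the addresses are placeholders, and it assumes the default PowerShell 3.0 on the Server 2012 VM), sending one message directly to the smart host and one through the local IIS SMTP service shows whether the problem is the relay to 1&1 or the local queue stage:

      # 1. straight to the provider on the submission port, authenticated over TLS
      $cred = Get-Credential   # the 1and1 mailbox credentials
      Send-MailMessage -SmtpServer 'auth.smtp.1and1.co.uk' -Port 587 -UseSsl -Credential $cred `
          -From 'sender@example.com' -To 'you@example.com' -Subject 'Relay test (direct)' -Body 'test'

      # 2. through the local SMTP virtual server that SSRS uses
      Send-MailMessage -SmtpServer '127.0.0.1' `
          -From 'sender@example.com' -To 'you@example.com' -Subject 'Relay test (local)' -Body 'test'

    If the first succeeds and the second sits in c:\inetpub\mailroot\queue, the smart-host/outbound-port settings on the virtual server are the place to keep digging; it is also worth confirming the outbound connection to the smart host really happens on 587 rather than 25, since outbound 25 is often filtered in cloud environments.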

    Read the article

  • net.tcp Listener Adapter and net.tcp Port Sharing Service not starting on reboot

    - by Peter K.
    I am using the net.tcp protocol for various web services. When I reboot my Windows 7 Ultimate (64-bit) MacBook Pro, the services never restart automatically, even though that is how they are set. The only relevant events I can see are in the System event log:

      Error 6/9/2011 19:47 Service Control Manager 7001 - The Net.Tcp Listener Adapter service depends on the Net.Tcp Port Sharing Service service which failed to start because of the following error: The service did not respond to the start or control request in a timely fashion.
      Error 6/9/2011 19:47 Service Control Manager 7000 - The Net.Tcp Port Sharing Service service failed to start due to the following error: The service did not respond to the start or control request in a timely fashion.
      Error 6/9/2011 19:47 Service Control Manager 7009 - A timeout was reached (30000 milliseconds) while waiting for the Net.Tcp Port Sharing Service service to connect.

    This post suggests that it's something else blocking the port (in the post it's the SCCM 2007 R3 client, which I don't use). What else could be the problem? If it's something else blocking the port, how do I figure out what? When I manually start the services, they start correctly. The dependencies are: Net.Tcp Port Sharing Service, then Net.Tcp Listener Adapter.

    Still no luck, but I think the problem might be that my network connection takes too long to come up. I put a custom view on the event log and found these items; the first in the series says: A timeout was reached (30000 milliseconds) while waiting for the Net.Tcp Port Sharing Service service to connect.
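
    Two hedged workarounds fit the "starts fine manually, just not at boot" pattern. The short service names below are the ones I recall for these two services, so verify them with sc query before relying on this:

      rem let both services start after the boot-critical services (and the network) are up
      sc config NetTcpPortSharing start= delayed-auto
      sc config NetTcpActivator start= delayed-auto

      rem alternatively, give slow-starting services more than 30000 ms before the SCM gives up:
      rem add a DWORD named ServicesPipeTimeout (value in milliseconds, e.g. 60000) under
      rem HKLM\SYSTEM\CurrentControlSet\Control and reboot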

    Read the article

  • Why is my mouse randomly deselecting and unclicking?

    - by Coldblackice
    (Windows 7 x64, Logitech MX1100 mouse) If I click/hold/ the mouse, like on the title bar of a folder to move it, or to select text, the mouse will randomly "unselect" it and then randomly reselect at another point in the movement. For example, if I were to start mouse-selecting the above paragraph, starting with "movement." and then moving backwards, it might select as far as "reselect", but then the selection would disappear, only to start selecting again from "will randomly". I realize this would sound like a clear-cut case for a hardware issue in the mouse button, but I've narrowed out that that's not the case. The problem doesn't happen if I drag-move/drag-select slowly. But I can make the problem very apparent if I click and drag something fast. For example, if I click and hold the title bar of a window, and then start quickly dragging it around in circles across my monitor, the window will get "dropped", and a new window will get picked up in the process. Additionally, if I right-click anywhere to get a context menu (in Windows, programs, anywhere/everywhere), and then relatively quickly press the left mouse button to select something on the context menu, the context menu will disappear as if I had clicked "through" it. I haven't had any driver changes, system updates, or significant software changes/updates/installations recently, that might be a precursor to this issue. Again, the oddity seems to be the "speed" of action. Another note -- it seems that "lag" has a bit to do with it. If I click and drag a window around quickly, it might start to "lag" a tad bit, like it's perhaps moving too fast for Windows to keep up with the refresh/redraw rate, and that's usually synonymous with this odd deselect bug happening. (Batteries fully charged, no damage or recent changes to mouse, no changes that might affect/block wireless communication)

    Read the article

  • Windows Server 2008 / SQL 2008 Licensing for Authenticated Web Application

    - by MikeM
    Hello, I'm trying to crunch some numbers to see what the software costs are for hosting an application we are developing. Users will not be anonymous - they will need to log in.

    SQL Server 2008: SQL Server licensing is easy - it will be licensed per processor. No real fuss there. The cost of CALs would be much higher for this number of users than the processor licenses.

    Windows Server 2008: This is where it gets trickier. We need to license the OS for both the web servers (there will be a couple) and the database servers (also a couple). The web servers could run on the Web Edition without a need for CALs, but if you continue reading, you will see that may not matter much, because I will likely need user CALs for each user anyway. We can't use the "External Connector" for any of the Windows licenses, because that doesn't cover customers who are paying to access a hosted application. We can't use the Web Edition for the SQL Servers, because that license only allows a database running on Web Edition to host data for the local web application (i.e. other web servers can't connect to it). So that leaves us with the "full" editions of Windows Server for the database server OS.

    I find this a little ridiculous, and I feel as though I must be missing something, but it looks to me like I will actually need to buy a CAL for every user who signs up to use our service. I feel like I'm missing something, because that means that for every user I have to shell out $40 for a CAL. That could be one or two years' worth of revenue from each user for an inexpensive service! Is there any way to serve a web application to authenticated users without paying for individual Windows Server CALs, if the web servers and SQL servers are separate boxes?

    Read the article

  • Launching a PHP daemon from an LSB init script w/ start-stop-daemon

    - by EvanK
    I'm writing an LSB init script (admittedly something I've never done from scratch) that launches a PHP script which daemonizes itself. The PHP script starts off like so:

      #!/usr/bin/env php
      <?php
      /* do some stuff */

    It's then started like so in the init script:

      # first line is args to start-stop-daemon, second line is args to php-script
      start-stop-daemon --start --exec /path/to/executable/php-script.php \
          -- --daemon --pid-file=$PIDFILE --other-php-script-args

    The --daemon flag causes the PHP script to detach and run as a daemon itself, rather than relying on start-stop-daemon to detach it. This is how the init script (tries to) stop it:

      start-stop-daemon --stop --oknodo --exec /path/to/executable/php-script.php \
          --pidfile $PIDFILE

    The problem is, when I try to stop via the init script, it gives me this:

      $ sudo /etc/init.d/my-lsb-init-script stop
       * Stopping My Project
      No /path/to/executable/php-script.php found running; none killed.
      ...done.

    A quick peek at ps tells me that, even though the PHP script itself is executable, it is running as php <script> rather than under the script name itself, which is keeping start-stop-daemon from seeing it. The PID file is even being generated, but start-stop-daemon seems to ignore it and tries to find and kill by process name instead.

      $ ps ax | grep '/path/to/executable/php-script.php'
       2505 pts/1 S 0:01 php /path/to/executable/php-script.php --daemon --pid-file /var/run/blah/blah.pid --other-php-script-args
       2507 pts/1 S 0:00 php /path/to/executable/php-script.php --daemon --pid-file /var/run/blah/blah.pid --other-php-script-args
       2508 pts/1 S 0:00 php /path/to/executable/php-script.php --daemon --pid-file /var/run/blah/blah.pid --other-php-script-args
       2509 pts/1 S 0:00 php /path/to/executable/php-script.php --daemon --pid-file /var/run/blah/blah.pid --other-php-script-args
       2518 pts/1 S 0:01 php /path/to/executable/php-script.php --daemon --pid-file /var/run/blah/blah.pid --other-php-script-args
      $ cat /var/run/blah/blah.pid
      2518

    Am I completely misunderstanding something here? Or is there an easy way to work around this?
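
    A hedged workaround for the stop case: because the running process image is php (the script is launched through env), --exec can never match it, so let the PID file do the work on its own. This is a sketch rather than a drop-in replacement for the whole init script:

      # stop purely by PID file; fall back to SIGKILL if the daemon ignores SIGTERM
      start-stop-daemon --stop --oknodo --pidfile "$PIDFILE" --retry TERM/30/KILL/5

    If matching by name is still wanted as a safety net, --name php would match the interpreter, but it would also match unrelated PHP processes, so the PID-file-only form seems safer here.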

    Read the article

  • phpMyAdmin: The additional features for working with linked tables have been deactivated.

    - by The Disintegrator
    I'm getting this error on the main page of phpMyAdmin (version 3.2.1deb1):

      The additional features for working with linked tables have been deactivated. To find out why click here.

    When I click the link I get this report:

      $cfg['Servers'][$i]['pmadb'] ... OK
      $cfg['Servers'][$i]['relation'] ... not OK [ Documentation ]
      General relation features: Disabled
      $cfg['Servers'][$i]['table_info'] ... not OK [ Documentation ]
      Display Features: Disabled
      $cfg['Servers'][$i]['table_coords'] ... not OK [ Documentation ]
      $cfg['Servers'][$i]['pdf_pages'] ... not OK [ Documentation ]
      Creation of PDFs: Disabled
      $cfg['Servers'][$i]['column_info'] ... not OK [ Documentation ]
      Displaying Column Comments: Disabled
      Bookmarked SQL query: Disabled
      Browser transformation: Disabled
      $cfg['Servers'][$i]['history'] ... not OK [ Documentation ]
      SQL history: Disabled
      $cfg['Servers'][$i]['designer_coords'] ... not OK [ Documentation ]
      Designer: Disabled

    I already used the script to create the tables, I assigned the permissions to the pma user, and everything is set in /etc/phpmyadmin/conf.inc.php, but it's still not working. The tables are empty; I assume they should contain something. I'm interested in the relation and history features. Obviously I have read the documentation. Maybe something else is unsetting those values? Any thoughts?
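
    For comparison, a sketch of the matching configuration-storage block. The table names here are the defaults from the create_tables.sql that ships with 3.2.x - an assumption, so they must match whatever the script actually created - and a control user with rights on that database has to exist. The directives only take effect after logging out of phpMyAdmin and back in:

      <?php
      // config storage settings for server $i (values are illustrative defaults)
      $cfg['Servers'][$i]['pmadb']           = 'phpmyadmin';
      $cfg['Servers'][$i]['controluser']     = 'pma';
      $cfg['Servers'][$i]['controlpass']     = 'secret';
      $cfg['Servers'][$i]['relation']        = 'pma_relation';
      $cfg['Servers'][$i]['table_info']      = 'pma_table_info';
      $cfg['Servers'][$i]['table_coords']    = 'pma_table_coords';
      $cfg['Servers'][$i]['pdf_pages']       = 'pma_pdf_pages';
      $cfg['Servers'][$i]['column_info']     = 'pma_column_info';
      $cfg['Servers'][$i]['history']         = 'pma_history';
      $cfg['Servers'][$i]['designer_coords'] = 'pma_designer_coords';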

    Read the article

  • Debian Apache2 and SSL

    - by Topher Fangio
    Hello all, I recently took over a server that is using Apache2 with SSL. I have set up a new server to which I am migrating all of the old websites so that we can more easily scale (it's a cloud server) and so that I can set everything up correctly (or at least with some sort of convention).

    I have read quite a few articles on setting up Apache2 and SSL with virtual hosts, but I'm a bit confused because all of the examples show three files and I only seem to have two. To compound the problem, they are all named differently (do the file extensions actually make a difference?). The examples show something to this effect:

      <VirtualHost X.X.X.X:443>
          ServerAlias something.mydomain.com
          ServerAdmin [email protected]
          DocumentRoot /var/www/project/client/site
          SSLEngine on
          SSLCertificateFile /etc/ssl/certs/mydomain-cert.pem
          SSLCertificateKeyFile /etc/ssl/private/mydomain-key.pem
          SSLCertificateChainFile /etc/ssl/certs/mydomain-ca.crt
      </VirtualHost>

    However, the files I have are:

      _.mydomain.com.crt
      gd_bundle.crt

    It is a wildcard certificate that we purchased through GoDaddy, I believe. I believe that the first file is the actual certificate file and gd_bundle.crt is the chain file, but that leaves me without a key file. There is also a random mydomain.csr file lying around on the old server, but it wasn't one of the files bundled with the download from GoDaddy, so I'm not really sure what it is.

    Any help in figuring out what I need to do would be greatly appreciated. I am a software developer, so I know my way around computers, but I have only dabbled in server setup/maintenance. Much thanks!
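
    A hedged reading of the situation: the .csr is the certificate signing request that was generated on the old server, and the private key (often a .key or -key.pem file) was created at the same time - GoDaddy never has it, which is why it isn't in the download. If that key can be found on the old server (frequently under /etc/ssl/private or the Apache config directory), the vhost would look roughly like the sketch below; the key path is an assumption, and if the key truly cannot be located the certificate has to be re-keyed with a new CSR.

      <VirtualHost *:443>
          ServerName www.mydomain.com
          DocumentRoot /var/www/mydomain

          SSLEngine on
          SSLCertificateFile      /etc/ssl/certs/_.mydomain.com.crt
          SSLCertificateKeyFile   /etc/ssl/private/mydomain.key
          SSLCertificateChainFile /etc/ssl/certs/gd_bundle.crt
      </VirtualHost>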

    Read the article

  • Replacing DropBox with: Amazon S3 + SSL + GPG/TrueCrypt + Mounting on OSX ??

    - by Matt Rogish
    So, right now we're using DropBox to share various data files around between approximately 10 Mac OS X systems. However, we already have an S3 account, and putting everyone on the lowest DropBox plan at $10/mo seems too expensive. We'd like to avoid any kind of local storage (sharing a disk on a desktop or something), since we're a geographically distributed team. So, I am contemplating something that would allow us to replace DropBox with our own home-grown solution. We are all fairly technical people and/or smart enough to follow some steps, so if it's not as "user friendly" as DropBox we're all comfortable with that.

    There are plenty of docs out there that have bits and pieces of what I want, but some of the tools don't seem to fit the requirements:
    1. Transport security via SSL to the bucket
    2. Encryption of bucket contents
    3. Bi-directional syncing

    Most of the scripts I can find on the internet use "duplicity", which appears to fail #1 (it doesn't look like duplicity supports SSL to S3 - the docs don't say, but the protocol looks like plain old http: http://www.nongnu.org/duplicity/duplicity.1.html#sect6). Many scripts use GPG to encrypt files. This seems like it could work; however, I have to make sure that each OS X client is able to use the same key to encrypt and decrypt files (key management is left to me). FTP and other client-based apps don't seem to support this at all. Finally, most of the scripts use one-way replication, e.g. using Amazon S3 as a simple backup store. As we'd be using Amazon S3 as the "repository", they fail this requirement too.

    Whew. So, I'd love a single tool that does all this, but after an exhaustive search I don't think one exists. In my mind, the magical tool would be some combination of TrueCrypt and rsync. I'd be happy just knowing which tools out there can fulfill my three requirements; after that I can stitch together the rest. Any thoughts? THANKS!

    Read the article

  • Can't access Dell BMC IPMI over IP

    - by Bobb
    I have a Dell R210 with an iDRAC BMC (the new name for the old BMC), which is an on-board feature with a shared NIC (I believe). The server is at a colocation facility and I didn't set it up before it was sent there, so I asked the remote hands to set up IPMI over IP. They enabled it, set the IP and everything. The IP is different from the main box IP. Also, the box is cabled on NIC1 and the BMC is supposed to share it (am I right?).

    I can see the new IP in the OpenManage Server Administrator (installed on the box). I tried the Supermicro IPMI tool, and I tried the Dell ipmish.exe command like this:

      ipmish -ip xxx -u root -p calvin sysinfo

    which gives: BMC is not detected.

    What could be wrong? Is there a diagnostics tool I can try? It must be something obvious - I have just never used things like this before.

    P.S. I read something about encryption keys in the Dell docs, but I understand that is for encrypted IPMI 2.0, and ipmish can use IPMI 1.5 without encryption.
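
    As a diagnostics suggestion (a sketch, assuming ipmitool is available on some machine that can reach the BMC's IP, and that the root/calvin defaults mentioned above are still set), querying the chassis over both the IPMI 2.0 and 1.5 transports quickly shows whether the LAN channel is answering at all:

      # IPMI 2.0 (lanplus) first, then fall back to IPMI 1.5 (lan)
      ipmitool -I lanplus -H <bmc-ip> -U root -P calvin chassis status
      ipmitool -I lan     -H <bmc-ip> -U root -P calvin chassis status

    If neither responds, the usual suspects are "IPMI over LAN" being disabled for the shared NIC1 port in the iDRAC/BMC settings, or the colocation switch filtering UDP port 623.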

    Read the article

  • Crazy problem with Nginx, PHP5-FPM on Ubuntu

    - by Emmanuel
    I've been trying to move a domain from shared hosting to my new VPS. Everything was working just fine, 100%, and then all of a sudden rewrites stopped working and pictures that should work started returning 404s. I've got no idea why, but for some reason on my site, http://www.onlythebible.com/, only the home page works; all the other pages depend on rewrites, which were working perfectly fine at one stage but all of a sudden stopped. Some of the pictures, like http://www.onlythebible.com/bgsPreview/Matthew-8.10.jpg, which doesn't use a rewrite, throw a 404.

    I'm almost certain it has nothing to do with the nginx configuration. I have a suspicion that it could be something to do with php5-fpm. The funny thing is, all of a sudden it started working again. And then an hour or so later it broke again, and it has now gone back to only displaying the home page, with all of the links (and some of the pictures) just showing 404s. Does anyone have an idea of what the problem might be? I'm pretty new to the whole Linux VPS thing, but this just seems very strange.

    Edit: Here's a line from the error log which might shed some light on the problem:

      2011/02/06 03:04:59 [error] 2873#0: *220 open() "/usr/local/nginx/html/bgsPreview/Matthew-8.10.jpg" failed (2: No such file or directory), client: 114.77.115.211, server: onlythebible.com, request: "GET /bgsPreview/Matthew-8.10.jpg HTTP/1.1", host: "www.onlythebible.com", referrer: "http://www.onlythebible.com/"

    I wonder why it's trying to find the file in /usr/local/nginx/html instead of the proper root, which is /var/www/ etc. Oh, and for some reason it's just started working again... for how long, I don't know. Another thing that was a bit weird: the pages on my website are pulled from a database, but when I edited the database, the pages didn't change. It's almost like they've been cached or something.
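
    A hedged pointer based only on that log line: nginx falls back to its compiled-in default root (/usr/local/nginx/html here) when the server or location block that matches a request has no root of its own, which would be consistent with a second, partially configured server block (or the stock default one) catching some of the traffic. A minimal sketch of the shape to check for - the paths and the PHP-FPM socket are assumptions, not the site's real config - is:

      server {
          listen 80;
          server_name onlythebible.com www.onlythebible.com;

          # declare root once at server level so every location, including image URLs, inherits it
          root /var/www/onlythebible.com;

          location / {
              try_files $uri $uri/ /index.php?$args;
          }

          location ~ \.php$ {
              include fastcgi_params;
              fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
              fastcgi_pass unix:/var/run/php5-fpm.sock;
          }
      }

    Checking that no other enabled server block (sites-enabled/default in particular) answers for the same server_name would also help explain why the behaviour comes and goes.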

    Read the article

  • LLBLGen Pro v3.5 has been released!

    - by FransBouma
    Last weekend we released LLBLGen Pro v3.5! Below is the list of what's new in this release. Of course, not everything is on this list, like the large amount of work we put into refactoring the runtime framework. The refactoring was necessary because our framework has two paradigms which were added to the framework at different times, and from a design perspective in the wrong order (the paradigm we added first, SelfServicing, should have been built on top of Adapter, the other paradigm, which was added more than a year after the first released version). The refactoring made sure the framework re-uses more code across the two paradigms (they already shared a lot of code) and is better prepared for the future. We're not done yet, but refactoring a massive framework like ours without breaking interfaces and existing applications is ... a bit of a challenge ;)

    To celebrate the release of v3.5, we give every customer a 30% discount! Use the coupon code NR1ORM with your order :)

    The full list of what's new:

    Designer
    - Rule based .NET Attribute definitions. It's now possible to specify a rule using fine-grained expressions with an attribute definition to define which elements of a given type will receive the attribute definition. Rules can be assigned to attribute definitions on the project level, to make it even easier to define attribute definitions in bulk for many elements in the project. More information...
    - Revamped Project Settings dialog. Multiple project related properties and settings dialogs have been merged into a single dialog called Project Settings, which makes it easier to configure the various settings related to project elements. It also makes it easier to find features previously not used by many (e.g. type conversions). More information...
    - Home tab with Quick Start Guides. To make new users feel right at home, we added a home tab with quick start guides which guide you through four main use cases of the designer.
    - System Type Converters. Many common conversions have been implemented by default in system type converters so users don't have to develop their own type converters anymore for these type conversions.
    - Bulk Element Setting Manipulator. To change setting values for multiple project elements, it was a little cumbersome to do that without a lot of clicking and opening various editors. This dialog makes changing settings for multiple elements very easy.
    - EDMX Importer. It's now possible to import entity model data information from an existing Entity Framework EDMX file.
    - Other changes and fixes. See the online documentation for the full list of changes and fixes.

    LLBLGen Pro Runtime Framework
    - WCF Data Services (OData) support has been added. It's now possible to use your LLBLGen Pro runtime framework powered domain layer in a WCF Data Services application using the VS.NET tools for WCF Data Services. WCF Data Services is a Microsoft technology for .NET 4 to expose your domain model using OData. More information...
    - New query specification and execution API: QuerySpec. QuerySpec is our new query specification and execution API as an alternative to Linq and our more low-level API. It's built, like our Linq provider, on top of our lower-level API. More information...
    - SQL Server 2012 support. The SQL Server DQE allows paging using the new SQL Server 2012 style. More information...
    - System Type converters. For a common set of types the LLBLGen Pro runtime framework contains built-in type conversions, so you don't need to write your own type converters anymore.
    - Public/NonPublic property support. It's now possible to mark a field / navigator as non-public, which is reflected in the runtime framework as an internal/friend property instead of a public property. This way you can hide properties from the public interface of a generated class and still access them through code added to the generated code base.
    - FULL JOIN support. It's now possible to perform FULL JOIN joins using the native query API and QuerySpec. It's left to the developer to check whether the target database supports FULL (OUTER) JOINs. Using a FULL JOIN with entity fetches is not recommended, and should only be used when neither participant in the join is the target of the fetch.
    - Dependency Injection Tracing. It's now possible to enable tracing on dependency injection. Enable tracing at level '4' on the traceswitch 'ORMGeneral'. This will emit trace information about which instance of which type got an instance of type T injected into property P.
    - Entity Instances in projections in Linq. It's now possible to return an entity instance in a custom Linq projection. It's now also possible to pass this instance to a method inside the query projection. Inheritance is fully supported in this construct.

    Entity Framework support
    - The Entity Framework has been updated in the recent year with code-first support and a new, simpler context API: DbContext (with DbSet). The amount of code to generate is smaller and the context simpler. LLBLGen Pro v3.5 comes with support for DbContext and DbSet and generates code which utilizes these new classes.

    NHibernate support
    - NHibernate v3.2+ built-in proxy factory factory support. By default the built-in ProxyFactoryFactory is selected.
    - FluentNHibernate Session Manager uses 1.2 syntax. Fluent NHibernate mappings generate a SessionManager which uses the v1.2 syntax for the ProxyFactoryFactory location.
    - Optionally emit schema / catalog name in mappings. Two settings have been added which allow the user to control whether the catalog name and/or schema name as known in the project in the designer is emitted into the mappings.

    Read the article

  • Web based KVM management for Ubuntu

    - by Tim
    Hey all, We've got a single Ubuntu 9.10 root server on which we want to run multiple KVM virtual machines. To administer these virtual machines I'd like a web based KVM management tool, but I don't know which one to choose from the list of tools mentioned on linux-kvm.org. I've used virsh & virt-manager on my desktop, but would like a web interface for the server. I tested ConVirt on my desktop, but it failed to pickup KVM machines from virsh / virt-manager, and I could not get KVM virtual machine import to work (only Xen). oVirt looks good, but I can't find out if and how I can install it on Ubuntu 9.10.. (And I'd really rather not waste another few days on testing stuff that might not work in the end.) Can anyone recommend any good web based KVM management tools that are easy to install on Ubuntu 9.10? I'm looking for something that will also allow me to run other services like apache and postgresql besides hosting virtual machines, so preferably fairly lightweight & no dedicated OS installs. We don't need any professional clustering / migration or anything, just something that will let us create, start, inspect, administer & stop virtual machines from a web page. Best regards, Tim

    Read the article

  • On Life and TechEd…

    - by MOSSLover
    I haven’t been writing here I know.  I am very sorry, but I am just too busy trying to make my personal life just as good as my SharePoint life.  So I was at TechEd again helping with the hands on labs, but I had a very long couple of weeks.  I will have a long week again.  I started out going to my friend, Randy Walker’s, wedding and I ended up at my grandmother’s condo in Fort Lauderdale.  It was a very trying week.  I had to drive 5 1/2 hours to the wedding from St. Louis and back.  So Randy is an awesome person.  He is a great guy and he was the first person ever to nominate me for MVP.  I knew it probably wasn’t going anywhere, but it was the thought that count.  I met Randy in 2008 at Tulsa Techfest.  We had fun jamming out to Rockband.  I knew he was good people back then.  He has let me vent and I have let him vent over countless personal problems.  He has always been a great friend.  So it was a no brainer when I decided to go to his wedding no matter how much driving or stress or lack of sleep it was worth it.  I am incredibly happy for him to finally find a diamond amongst a lot of coal.  To take part in his celebration was so awesome.  I thank him again for letting me participate in this ceremony. Now after Randy’s wedding I drove 5 1/2 hours landed in St. Louis, fed a cat an asthma pill hidden in wet cat food, and slept for 4 hours.  I immediately saw my best friend who dropped me off at the airport and proceeded to TechEd.  I slept 1 hour on each flight, then ended up working a 3 hour shift at TechEd.  The rest of the week was a haze of connecting with people and sleeping very little.  I got to see my friend Tasha and her husband Casey plus a billion other people.  It was a great week and then I got a call from my grandmother.  It turns out her husband was admitted to the hospital. My grandparents on my dad's side have been divorced since the 60s, which means I never got to see them together.  I always felt like they never cliqued.  When I was a kid we would always spend half our time in Chicago at grandma’s and half our time at grandpa’s houses.  We would hang out with my grandpa’s wife Bobbi and my grandma’s husband Leo.  My cousin’s always called Leo by Pappa and my brother and I would use Leo.  My cousins lived in Chicago up until my cousin Gavi was born then they moved to Philadelphia.  I remember complaining to my dad that we never visited anywhere cool just Chicago and Kansas City.  I also remember Leo teaching me and my brother, Sam, how to climb a tree and play tennis on the back of the apartment wall.  My grandfather’s was kind of stuffy and boring, but we always enjoyed my grandmother’s.  She had games and Disney Movies and toys.  Leo always made it a ton of fun to visit.  I’m not sure a lot of people knew this fact, but when I was 16 years old I saw my grandfather on my mother’s side slowly die of congestive heart failure in a nursing home.  When I was 18 I saw my grandmother on my mother’s side slowly die of an infection over a 30 day period.  I hate hospitals and I hate nursing homes, but sometimes you have to suck in your gut and be strong.  I tried to do that this weekend and I hope I did an ok job.  I really hope things get better, but if they don’t I really appreciate everything he has given me in life.  I am a terrible tennis player and I haven’t climbed a tree in ages, but they both encouraged me when encouragement from my parents was lacking.  
It was Father’s Day today and I have to at least pay tribute to Leo Morris, because he has been a good person to me all my life.  Technorati Tags: TechEd,Grandparents,Father's Day

    Read the article

  • On Contract Employment

    - by kerry
    I am going to post about something I don’t post about a lot, the business side of development.  Scott at the antipimp does a good job of explaining how contracts work from a business perspective.  I am going to give a view from the ground. First, a little background on myself.  I have recently taken a 6 month contract after about 8 years of fulltime employment.  I have 2 kids, and a stay at home wife.  I took this contract opportunity because I wanted to try it on for size.  I have always wondered whether I would like doing contracts over fulltime employment.  So, in keeping with the theme of this blog I will write this down now so that I may reference it later. ALL jobs are temporary! Right now you may not realize it, most people simply ignore it, but EVERY job is temporary.  Everyone should be planning for life after the money stops coming in.  Sadly, most people do not.  Contracting pushes this issue to the forefront, making you deal with it.  After a month on a contract, I am happy to say that I am saving more than I ever saved in a fulltime position.  Hopefully, I will be ready in case of an extended window of unemployment between contracts. Networking I find it extremely gratifying getting to know people.  It is especially beneficial when moving to a new city.  What better way to go out and meet people in your field than to work a few contracts?  6 months of working beside someone and you get to know them pretty well.  This is one of my favorite aspects. Technical Agility Moving between IS shops takes (or molds you into) a flexible person.  You have to be able to go in and hit the ground running.  This means you need to be able to sit down and start work on a large codebase working in a language that you may or may not have that much experience in.  It is also an excellent way to learn new languages and broaden your technical skill set.  I took my current position to learn Ruby.  A month ago, I had only used it in passing, but now I am using it every day.  It’s a tragedy in this field when people start coding for the joy and love of coding, then become deeply entrenched in their companies methods and technologies that it becomes a just a job. Less Stress I am not talking about the kind of stress you get from a jackass boss.  I am talking about the kind of stress I (or others) experience about planning and future proofing your code.  Not saying I stay up at night worrying whether we have done it right, if that code I wrote today is going to bite me later, but it still creeps around in the dark recesses of my mind.  Careful though, I am not suggesting you write sloppy code; just defer any large architectural or design decisions to the ‘code owners’. Flexible Scheduling It makes me very happy to be able to cut out a few hours early on a Friday (provided the work is done) and start the weekend off early by going to the pool, or taking the kids to the park.  Contracting provides you this opportunity (mileage may vary).  Most of your fulltime brethren will not care, they will be jealous that they’re corporate policy prevents them from doing the same.  However, you must be mindful of situations where this is not appropriate, and don’t over do it.  You are there to work after all. Affirmation of Need Have you ever been stuck in a job where you thought you were underpaid?  Have you ever been in a position where you felt like there was not enough workload for you?  This is not a problem for contractors.  
When you start a contract it is understood that you are needed, and the employer knows that you are happy with the terms. Contracting may not be for everyone.  But, if you develop a relationship with a good consulting firm, keep their clients happy, then they will keep you happy.  They want you to work almost as much as you do.  Just be sure and plan financially for any windows of unemployment.

    Read the article

  • most reliable linux terminal app / general procedures for process stability

    - by intuited
    I've been using konsole (KDE 4.2) for a while now but it crashed recently. Konsole is efficiently designed to use one instance for all of the windows for your entire X session. Extra-unfortunately, because of this ingenuity the crash brought down all the humpty-dumptys and their bashes and their bashes' applications and all the begattens' begattens all the way down to Jebodiah Springfield into one big flat nonexistent omelette. The fact that this app is capable of crashing under any circumstances is pretty disappointing. Although KDE 4.2 is not expected to be entirely stable -- and yes, I know, I should update my distro -- it's still a no-sell for me, since if at all possible, this sort of thing Shouldn't Happen to something that's likely to be a foundation for an entire working environment. Maybe this is arrogant and unrealistic, but if it's possible to have something more stable, I want it. So other than running under screen -- which is fun, nifty, and thus far flawless in its reliability, but which has some issues with not understanding certain keycodes -- I'm looking for ways to improve my environment's reliability. The most obvious strategy is to cast about for a more reliable console app. A standard featureset -- which to me includes tabbed windows, Unicode support, and a decent level of keyboard shortcut configuration -- is pretty much essential. I'm currently running gnome-terminal and roxterm, both of which have acceptable featuresets (pretty much identical, actually; I think rox is actually the superset), and neither of which have provided me with extensive, objective reliability data. Not that they were expected to. Other strategies are also welcome. Were I responding to this question I would perhaps suggest backgrounding critical tasks with & and/or disowning them so they don't come down with the global pandemic. And stuff like that.
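
    As a tiny sketch of the "background and disown" idea suggested above (long_running_task is a placeholder command, not anything from this post):

      # detach an already-started job from this shell's job table so a terminal crash can't HUP it
      long_running_task &
      disown -h %1

      # or launch it immune to hangups from the start, with output captured
      nohup long_running_task >/tmp/task.log 2>&1 &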

    Read the article

  • Why is usable RAM less than total RAM?

    - by D Connors
    My girlfriend bought a laptop last week. It's a Core 2 Duo with 4 GB of RAM. We installed Vista 64-bit, and one of the first things we did was right-click on "My Computer" to see the properties. Immediately we noticed something strange about her RAM; the line said:

      Installed memory (RAM): 4,00 GB (3,68 GB usable)

    I told her not to worry, thinking it must be something about the laptop hardware (considering her Vista installation came from the same DVD as mine, and I never noticed anything like that on my 4 GB desktop). One hour ago, it got worse. We looked at Properties again, and it now says:

      Installed memory (RAM): 4,00 GB (2,98 GB usable)

    What does that mean? Is that 1,02 GB missing or being used by the system?

    EDIT: There is a possibility that the system information is wrong. I just noticed that it reports an Intel T6500 processor, when it's actually a T6400. How can I find out how much RAM is really available to the system?

    EDIT2: Checking the Resource Monitor, it says 1003 MB are reserved for the hardware. Is that good or bad? Thanks

    Read the article
