Search Results

Search found 20785 results on 832 pages for 'idea'.


  • Performance degrades for more than 2 threads on Xeon X5355

    - by zoolii
    Hi All, I am writing an application using boost threads, with boost barriers to synchronize them. I have two machines to test the application on. Machine 1 is a Core 2 Duo (T8300, 2.4 GHz) machine (Windows XP Professional, 4 GB RAM), where I get the following performance figures:

        Number of threads: 1, TPS: 21
        Number of threads: 2, TPS: 35 (66% improvement)

    Further increases in the number of threads decrease the TPS, but that is understandable as the machine has only two cores. Machine 2 has two quad-core Xeon X5355 (2.66 GHz) CPUs (Windows Server 2003 with 4 GB RAM), so 8 effective cores:

        Number of threads: 1, TPS: 21
        Number of threads: 2, TPS: 27 (28% improvement)
        Number of threads: 4, TPS: 25
        Number of threads: 8, TPS: 24

    As you can see, performance degrades after 2 threads even though the machine has 8 cores. If the program had some bottleneck, the 2-thread case should have degraded as well. Any ideas? Explanations? Does the OS play some role in performance? It seems the Core 2 Duo scales better than the Xeon X5355, even though the Xeon has the higher clock speed. Thank you -Zoolii
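
    One way to separate "the hardware doesn't scale" from "the app doesn't scale" is to run an embarrassingly parallel CPU-bound load on the same box and watch throughput versus worker count. A throwaway sketch (workload and counts here are invented, and it sidesteps barriers entirely -- if this scales cleanly but the boost app doesn't, contention at the barrier or false sharing on shared data becomes the likelier suspect):

        # Hypothetical scaling check, not the original boost app: measure
        # throughput of an independent CPU-bound task as workers grow.
        import time
        from multiprocessing import Pool

        def work(_):
            # Stand-in for one "transaction": pure CPU, no shared state.
            return sum(i * i for i in range(200_000))

        if __name__ == "__main__":
            for workers in (1, 2, 4, 8):
                with Pool(workers) as pool:
                    start = time.time()
                    pool.map(work, range(200))
                    rate = 200 / (time.time() - start)
                print(f"{workers} workers: {rate:.0f} tasks/sec")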


  • Anonymous access to SMB share hosted on Server 2008 R2 Enterprise

    - by bwerks
    Hi all, first off, I have read through this post and a whole slew of non-SF posts which seem to address the same or a similar problem, but I was still unable to fix it. I've got three machines in this situation:

    - a domain-joined server running Server 2008 R2 Enterprise ("share server")
    - a domain-joined workstation running XP Pro SP3 ("workstation")
    - a domain-unjoined test server running Server 2003 R2 SP2 ("test server")

    The share server exposes a share on the network that the test server must access -- it's a Source/Symbol Server share for our debugging purposes. I believe Visual Studio simply accesses the share with its own credentials in this case, meaning that the share must be accessible anonymously, since the test server isn't joined to the domain and there's no opportunity to supply domain authentication. I've attempted a lot of things to avoid the authentication window when accessing the share:

    - Enabled the Guest account on the share server and given Guest full sharing/NTFS permissions on the share.
    - Given ANONYMOUS LOGON full sharing/NTFS permissions on the share.
    - Added my share to "Network access: Shares that can be accessed anonymously" in Local Security Policy.
    - Disabled "Network access: Restrict anonymous access to Named Pipes and Shares" in Local Security Policy.
    - Enabled "Network access: Let Everyone permissions apply to anonymous users" in Local Security Policy.
    - Added ANONYMOUS LOGON to "Access this computer from the network" in Local Security Policy.
    - Added the Guest account to "Access this computer from the network" in Local Security Policy.
    - Attempted to provision the share using the Share and Storage Management MMC snap-in.

    Unfortunately, when I attempt to access the share from the test server, I still see the prompt and am forced to enter "Guest" manually. I also tried this workflow using the local administrator account on the workstation, and the same thing happens both with and without XP Simple File Sharing enabled. Any idea why I'm getting these results, or what I should have done differently?


  • javac compiler throwing error in CentOS 5.7

    - by Julio Menendez
    I'm trying to install Red5 on a VPS running CentOS 5.7 at MediaTemple, using this how-to: (dv):Install Red5 Media Server. On step 7 I get this error:

        BUILD FAILED
        /usr/local/red5/build.xml:217: The following error occurred while executing this line:
        /usr/local/red5/build.xml:238: Error running /usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/bin/javac compiler

    Any idea how to fix it? I've already Googled it, and several users have hit this same issue, but none of them posted a solution. UPDATE: some more details: running ant -v dist shows that it is a memory problem:

        Caused by: java.io.IOException: Cannot run program "/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/bin/javac": java.io.IOException: error=12, Cannot allocate memory
            at java.lang.ProcessBuilder.start(ProcessBuilder.java:475)
            at java.lang.Runtime.exec(Runtime.java:610)
            at org.apache.tools.ant.taskdefs.Execute$Java13CommandLauncher.exec(Execute.java:862)
            at org.apache.tools.ant.taskdefs.Execute.launch(Execute.java:481)
            at org.apache.tools.ant.taskdefs.Execute.execute(Execute.java:495)
            at org.apache.tools.ant.taskdefs.compilers.DefaultCompilerAdapter.executeExternalCompile(DefaultCompilerAdapter.java:522)
            ... 32 more
        Caused by: java.io.IOException: java.io.IOException: error=12, Cannot allocate memory
            at java.lang.UNIXProcess.<init>(UNIXProcess.java:164)
            at java.lang.ProcessImpl.start(ProcessImpl.java:81)
            at java.lang.ProcessBuilder.start(ProcessBuilder.java:468)
            ... 37 more

    I also defined _JAVA_OPTIONS="-Xms64m -Xmx128m" and exported ANT_OPTIONS=-Xmx128m, and I tried with -Xmx512m in both cases; no luck. Thanks!


  • Processing-time billing in Amazon EC2

    - by Rafael Almeida
    Hi all! I think my question is fairly basic, but I would like a clarification: on the AWS pricing page we can see that Amazon charges around $0.10 per instance-hour. I've seen in a blog post somewhere (can't remember where exactly, and even if I did I think it was in Portuguese anyway) that this way your minimum monthly payment would be $72 (= $0.10/hour × 24 hours × 30 days). Is this correct? (I don't think it is!) My understanding is that this 'virtual computing time' is only counted when your machine is actually doing something (serving pages, serving the admin via ssh, whatever), so real billable usage would be less than 720 hours/month in most webserver scenarios. Is my view correct? If it is, it leads me to another question: is it economically interesting to buy access to one of these instances for testing? I mean, would I have the 'freedom' to 'forget' about it for a month and receive a very-close-to-zero (as in, a few cents) bill? Do you do it / know of anybody who does? Any thoughts on the matter (as in, "yes, it's a good idea", or "yes, but there's this gotcha: ...", or "no, nobody does it because of ...")? PS: sorry for the long question text. I highlighted the main questions for easy viewing. Also, I'm not sure if this is actually more than one question, so sorry about that too! Thanks in advance!
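
    For reference, the arithmetic behind that $72 figure -- and note that, as far as I understand EC2's model, the meter runs for every hour an instance is in the running state, idle or not; it only stops when the instance is stopped or terminated:

        # Back-of-envelope EC2 cost: billed per instance-hour while the
        # instance is running, regardless of whether it serves traffic.
        RATE = 0.10  # $/instance-hour, the figure quoted above

        always_on = RATE * 24 * 30
        print(f"left running all month: ${always_on:.2f}")  # $72.00

        # Stopping the instance when not testing means paying only for
        # the hours it was actually up, e.g. 2 hours/day:
        part_time = RATE * 2 * 30
        print(f"2 h/day of testing:     ${part_time:.2f}")  # $6.00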


  • FTP not listing directory NcFTP PASV

    - by Jacob Talbot
    I am attempting to set up Multicraft on my server. All is running smoothly, except that the FTP won't allow anyone to connect from a remote FTP client, whereas net2ftp works smoothly from a remote location. I have included the transcript from my FTP client, Transmit, below to give you an idea of what's going on. I have also disabled iptables, and still no luck either way.

        Transmit 4.1.7 (x86_64) Session Transcript [Version 10.8.2 (Build 12C54)] (21/10/12 11:23 PM)
        LibNcFTP 3.2.3 (July 23, 2009) compiled for UNIX
        220: Multicraft 1.7.1 FTP server
        Connected to ateam.bn-mc.net.
        Cmd: USER jacob.9
        331: Username ok, send password.
        Cmd: PASS xxxxxxxx
        230: Login successful
        Cmd: TYPE A
        200: Type set to: ASCII.
        Logged in to ateam.bn-mc.net as jacob.9.
        Cmd: SYST
        215: UNIX Type: L8
        Cmd: FEAT
        211: Features supported:
          EPRT EPSV MDTM MLSD MLST type*;perm*;size*;modify*;unique*;unix.mode;unix.uid;unix.gid;
          REST STREAM SIZE TVFS UTF8
        End FEAT.
        Cmd: OPTS UTF8 ON
        200: OK
        Cmd: PWD
        257: "/" is the current directory.
        Cmd: PASV
        Could not read reply from control connection -- timed out. (SReadline 1)
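
    The transcript hangs exactly at PASV, i.e. when the server has to hand out a second (data) port for the client to connect back to. A quick way to probe this from any remote machine is a few lines of Python -- host and credentials below are the placeholders from the transcript; if the passive listing times out but the active one works, the passive port range is being blocked or mangled somewhere (firewall/NAT), which fits the iptables suspicion:

        # Minimal passive-vs-active FTP probe (placeholder credentials).
        from ftplib import FTP

        ftp = FTP()
        ftp.connect("ateam.bn-mc.net", 21, timeout=15)
        ftp.login("jacob.9", "xxxxxxxx")

        ftp.set_pasv(True)            # passive: client opens the data connection
        try:
            ftp.retrlines("LIST")     # the step that times out above
        except Exception as e:
            print("PASV listing failed:", e)

        ftp.set_pasv(False)           # active: server connects back to the client
        ftp.retrlines("LIST")         # works only if this client isn't behind NAT
        ftp.quit()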


  • When did my LaTeX files become TeX files!?

    - by andrz_001
    After transferring all my files onto a new machine, all files that were once LaTeX files (having the TeXnicCenter "T" icon) are now TeX files (having the TeXworks icon -- I don't remember installing that one!). But... the files are still associated with TeXnicCenter! In other words, the files open with TeXnicCenter, yet the "type of file" is "TeX document". I'm using the same MiKTeX distribution and TeXnicCenter (both for Windows XP) as on the old machine. Again, I don't know how or where I got TeXworks on the new machine -- I assumed it came with Windows. A question: could these TeX associations cause me any new surprises, anything unexpected? Like certain packages not working, etc.? Because I can't have that!! Or should I not fix it if it ain't broken!? For instance, one particular file yesterday would not produce any PDF output after compiling... after trying many things (other files compiled as they should have), I got to thinking to try it without setspace for double spacing!! And it worked. I have no idea why. Anyway.... Real question: how do I revert to LaTeX file associations? Rhetorical question: would this question have better luck on CTAN or Stack Overflow? I guess I'll find out! Thank you wholeheartedly!
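
    For what it's worth, recent MiKTeX distributions bundle TeXworks (it isn't part of Windows), which would explain where the icon came from. On the "how to revert" front, XP file types are just a registry mapping from the extension to a ProgID; a read-only sketch to see who currently owns .tex before changing anything (Windows-only, standard library):

        # Inspect which ProgID owns the .tex extension (read-only).
        import winreg

        with winreg.OpenKey(winreg.HKEY_CLASSES_ROOT, ".tex") as key:
            progid, _ = winreg.QueryValueEx(key, "")
        print(".tex is associated with:", progid)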


  • How to configure IIS 7.5 to allow special chars in Url for ASP.NET 3.5?

    - by Sebastian P.R. Gingter
    I'm trying to configure my IIS 7.5 to allow special chars in the URL for ASP.NET. This is important to support widespread legacy URLs on a new system. Sample URL:

        http://mydomain.com/FileWith%inTheName.html

    The percent sign would be encoded in the URL, so it is requested as:

        http://mydomain.com/FileWith%25inTheName.html

    This simply works when creating a new web in IIS 7.5, placing a file with the percentage sign in its name in the web root, and pointing the browser to it. It does not work, however, when the web site is an ASP.NET application. ASP.NET always returns a 400.0 - Bad Request error in the WindowsAuthentication module from the StaticFile handler when pointing to that URL. It does, however, display the requested URL correctly and also resolves it to the correct physical file (the 'Physical Path' field on the server error page points to the physically available file). There are hints on how to enable this, so I followed the instructions on these websites step by step:

        http://dirk.net/2008/06/09/ampersand-the-request-url-in-iis7/
        http://adorr.net/2010/01/configure-iis-to-accept-url-with-special-characters.html

    The second one actually sums up the information from the first post and adds some more about x64 systems (we're running x64) and an additional web.config change. I tried all that, and still can't get this running from an ASP.NET web application. And yes: I rebooted after applying the registry changes. So, what do I have to do, in addition to the settings described in the posts above, to support legacy URLs that contain percentage characters? Additional info: the application pool mode is Integrated. Bump after some days -- no ideas, anyone?
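
    For clarity, the escaping round-trip I'm relying on: a literal % in a filename must travel as %25 on the wire, and the server is expected to decode it back (double decoding is exactly what IIS's request filtering is suspicious of):

        # The percent sign encodes to %25 and must round-trip cleanly.
        from urllib.parse import quote, unquote

        name = "FileWith%inTheName.html"
        print(quote(name))                           # FileWith%25inTheName.html
        print(unquote("FileWith%25inTheName.html"))  # FileWith%inTheName.html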


  • NoMachine NX window closes after establishing connection

    - by blackicecube
    I am trying to use the NoMachine NX server and client, but somehow it doesn't work. What happens is the following:

    1. The client starts up.
    2. The client authenticates with the server.
    3. The NoMachine window appears for 2-4 seconds.
    4. The NoMachine window exits.

    Somehow a "closeEvent" is sent. Here's what I see in the log file:

        [Thu Sep 24 11:20:37 2009]: Starting nxcomp with options: 'NX 299 Switch connection to: NX mode: unencrypted options: nx/nx,options=/home/foo/.nx/S-adnws029-1022-7EEF1367361DB2A7F4D9F76B06F4B434/options:1022'.
        [Thu Sep 24 11:20:38 2009]: NXFileMonitor::readData
        [Thu Sep 24 11:20:38 2009]: NXFileMonitor: opened file: [/home/foo/.nx/S-adnws029-1022-7EEF1367361DB2A7F4D9F76B06F4B434/session]
        [Thu Sep 24 11:20:38 2009]: LoginDialog::ShowConnectionStatus code=[246] str=[Initializing X protocol compression] error=[0]
        [Thu Sep 24 11:20:38 2009]: ProgressDialog::printNxStatus: [Initializing X protocol compression]
        [Thu Sep 24 11:20:38 2009]: LoginDialog::ShowConnectionStatus code=[247] str=[Established the display connection] error=[0]
        [Thu Sep 24 11:20:38 2009]: ProgressDialog::printNxStatus: [Established the display connection]
        [Thu Sep 24 11:20:38 2009]: NXFileMonitor::readData
        [Thu Sep 24 11:20:38 2009]: NXFileMonitor::readData
        [Thu Sep 24 11:20:38 2009]: NXFileMonitor::readData
        [Thu Sep 24 11:20:38 2009]: LoginDialog: slotAgentTimer
        [Thu Sep 24 11:20:38 2009]: NXFileMonitor::readData
        [Thu Sep 24 11:20:38 2009]: NXFileMonitor::readData
        [Thu Sep 24 11:20:38 2009]: NXFileMonitor::readData
        [Thu Sep 24 11:20:38 2009]: NXFileMonitor::readData
        [Thu Sep 24 11:20:38 2009]: QClipboard: Unknown SelectionClear event received.
        [Thu Sep 24 11:20:38 2009]: LoginDialog: slotAgentTimer
        [Thu Sep 24 11:20:38 2009]: LoginDialog: Agent found closing windows...
        [Thu Sep 24 11:20:38 2009]: LoginDialog: setting automatic reconnection to true.
        [Thu Sep 24 11:20:38 2009]: Settings::flush
        [Thu Sep 24 11:20:38 2009]: Settings::flush
        [Thu Sep 24 11:20:38 2009]: LoginDialog: closeEvent received!
        [Thu Sep 24 11:20:38 2009]: NXFileMonitor::readData
        [Thu Sep 24 11:20:38 2009]: NXFileMonitor::readData
        [Thu Sep 24 11:20:38 2009]: LoginDialog::destructor called begin
        [Thu Sep 24 11:20:38 2009]: LoginDialog: stopAllTimers
        [Thu Sep 24 11:20:38 2009]: LoginDialog: stopProgressTimer
        [Thu Sep 24 11:20:38 2009]: Utility::getPreferencesFile: 'nxclient' - '/home/foo/.nx/config/nxclient.cfg'
        [Thu Sep 24 11:20:38 2009]: Settings::flush
        [Thu Sep 24 11:20:38 2009]: Called destructor for protocol class
        [Thu Sep 24 11:20:38 2009]: LoginDialog::destructor called end

    Anyone with a helpful idea?


  • Bandwidth Suggestion

    - by Campo
    I have been asked to analyze the bandwidth usage of a company and make a recommendation for upgrading their Internet connection(s). Here is the layout:

    - 3 DSL lines, 3 × (6 Mbps down / 1 Mbps up each), into a load balancer and out to the office's network.
    - 30 VoIP phones running on a T1 (1.5 Mbps down / 1.5 Mbps up).

    The users at the company upload heavily. My suspicion is that the slowdown is caused by multiple people uploading at once, leaving others unable to get even simple HTTP requests out. My initial idea is to get them a fiber line with 10 Mbps down and 10 Mbps up. What do others think of this plan? Will that be enough to carry their network traffic? And what do I do about the VoIP line afterward? The fiber is expensive, and I know the T1 does a great job for their VoIP, so I do not want to suggest a DSL line that may not be sufficient. I would also like to save them some money if I can -- maybe even get a faster fiber line and forgo the T1, though I know their load balancer/switch can only handle 20 MB/s of throughput. I'm looking for some confirmation/suggestions on this plan. I am planning to go in and get some real diagnostic numbers. Any suggestions on software to use for that? Preferably Windows software.
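
    To put rough numbers on the upload pressure (figures from the layout above; the per-call VoIP bandwidth is my assumption, ~100 kbps for G.711 with packet overhead). One subtlety worth remembering: with a per-connection load balancer, a single TCP upload rides one DSL line, so any one transfer is capped at 1 Mbps no matter how idle the other two lines are.

        # Rough capacity check for the layout described above.
        dsl_up   = 3 * 1.0     # Mbps aggregate upload across the balancer
        dsl_down = 3 * 6.0     # Mbps aggregate download
        t1       = 1.5         # Mbps each way for VoIP
        per_call = 0.1         # Mbps per G.711 call -- my assumption

        print(f"aggregate upload:   {dsl_up:.1f} Mbps (1 Mbps max per connection)")
        print(f"aggregate download: {dsl_down:.1f} Mbps")
        print(f"VoIP worst case:    {30 * per_call:.1f} Mbps if all 30 phones "
              f"are off-hook at once, vs {t1} Mbps of T1")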


  • Mac updated just now, postgres now broken

    - by Dave
    I run PostgreSQL 9.1 / Ruby 1.9.2 / Rails 3.1.0 on a MacBook Air for local dev. It has all been running smoothly for months (though this is the first time I've done development on a Mac). It's a MacBook Air from last year, and today I got the OS X software update message as I have a few times before; my system downloaded approx. 450 MB of updates and restarted. It now says it's on OS X 10.7.3. The point is, PostgreSQL has stopped working. When I start my Thin server (mirroring Heroku Cedar) as normal and then browse to my Rails app, I get:

        PG::Error
        could not connect to server: Permission denied
        Is the server running locally and accepting
        connections on Unix domain socket "/var/pgsql_socket/.s.PGSQL.5432"?

    What happened? After browsing around a few questions I'm still confused, but here's some extra info:

    - Running psql from the command line gives the same error.
    - I can run pgAdmin 3, connect via it, and run SQL with no problems.
    - Running which psql shows /usr/bin/psql.
    - I created a PostgreSQL user back when I got the Mac (it's always been on Lion). I've no idea why; almost certainly I was following a tutorial which I neglected to store in my notes. The point is, I am aware there is a _postgres user as well. I know it's rubbish, but apart from a note on passwords I don't have any extra info on how I configured PostgreSQL -- though the obvious implication is that I did not use the _postgres user.

    Anyone have suggestions or information on what might have changed, or what I can try to debug and fix this? Thanks. Edit: playing around based on this question and answer (http://stackoverflow.com/questions/7975414/check-status-of-postgresql-server-mac-os-x), see this string of commands:

        $ sudo su postgreSQL
        bash-3.2$ /Library/PostgreSQL/9.1/bin/pg_ctl start -D /Library/PostgreSQL/9.1/data
        pg_ctl: another server might be running; trying to start server anyway
        server starting
        bash-3.2$ 2012-04-08 19:03:39 GMT FATAL: lock file "postmaster.pid" already exists
        2012-04-08 19:03:39 GMT HINT: Is another postmaster (PID 68) running in data directory "/Library/PostgreSQL/9.1/data"?
        bash-3.2$ exit
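
    The error message names a specific socket path, and that's a useful clue: the psql that Apple ships in /usr/bin (which "which psql" is finding) defaults to the /var/pgsql_socket directory, while an install under /Library/PostgreSQL typically listens on /tmp. A quick probe of which socket actually exists (these two paths are the usual suspects, not a certainty):

        # Which PostgreSQL unix socket actually exists on this machine?
        import os

        for path in ("/var/pgsql_socket/.s.PGSQL.5432", "/tmp/.s.PGSQL.5432"):
            print(path, "->", "present" if os.path.exists(path) else "missing")

        # If only /tmp has the socket, point clients there explicitly,
        # e.g. `psql -h /tmp` or `host: /tmp` in config/database.yml.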


  • Feasibility of Windows Server 2008 DFS replication over WAN link

    - by CesarGon
    We have just set up a WAN link that connects two buildings in our organisation. The link is provided by a 100-Mbps point-to-point line. We have a Windows Server 2008 R2 domain controller on each side of the link. Now we are planning to set up DFS for file services across the organisation. The estimated data volume is over 2 TB and will grow at approximately 20% annually. My idea is to set up a file server in each building and install DFS so that all the contents stay replicated over the 100-Mbps link. I hope this will ensure that any user is directed to the closest (and fastest) server when requesting a file from the DFS folders. My concern is whether a 100-Mbps WAN link is good enough to guarantee DFS replication. I've no experience with DFS, so any solid advice is welcome. The line is reliable (i.e. it doesn't drop often), and our data transfer tests show that a 5 MB/s transfer rate is easily achieved -- approximately 40% of the nominal bandwidth. I am also concerned about latency: how long will users have to wait to see a change on one side of the link after it has been made on the other side? My questions are:

    1. Is this link between networks a reliable infrastructure on which to set up DFS replication?
    2. What replication delays would be typical (seconds, minutes, hours, days)?
    3. Would you recommend that we go for DFS in this scenario, or is there a better alternative?

    Many thanks.
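
    A back-of-envelope using the measured 5 MB/s helps frame both concerns. The initial seeding dominates; after that, DFS-R's remote differential compression ships only changed blocks, so steady state is much lighter (the daily change rate below is my assumption, not a measurement):

        # Initial DFS-R seeding time at the measured rate, plus a
        # steady-state estimate (2%/day churn is an assumption).
        data_tb   = 2.0
        rate_MBps = 5.0

        seed_h = data_tb * 1024 * 1024 / rate_MBps / 3600
        print(f"initial seed: ~{seed_h:.0f} h ({seed_h / 24:.1f} days)")

        churn_gb = data_tb * 1024 * 0.02
        churn_h  = churn_gb * 1024 / rate_MBps / 3600
        print(f"daily churn:  ~{churn_gb:.0f} GB -> ~{churn_h:.1f} h/day of replication")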


  • Pros and Cons of a proxy/gateway server

    - by Curtis
    I'm working with a web app that uses two machines: a BSD server and a Windows 2000 server. When someone goes to our website, they connect to the BSD server, which, using Apache's proxy module, relays requests and responses between them and the web server on the Windows machine. The idea (designed and deployed about 9 years ago) was that it was more secure to expose the BSD server to outside connections than the Windows server running the web app. The BSD server is a bare-bones install with all unnecessary services and applications removed. These servers are about to be replaced, and the big question is: is a cut-down, bare-bones server still necessary for security in this setup? From my research online I don't see anyone else running a setup like this (at least, I don't see anyone questioning it). If they have a server between the user and the web app server(s), it is caching, compressing, and/or load balancing. Is there anything I'm overlooking by letting people connect directly from the internet** to a Windows 2008 R2 server that's running the web application?

    ** there's a good hardware firewall between us and the internet, with only minimal ports open

    Thank you.


  • How to Log Into a Web App Simultaneously with Different Account?

    - by Ngu Soon Hui
    I want to log into a web application using at least ten account names at a single point in time (I am not trying to do anything illegal, so don't worry). AFAIK, each tab in Chrome shares the same session, so on one machine Chrome can log in at most two accounts: one in normal mode, another in Incognito mode. Is there any way I can log into multiple accounts? I know I can open up IE and Firefox (and probably Safari, etc.) and log in with each, but this is not really scalable, as the number of web browsers is finite. Edit: my application is a localhost application; it resides on my computer. So a proxy may not be that useful, and you now probably understand why it's nothing illegal. Edit 2: CookieSwap seems like a good idea, but the problem is that once I swap the cookies, all the tabs and the whole Firefox application swap them as well. Can the swapping be done per tab or per application, so that on a dual monitor I can see the different logins side by side?
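
    If browser tricks run out, scripted sessions are another angle: every Session object below keeps its own cookie jar, so ten logins coexist without touching each other. A sketch against a hypothetical localhost app (URL and form fields are placeholders; requires the third-party requests package):

        # Ten isolated logins, one cookie jar per session.
        import requests  # pip install requests

        BASE = "http://localhost:8080"
        sessions = []
        for i in range(10):
            s = requests.Session()  # independent cookie jar
            s.post(f"{BASE}/login",
                   data={"user": f"account{i}", "password": "secret"})
            sessions.append(s)

        for i, s in enumerate(sessions):
            r = s.get(f"{BASE}/dashboard")  # sent with that session's cookie
            print(i, r.status_code)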


  • No sound through headset - only mic is working

    - by Kristis
    I noticed that no sound is being played through my headphones. The laptop has a Conexant sound card together with the sound apps provided. The thing is, I also noticed that instead of one playback device, two are now presented: speakers and headphones. While the speakers play sound nicely, not even test sounds are played through the headphone output. Also, the headphone output does not have a jack assigned to it, while the speakers have "L R Rear panel Analog Jack" (my laptop does not have a jack on the back -- only on the right). My headphones have a mic as well; when I plug the headset in, the mic works (using the top-panel digital jack), but the headphones themselves do not. The laptop does recognize when an audio device is plugged in, and I have checked the headset on other devices -- the headphones work. I have tried updating drivers, rolling back drivers, and completely uninstalling drivers and then restarting; nothing helped. I imagine I somehow need to reconfigure the jack assignment, but I have no idea how or where. Any suggestions? Thanks


  • Active Directory + IIS + SQL + ASP.NET

    - by Amira Elsayed Ismail
    I sent the following question to the Stack Overflow website:

    "I have installed Windows Server 2008 R2 on a virtual machine. Can I install Active Directory with a domain controller + IIS + SQL Server on the same machine? I want to build a web application that authenticates users against Active Directory. The web application should be published on the server's IIS, and users should access it remotely from home using my machine's domain name. Someone told me that it's very wrong to have IIS and Active Directory on the same machine."

    I got the following answer:

    "You can't use Active Directory over the internet. At least not without something like a VPN as a middle man. Their home computers will not be joined to the domain, so there is no pass-through authentication. Yes, it's a bad idea to put AD on the web server. Why is too complex to get into in an answer here. Suffice it to say that even if you did do this, it probably would not work the way you are thinking it should. It's not impossible to do this. For instance, many of the Microsoft 'Small Business' products put IIS, AD, and SQL Server on the same server. But you kind of have to know what you're doing to configure it securely."

    Then I added the following comment:

    "Thanks for your reply. So what do you think is the best way to do this, as I haven't done anything like this before? Should I install Active Directory on one machine and IIS on another? And what about SQL -- should I add it to the same server as Active Directory? I didn't mention that it will be a Microsoft Dynamics server that accesses some information about work, and I have to read data from Axapta as well. Also, what is a VPN, and how can I use it to let users access my web application from anywhere?"

    Sorry for my long questions, and thanks in advance. Please, if anyone can help, I will be thankful.


  • Hiera datatypes won't load in Puppet

    - by Cole Shores
    I have spent a couple of days on this, following the instructions at http://downloads.puppetlabs.com/docs/puppetmanual.pdf and even the Puppet Training "Advanced Puppet" manual. When I run a test against it, the results always come back as nil, and I'm not sure why. I am running Puppet 3.6.1 Community Edition, with Hiera 1.2.1, on SLES 11. My puppet.conf at /etc/puppet/puppet.conf consists of:

        [main]
        # The Puppet log directory.
        # The default value is '$vardir/log'.
        logdir = /var/log/puppet

        # Where Puppet PID files are kept.
        # The default value is '$vardir/run'.
        rundir = /var/run/puppet

        # Where SSL certificates are kept.
        # The default value is '$confdir/ssl'.
        ssldir = $vardir/ssl
        certificate_revocation = false

        [master]
        hiera_config=/etc/puppet/hiera.yaml
        reporturl = http://puppet2.vvmedia.com/reports/upload
        ssl_client_header = SSL_CLIENT_S_DN
        ssl_client_verify_header = SSL_CLIENT_VERIFY
        # certname = dev-puppetmaster2.vvmedia.com
        # ca_name = 'dev-puppetmaster2.vvmedia.com'
        # facts_terminus = rest
        # inventory_server = localhost
        # ca = false

        [agent]
        # The file in which puppetd stores a list of the classes
        # associated with the retrieved configuration. Can be loaded in
        # the separate ``puppet`` executable using the ``--loadclasses``
        # option.
        # The default value is '$confdir/classes.txt'.
        classfile = $vardir/classes.txt

        # Where puppetd caches the local configuration. An
        # extension indicating the cache format is added automatically.
        # The default value is '$confdir/localconfig'.
        localconfig = $vardir/localconfig

    My /etc/puppet/hiera.yaml consists of:

        :backends: yaml
        :yaml:
          :datadir: /etc/puppet/hieradata
        :hierarchy:
          - common
          - database

    I have a directory created at /etc/puppet/hieradata, and within it /etc/puppet/hieradata/common.yaml:

        :nameserver: ["dnsserverfoo1", "dnsserverfoo2"]
        :smtp_server: relay.internalfoo.com
        :syslog_server: syslogfoo.com
        :logstash_shipper: logstashfoo.com
        :syslog_backup_nfs: nfsfoo:/vol/logs
        :auth_method: ldap
        :manage_root: true

    and /etc/puppet/hieradata/database.yaml:

        :enable_graphital: true
        :mysql_server_package: MySQL-server
        :mysql_client_package: MySQL-client
        :allowed_groups_login: extranet_users

    Does anyone have any idea what could be causing Hiera to not load the requested values? I have even tried restarting the master. Thanks in advance, Cole
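
    One thing I'd double-check (an assumption about the cause, but a classic gotcha): the keys in common.yaml and database.yaml start with a colon. YAML treats that colon as part of the key, so a lookup like hiera('nameserver') searches for the plain key 'nameserver' and never matches ':nameserver', returning nil every time. The colon-prefixed style belongs in hiera.yaml's own settings, not in the data files. The mismatch is easy to demonstrate:

        # The ':' prefix becomes part of the key, so plain lookups miss.
        import yaml  # pip install pyyaml

        doc = yaml.safe_load(":nameserver: ['foo1']\nnameserver: ['foo2']")
        print(list(doc))               # [':nameserver', 'nameserver'] -- distinct keys
        print(doc.get("nameserver"))   # only the un-prefixed key matches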


  • Thumbnail generation with imagemagick doesn't render the correct colors

    - by Bastien
    Generating thumbnails of PDFs with ImageMagick sometimes renders incorrect colors. We're using an old version of ImageMagick (6.5.7-8, the version installed on the Heroku servers). Here is the command we're currently using:

        convert -size "725x1200>" -colorspace RGB -flatten -density 300 -quality 100 input.pdf output.jpg

    I've tried different colorspaces like sRGB, YIQ, etc., but none of them render the colors correctly. Using ImageMagick 6.7.7-6 locally works, so I tried bundling the convert binary in my application's /bin directory; the command runs, but the result is still wrong. So it seems the problem comes either from another command that convert invokes, or from another library. Here are examples of the outputs:

        Correct output: http://i.stack.imgur.com/gf9eG.jpg
        Wrong output: http://i.stack.imgur.com/imUeD.jpg

    Strangely, with some pages of the same PDF the output is always correct. Any idea which library or command could be the issue, or whether there is a proper set of options to pass to ImageMagick to always get it right? Thanks in advance for your help.
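
    Worth noting: for PDF input, convert delegates rasterization to Ghostscript, so the Ghostscript version on the server is as much a suspect as ImageMagick itself -- and pages embedding CMYK images are a common trigger for exactly this "some pages are fine, some are off" pattern (that CMYK guess is an assumption). A small experiment harness to render the same page under different colorspace settings and compare by eye (exploratory, not a known fix):

        # Render page 1 of the PDF with several colorspace settings
        # and eyeball which output matches the source.
        import subprocess

        for cs in ("sRGB", "RGB", "CMYK"):
            subprocess.run(
                ["convert", "-density", "300", "-colorspace", cs,
                 "-flatten", "-quality", "100",
                 "input.pdf[0]", f"output_{cs}.jpg"],
                check=True)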


  • Disaster recovery backup of files/photos for personal use

    - by Renesis
    I'm looking for the best method to store a backup of important files and 5+ years of digital photos that is safe from some type of fire/flood disaster in my home. I'm looking for:

    - Affordable: less than $100/yr or as a one-time cost.
    - Reliable: at least a smaller chance of failing than there is of fire or flood.
    - Easy for the initial backup and for additions, and at least semi-easy to recover.

    I recently purchased a small home safe for physical vitals. It was inexpensive, solid, and is fire/water safe. If I had a physical copy of the digital files, the safe would work fine for this, but I don't know what to store in it that adequately meets the requirements above:

    - Hard drive: I've read that the danger of it not spinning up makes a hard drive a bad choice for this type of storage, although it was my first thought and would definitely be the simplest choice -- very easy to take out once a month and add files to.
    - DVDs: way too much of a hassle for both backup and restore.
    - Tape: no idea of the affordability of this option.
    - Online: given that I already have at least 300 GB, that ever-increasing megapixel counts mean ever-bigger files, and that my ISP upload is about 2 Mbps at best, this doesn't sound like a good option for me, but I could be convinced.
    - Other: have I missed something?

    Also, I'm already covered both for sync between computers (Dropbox) and for a nightly backup of these files (external HDD). The problem with the nightly backup is obviously that it's always with the computer, and in a disaster would be destroyed along with it. Is anyone else doing something similar? Is the HDD as poor a choice as I've read, or is it a feasible option? Maybe two drives, to reduce the likelihood of failure?
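
    Whichever medium wins, the "will it still read next year?" worry can be made measurable rather than theoretical: hash the tree when writing the backup, store the manifest alongside it, and re-run the same script at each monthly rotation to diff against it. A minimal sketch (paths are examples):

        # Hash every file under the backup root so later runs can detect
        # silent corruption (bit rot) by comparing manifests.
        import hashlib, os, sys

        def manifest(root):
            for dirpath, _, files in os.walk(root):
                for name in sorted(files):
                    path = os.path.join(dirpath, name)
                    h = hashlib.sha256()
                    with open(path, "rb") as f:
                        for chunk in iter(lambda: f.read(1 << 20), b""):
                            h.update(chunk)
                    print(h.hexdigest(), path)

        manifest(sys.argv[1] if len(sys.argv) > 1 else ".")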


  • OpenVPN Server - CPU is pegged out

    - by ericl42
    Hello, I am configuring OpenVPN to act as an SSL tunnel for a remote location. I have OpenVPN1 at our current location acting as a server, and OpenVPN2 at the other location acting as a client; OpenVPN2 is also a DHCP server for the machines behind it, so they are effectively connected to the local LAN. Everything is set up fine, and I can talk from location A to location B with no problems, as if everyone were local. I am, however, having performance issues: OpenVPN1's CPU is pegged at 100% the entire time I am copying or doing any other activity through the tunnel. I expected CPU usage to go up somewhat, but nothing like this, and it's really killing my performance. OpenVPN1 is running in ESX right now with 2 GB RAM and 4 procs with unlimited bursting capacity. I am using AES-192 encryption with a 1024-bit key. Any idea how I can get the CPU usage down on OpenVPN1 and the download/upload speeds up through the tunnel? Thanks.

    Edit: turning down the logging helped boost the throughput a little, but I am still fairly shy of where I believe I should be, and still maxed out on the CPU. Does anyone have any ideas? I am really stuck on this. Thanks.
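
    One way to narrow this down is to measure what the VM's CPU can push through the cipher alone, with no tunnel involved: if raw AES throughput on a single core lands near the tunnel's observed transfer rate, the crypto genuinely is the ceiling (OpenVPN 2.x processes a tunnel's crypto on a single thread); if it's far higher, something else is spinning. A diagnostic sketch wrapping openssl's built-in benchmark:

        # Raw cipher throughput on this VM, independent of OpenVPN.
        import subprocess

        out = subprocess.run(
            ["openssl", "speed", "-evp", "aes-192-cbc"],
            capture_output=True, text=True, check=True).stdout
        print(out.splitlines()[-1])  # summary: bytes/sec at several block sizes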


  • Multi domain on my dedicated server with Apache2

    - by x4vier
    I set up a server with Ubuntu 10.04 Server Edition. It worked for a long time with a single domain name. Now I want to add another domain which will point to a new directory. I tried to change my Apache2 configuration, but it does not seem to work properly. Here is my /etc/apache2/sites-available/default:

        <VirtualHost *:80>
            DocumentRoot /var/www/
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog /var/log/apache2/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

        <VirtualHost *:80>
            ServerName mydomain.com
            ServerAlias www.mydomain.com
            DocumentRoot /var/www/mydomain
        </VirtualHost>

    And here is my /etc/hosts:

        127.0.0.1 localhost
        **.***.133.29 sd-***.****.fr sd-****
        **.***.133.29 mediousgame.com
        # The following lines are desirable for IPv6 capable hosts
        ::1 localhost ip6-localhost ip6-loopback
        ****::0 ip6-localnet
        ****: :0 ip6-mcastprefix
        ****::1 ip6-allnodes
        ****::2 ip6-allrouters
        ****::3 ip6-allhosts

    With this configuration, when I try to access mydomain.com it serves the /var/www/ content. Do you have any idea how to make it serve the right folder?


  • How to test server throughput

    - by embwbam
    I've always used ApacheBench (ab) to get a rough idea of how many requests/second my server can handle. I read that it was good, and it seemed to work well. Enter node.js, which is fully event-based, so it never blocks. If I run ApacheBench against a simple hello-world server, it handles 2500 requests per second or so. However, if I put a timeout in the hello-world function so that it responds after 2 seconds, ApacheBench reports a dramatically reduced throughput: about 50/s. I'm running 100 concurrent connections with ab. If I increase the concurrency, the number goes up. This makes sense, because ApacheBench is basically sending out requests in batches of 100, which come back every 2 seconds:

        100 requests / 2 seconds = 50 requests / second

    If I increase the concurrency to about 400 or 500, it starts to crash. I don't think I've hit node.js's limit; I think I'm hitting a wall in my operating system on the number of open file descriptors or sockets or something. Is there any way I can get a good guess at how many requests my server can actually handle? I want to make sure the test computer isn't the one causing the problem.
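
    That 50/s figure is Little's law in action: throughput = concurrency / latency, so against a 2-second handler the client's concurrency, not the server, sets the ceiling. A couple of quick numbers (pure arithmetic, the same relationship ab is exhibiting):

        # Little's law: throughput = in-flight requests / response time.
        latency = 2.0  # seconds per request
        for concurrency in (100, 400, 5000):
            print(f"{concurrency} concurrent -> {concurrency / latency:.0f} req/s ceiling")

        # Reaching 2500 req/s against a 2 s handler needs ~5000 open
        # connections -- right where client-side fd limits (ulimit -n)
        # and ephemeral-port pressure start to bite.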


  • SBS 2003 boot stalls at acpitabl.dat

    - by John
    I have an SBS 2003 server that ran for 3 years without any problems; a few days ago it started freezing during boot. The system uses two 500 GB drives in RAID 1 (Intel Matrix 7.5). After trying to load in safe mode, the boot stops on acpitabl.dat. My first idea was a problem with the RAID, although disk status was OK and RAID status was Rebuild. I tried to boot with each drive alone: one gives the same problem, and the other fails to load at all. I took both drives out and checked them on a different machine: one drive is dead, the other has no problems. I returned the good drive to the SBS 2003 machine (its status changed to Degraded), but the problem is still the same. I also have a clean SBS 2003 copy installed on this drive (a previous installation), which loads smoothly and quickly. So I believe the main problem is this particular SBS 2003 installation. I did not make any hardware changes and did not install any updates (not sure about automatic Windows updates lately). Since there are tons of posts about this problem and no clear solution, I am trying to figure out how to repair this SBS 2003 installation, since it has some installed programs that I cannot reinstall without additional issues.


  • Exchange backup verification shows no files

    - by Olaf
    [SBS 2003 SP2] If I read the Exchange backup log, it shows the backup contains no files, just folders, yet the total size seems OK. If I try to restore, the folders are empty... but the 14 files that were backed up disappeared in the verification log?! Other backups on the same medium turned out to be fine. Any idea what's wrong here? This is my log:

        Backup Status
        Operation: Backup
        Active backup destination: File
        Media name: "testbackup.bkf created 2-6-2010 at 11:25"
        Volume shadow copy creation: Attempt 1.
        Backup of "SERVER1\Microsoft Information Store\First Storage Group"
        Backup set #1 on media #1
        Backup description: "Set created 2-6-2010 at 11:25"
        Media name: "testbackup.bkf created 2-6-2010 at 11:25"
        Backup Type: Normal
        Backup started on 2-6-2010 at 11:26.
        Backup completed on 2-6-2010 at 12:21.
        Directories: 4
        Files: 14
        Bytes: 26.842.932.104
        Time: 55 minutes and 38 seconds

        Verify Status
        Operation: Verify After Backup
        Active backup destination: File
        Active backup destination: \backup\Server1\Backup Files\testbackup.bkf
        Verify of "SERVER1\Microsoft Information Store\First Storage Group"
        Backup set #1 on media #1
        Backup description: "Set created 2-6-2010 at 11:25"
        Verify started on 2-6-2010 at 12:21.
        Verify completed on 2-6-2010 at 12:47.
        Directories: 4
        Files: 0
        Different: 0
        Bytes: 26.842.932.104
        Time: 25 minutes and 46 seconds


  • Guest can't access host windows network share

    - by Asteroza
    Hi folks, I've recently run into a strange problem after upgrading to VMware Player 3. Certain virtual machines (currently an XP VM and a Vista VM) seem to have lost the ability to access the host's (XP) network shared folders (SMB). Both VMs use bridged networking, the guest firewalls are up, and the host firewall is up. Host and guests use DHCP, and all machines are workgroup-joined. I'm not completely sure about the Vista VM, but the XP VM did have access to the host's shared folders after the Player upgrade; then today it stopped working -- the network path can't be found. Now here's the weird part: the host's shared folders can be accessed properly by other PCs on the network (and as far as I know, no settings have changed). The host is pingable from the guests, and name resolution works. The guests can access network shares on other PCs in the network, and the internet. My Network Places shows the host PC, but double-clicking on it takes a long time before it finally times out with an error. Doing a Wireshark packet capture, the guest sends out the protocol negotiation and the host sends a response, but after that the guest behaves as if it received nothing and starts TCP retransmissions. Anybody have any idea what could be wrong? Yes, I know I can drag and drop files or set up the special VMware shared folders, but I want to access the host just like any other network-accessible shared folder.


  • SSH freeze when UFW is enabled

    - by Cristian Vrabie
    I have a small Ubuntu 10.10 server, and I recently noticed a weird behavior (not sure if it was happening before). If I have ufw enabled (with default deny all incoming, allow all outgoing, allow all HTTP, and allow all on a random port I use for SSH), then when I perform certain actions in an SSH session, the console completely freezes. The server continues to work, and if I close the console I can start another SSH session. This happens no matter where I log in from (tried from another Ubuntu machine and a Mac). The actions are fairly reproducible: for example, vim-ing some config files (though vim-ing other files works), cat-ing some other file, etc. The freeze never happens if ufw is disabled. Any idea what's going on? Thanks! Cristian

    Addition: if you're wondering, yes, I have TcpKeepAlive set to yes, and I doubt it's related (the freeze would happen with ufw disabled too, otherwise). As requested, my ufw configuration is below. Also, I don't know if it matters, but the server has 2 IPs: the SSH domain is configured on one, and HTTP is served (via apache2) on the other.

        Status: active
        Logging: on (low)
        Default: deny (incoming), allow (outgoing)
        New profiles: skip

        To         Action    From
        --         ------    ----
        19922/tcp  ALLOW IN  Anywhere
        9418/tcp   ALLOW IN  Anywhere
        80/tcp     ALLOW IN  Anywhere
        443/tcp    ALLOW IN  Anywhere

