Search Results

Search found 22641 results on 906 pages for 'use case'.


  • terminal tools and logs for debugging TCP issues

    - by kellogs
    I have a server which I am testing for functionality (not load, not stress) with Tsung, the testing framework: 50 users per second, 100 users in total. Judging from the Tsung graphs, the TCP connections line (red) drops to 0 while the commenced user sessions line (green) does not. The server logs give me nothing to go on, so I suspect some kind of TCP issue. Could that be the case? Where should I look further on the server, and which logs or tools should I be looking at? Only SSH is available, no GUI.

      root@XMPP:~# cat /etc/lsb-release
      DISTRIB_ID=Ubuntu
      DISTRIB_RELEASE=11.10
      DISTRIB_CODENAME=oneiric
      DISTRIB_DESCRIPTION="Ubuntu 11.10"

    Thank you
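
    As a first diagnostic pass over SSH, a minimal sketch of the kind of sampling this calls for: snapshot the kernel's socket summary and protocol counters while the Tsung run is in progress, then compare samples for retransmissions, resets, or listen-queue overflows. The tools are standard on Ubuntu 11.10; the interval is arbitrary.

      import subprocess
      import time

      # Sample TCP state every 10 seconds for 5 minutes during the test run.
      for _ in range(30):
          summary = subprocess.check_output(['ss', '-s'])         # socket totals
          counters = subprocess.check_output(['netstat', '-s'])   # per-protocol stats
          with open('tcp-samples.log', 'ab') as log:
              log.write(summary)
              log.write(counters)
          time.sleep(10)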

    Read the article

  • Why are network printers not available in the Add Printer Wizard...when run over a network?

    - by Kev
    From a Windows 2003 server machine I browsed the network to an XP client (\\computername in Explorer), then double-clicked Printers and Faxes, and then Add Printer. In the wizard, the second screen normally asks whether you want to install a local printer or a network printer. Well, in this case it seems to assume I want a local printer, because the second screen is what would normally be the third screen if you had chosen a local printer and clicked Next. I want to install a network printer on a remote machine for its local users. Is this not possible? If not, why not?

    Read the article

  • Mac internet problems

    - by Bradley Herman
    Our office is set up with mostly Macs (7 of them), but we also have a Windows laptop and a Windows desktop on the network. The network is configured with a modem feeding a switch/router that serves the office, along with a wireless router. Everything runs fine most of the time, but periodically, while using the web, certain sites will stop loading and time out repeatedly. This usually lasts 20 minutes or so and can be incredibly annoying. Resetting the modem/router and/or rebooting the computer never helps. The weirdest part is that in almost every case the websites are fine on our Windows machines. I frequently use GitHub, Google, Stack Overflow, and the jQuery reference, and I can count on the sites being unavailable to me at least once a day. While I can't get them to load, I can spin my chair around to the Windows server behind me and load the sites just fine. Any idea what the hell could be going on here?
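
    To gather evidence before guessing at a cause, a rough monitoring sketch: attempt a TCP connection to each affected site once a minute from a Mac and from a Windows box, and log failures so the outages can be correlated between platforms. The hostnames are the ones mentioned above.

      import socket
      import time

      SITES = ['github.com', 'google.com', 'stackoverflow.com']

      while True:
          for host in SITES:
              try:
                  # A plain TCP connect; DNS failures and timeouts both raise.
                  socket.create_connection((host, 80), timeout=5).close()
                  status = 'ok'
              except OSError as exc:
                  status = 'FAIL: %s' % exc
              print('%s %s %s' % (time.strftime('%H:%M:%S'), host, status))
          time.sleep(60)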

    Read the article

  • Finding image resolution in a PDF file?

    - by Dave
    I have a problem with some users creating very large PDFs. On the other hand, I have PDFs sent from our fax machines that are really small in size and perfectly printable. My questions:

    1. Is there any way I can find the resolution (DPI) of a PDF? I searched the internet and could not find an answer. I checked the properties of the file; this information was not stored there, at least in my case.
    2. What is the optimum resolution for converting a text file into an image PDF: 96 dpi, 300 dpi, or more?
    3. Fun question: can I resize a PDF which was scanned at a high DPI down to a smaller DPI?

    I know some answers might not be available, as I have already searched the internet and could not find them. Note: my PDFs are entirely images (text converted to images). I am also familiar with PrimoPDF (free), something you can experiment with.
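
    On question 1, a short sketch of one practical approach: image-only PDFs carry their effective DPI on the embedded images, and poppler's pdfimages utility can print it. This assumes poppler-utils is installed and recent enough for the -list flag; the filename is illustrative.

      import subprocess

      # 'pdfimages -list' prints one row per embedded image, including
      # x-ppi and y-ppi columns, i.e. the effective scan resolution.
      listing = subprocess.check_output(['pdfimages', '-list', 'scan.pdf'])
      print(listing.decode())

      # (For question 3, Ghostscript's -dPDFSETTINGS presets, e.g. /ebook,
      #  are the usual way to downsample a high-DPI scan.)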

    Read the article

  • How to backup a remote VPS machine?

    - by morpheous
    I am considering opting for a VPS solution, with the server running Ubuntu Server. I am pretty new to this, and I need to come up with a backup policy for my server data. The initial data is likely to be about 80 MB, and I expect it to grow by approximately 5 to 10 MB a day. Can anyone recommend:

    1. A backup/restore policy (best practices for a small startup)?
    2. Which tools to use for backup?

    Another thing that is not clear to me: where are the files normally backed up to, in the case of remote servers? If the files are backed up to the same machine (or even to another machine but with the same host), there is potentially a single point of failure. How do people normally back up their server data, and is the probability of machine meltdown, or the host company's server farm "catching fire", so remote as not to be worth worrying about, especially for a small (read: one-man) startup like me?
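
    For data of this size, a minimal nightly-snapshot sketch: tar the data directory, then copy the archive off-host so the backup does not share the VPS's fate. The paths and remote host below are illustrative, and rsync over SSH is only one of several reasonable transports.

      import datetime
      import subprocess

      stamp = datetime.date.today().isoformat()
      archive = '/var/backups/data-%s.tar.gz' % stamp

      # Snapshot the data directory, then push it to a machine in a
      # different failure domain.
      subprocess.check_call(['tar', 'czf', archive, '/srv/data'])
      subprocess.check_call(['rsync', '-az', archive,
                             'backup@offsite.example.com:backups/'])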

    Read the article

  • Why does the EFI shell not detect my Windows DVD?

    - by Oliver Salzburg
    I'm currently looking into (U)EFI for the first time and am already really confused. I insert the Windows Server 2008 R2 Enterprise disc into the DVD-ROM drive and boot into the EFI shell. The shell automatically lists all detected devices, which are:

      blk0 :CDRom - Alias (null)
           Acpi(PNP0A03,0)/Pci(1F|2)/Ata(Primary,Master)/CDROM(Entry0)
      blk1 :BlockDevice - Alias (null)
           Acpi(PNP0A03,0)/Pci(1F|2)/Ata(Primary,Master)

    To my understanding, it should have already detected the filesystem on blk0 and mounted it as fs0. Why is that not happening? If I insert a USB drive, it gets mounted just fine. The board is an Intel S5520HC, in case that makes a difference.

    Read the article

  • Multiple CD writers taking a long time to burn

    - by Mirage
    I have installed 6 DVD writers in a tower case and am using the Alcohol software to burn multiple CDs at once. I have seen that about 4 of the DVD/CD writers finish recording early, but some take a long time to finish, running at around 7x. It's not that the same writers are always the slow ones; sometimes other writers write slowly. But there are always 1 or 2 writers that take about 25 minutes to write a 700 MB CD, while some finish in 5 minutes. Why is that? All the writers can write at up to 40x speed. What determines the speed?

    Read the article

  • Laptop battery life drastically decreased compared to Windows 7

    - by Aron Rotteveel
    I am running Ubuntu 10.10 on my Dell Studio XPS 1640 and get about one hour of battery life from it, compared to about 2.5 hours running Windows 7. This is with wireless and Bluetooth on, but still, the difference seems incredible. What could be causing such a difference, and is there a way to close the gap without losing core functionality?

    EDIT: here's some output from powertop. This is with Bluetooth turned off and WiFi turned on. The output seems pretty normal to me, but as indicated, this is about 1 hour of battery life on a full battery...

      Wakeups-from-idle per second : 476.2    interval: 10.0s
      Power usage (ACPI estimate): 2.5W (1.2 hours)

      Top causes for wakeups:
        30.0% (167.2)D  chrome
        21.0% (117.3)   [extra timer interrupt]
        13.9% ( 77.4)   [kernel scheduler] Load balancing tick
         3.4% ( 18.9)D  xchat
         7.1% ( 39.8)   [iwlagn] <interrupt>
         5.9% ( 32.9)   AptanaStudio3
         3.9% ( 21.6)D  java
         2.7% ( 14.9)   [TLB shootdowns] <kernel IPI>
         2.5% ( 14.1)   docky
         1.8% ( 10.0)   nautilus
         1.6% (  9.0)   thunderbird-bin
         1.0% (  5.5)   [ahci] <interrupt>
         0.9% (  5.0)   syndaemon
         0.8% (  4.3)   [kernel core] hrtimer_start (tick_sched_timer)

    EDIT: after changing /proc/sys/vm/laptop_mode to 5 (it was set to 0), wakeups seem to have decreased, although usage still seems far too high:

      Wakeups-from-idle per second : 263.8    interval: 10.0s
      Power usage (ACPI estimate): 2.6W (0.9 hours)

    EDIT: I seem to have discovered the main cause: I was using the open-source ATI drivers. I recently installed the official ATI drivers, and laptop battery life seems to have doubled since.

    EDIT: last edit. The previous 'solution' of installing the official ATI drivers turns out to be a non-solution. Although it does increase battery life, my laptop's resolution is maxed out at 1200x800 after a reboot. (Please note that this problem does not need answering in this question, as it is a separate case.)

    Read the article

  • Restrictive routing best practices for Google App Engine with python?

    - by Aleksandr Makov
    Say I have a simple structure:

      app = webapp2.WSGIApplication([
          (r'/', 'pages.login'),
          (r'/profile', 'pages.profile'),
          (r'/dashboard', 'pages.dash'),
      ], debug=True)

    Basically, all pages require authentication except for the login. If a visitor tries to reach a restricted page and isn't authorized (or lacks privileges), he gets redirected to the login view. The question is about the routing design: should I check auth and ACL privileges in each of the modules (pages.profile and pages.dash in the example above), or just pass all requests through a single routing mechanism:

      app = webapp2.WSGIApplication([
          (r'/', 'pages.login'),
          (r'/.+', 'router')
      ], debug=True)

    I'm still quite new to GAE, but my app requires authentication as well as ACL. I'm aware that there's a login directive at the server config level, but I don't know how it works or how I can tie it in with my ACL logic, and, what's worse, I cannot estimate the time needed to get it running. Besides, it appears to provide only 2 user groups: admin and user. In any case, this is the configuration I use:

      handlers:
      - url: /favicon.ico
        static_files: static/favicon.ico
        upload: static/favicon.ico
      - url: /static/*
        static_dir: static
      - url: .*
        script: main.app
        secure: always

    Or am I missing something here, and ACL can be set in the config file? Thanks.
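
    A common middle ground between the two designs is to keep the route table flat and centralize the check in a base handler's dispatch(). A minimal sketch, assuming a hypothetical get_current_user() session helper (not part of webapp2):

      import webapp2

      def get_current_user(request):
          # Hypothetical session lookup; replace with real auth/ACL logic.
          return request.cookies.get('session_user')

      class AuthHandler(webapp2.RequestHandler):
          """Base class: subclasses are authenticated before any verb runs."""
          def dispatch(self):
              user = get_current_user(self.request)
              if user is None:
                  return self.redirect('/')  # anonymous visitors go to login
              self.user = user
              super(AuthHandler, self).dispatch()

      class Profile(AuthHandler):
          def get(self):
              self.response.write('profile for %s' % self.user)

      app = webapp2.WSGIApplication([
          (r'/profile', Profile),
      ], debug=True)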

    Read the article

  • PNG alpha rendered as black in Blender

    - by Camilo Martin
    I'm a Blender novice, so this is probably easy to fix. When I use a transparent PNG as a texture in Blender, the parts that should be transparent are rendered as black. This is especially confusing since in the material preview it looks as if the material were indeed transparent. Here's a screenshot; this is the test texture, and on the right it is shown on top of a checkerboard: [images omitted]. Here is the .blend file in case you want to check it: [link omitted].

    Read the article

  • Taking ownership of trustedinstaller files?

    - by P a u l
    vista32-sp1: I am unable to delete some files on my system that were installed with 'special permissions' by TrustedInstaller. I find the usual suggestion to use 'takeown' is not working; all I get is access denied. I refuse to believe there isn't some way to delete these files, or that Microsoft has finally achieved their perfect security filesystem. This is NOT a case of a file being locked by a process; if that were all it was, I could solve it by myself. I know there are some recommended unlocking programs, and they might do some sort of filesystem trick, but I would like to know what my possible direct actions might be. If a 3rd-party program can 'unlock' a file, I want to know the mechanism. But like I said, 'takeown' at the command line is not working for this.
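
    For reference, a sketch of the usual direct sequence from an elevated prompt: seize ownership, then explicitly grant full control before deleting, since takeown alone changes the owner but not the ACL. The target path is illustrative.

      import subprocess

      TARGET = r'C:\Windows\example.dll'  # illustrative path

      # Seize ownership of the file (requires an elevated prompt).
      subprocess.check_call(['takeown', '/f', TARGET])

      # Ownership alone is not enough: also grant full control, then delete.
      subprocess.check_call(['icacls', TARGET, '/grant', 'Administrators:F'])
      subprocess.check_call(['cmd', '/c', 'del', TARGET])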

    Read the article

  • Just one client bound to an address and port: does broadcast versus unicast make a difference in terms of overhead?

    - by chrisapotek
    Scenario: I am implementing failover for a network node, so my idea is to have the master node listen on a broadcast IP address and port. If the master node fails, a failover node will start listening on this broadcast address (and port) and take over. Question: my concern is that I will be using a broadcast IP address for just a single node: the master. The failover node only binds if the master fails; in other words, almost never. In terms of network/traffic overhead, is it bad to talk to a single node through a broadcast address, or is the network somehow smart enough to know that nobody else is listening on this broadcast address and to treat it as unicast in terms of overhead? My concern is that I will be flooding my network with packets sent to this broadcast address even though I am really talking to just a single node (the master). But I can't use unicast, because the failover node has to be able to pick up the master's stream quickly and transparently in case the master fails.
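
    For context, a minimal sketch of the sender side (addresses are illustrative): UDP datagrams sent to a subnet broadcast address are delivered to every host on the segment, whether or not anything is bound to the port, which is exactly the overhead being asked about.

      import socket

      BROADCAST_ADDR = ('192.168.1.255', 9000)  # assumed subnet broadcast

      sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
      # SO_BROADCAST must be set before sending to a broadcast address.
      sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
      sock.sendto(b'heartbeat', BROADCAST_ADDR)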

    Read the article

  • How do I diagnose a HP Notebook which has hardware issues?

    - by Rob
    My HP DV6t-2300 recently crashed while I was using FLV Player. It wouldn't start up after this, so I had to do a hardware reset (remove the power sources, hold down the power button for ~15 seconds, put back the power sources). After this it would turn on, but startup would freeze in Ubuntu, Windows 7, Ubuntu recovery mode, and various Linux live CDs. The only way it would boot successfully was Windows 7 Safe Mode. HP Customer Service was very polite, but they are trying to blame it on a corrupt operating system, which is clearly not the case (since I have tried 4 operating systems and none works). I am thinking it might be the GPU, since 1) I was watching movies when it crashed and 2) Windows Safe Mode might not use the dedicated GPU. I have already run memory and HDD tests, and there were no detected errors. Any ideas of what's wrong, or suggestions for tests that I should run in Safe Mode? Should I try reinstalling Windows 7 to convince HP that it's not the OS?

    Read the article

  • Compiz slow under proprietary nvidia driver

    - by gsedej
    Hi! I am using Ubuntu 10.10 and have a problem with the proprietary NVIDIA driver for my GeForce GTS 250: Compiz performance is poor. There is also the open-source "nouveau" driver.

    Proprietary: I have tried many versions, but none works fast on the desktop. By that I mean 30 FPS without heavy effects. Currently I am using version 270.18. Even normal desktop use (moving windows) feels bad. In games (and 3D benchmarks) it is really good; Unigine Heaven works well!

    Open-source nouveau: very fast on the desktop with heavy effects (blur, ...). I get 300 FPS and more, even in Expo mode. Games were good, but not as good as with the proprietary driver, and the driver causes Xorg to crash, even the latest one (ppa:xorg-edgers/nouveau), so I switched back to the proprietary driver.

    I also have a computer with Ubuntu 10.04, a GeForce 8600GT and drivers around 185.x, and Compiz works great there. There is a similar question, "Nvidia proprietary driver performance in 10.10". Which version of the proprietary NVIDIA driver is fast with Compiz in Ubuntu 10.10? How do you install a specific version of the NVIDIA driver? Is it the case that each newer driver runs Compiz more slowly?

    Read the article

  • What is the recommended minimum object size for gzip benefits?

    - by utt73
    I'm working on improving page display times, and one of the methods is to gzip content from the webserver. Google recommends:

      Note that gzipping is only beneficial for larger resources. Due to the overhead and latency of compression and decompression, you should only gzip files above a certain size threshold; we recommend a minimum range between 150 and 1000 bytes. Gzipping files below 150 bytes can actually make them larger.

    We serve our content through Akamai, using their network as a proxy and CDN. What they've told me:

      Following up on your question regarding the minimum size at which Akamai will compress the requested object when sending it to the end user: the minimum size is 860 bytes.

    My reply:

      What is the reason that Akamai's minimum size is 860 bytes? And why, for example, is this not the case for files Akamai serves for Facebook? (see below) Google recommends gzipping more aggressively, and that seems appropriate on our site, where the most frequent hits, by far, are AJAX calls that are <860 bytes.

    Akamai's response:

      The reason 860 bytes is the minimum size for compression is twofold: (1) the overhead of compressing an object under 860 bytes outweighs the performance gain, and (2) objects under 860 bytes can be transmitted via a single packet anyway, so there isn't a compelling reason to compress them.

    So I'm here for some fact-checking. Does the packet-size argument settle the question at 860 bytes? Why would high-traffic sites push this lower, closer to the 150-byte limit: just to save on bandwidth costs, or is there a performance gain in doing so?
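
    The small-payload claim is easy to verify directly: gzip adds a fixed header and trailer, so tiny bodies can come out larger than they went in. A quick check:

      import gzip

      payload = b'{"ok": true}'              # a 12-byte AJAX-style body
      compressed = gzip.compress(payload)

      # The gzip header and trailer dominate small payloads, so the
      # "compressed" form is larger than the original here.
      print(len(payload), len(compressed))   # e.g. 12 vs 32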

    Read the article

  • CS3, Illustrator - Where Do X/Y Coordinates Measure from?

    - by nicorellius
    I have a project with several PDF files, which I'm making in Illustrator. It seems the point of origin is inconsistent from image to image (file to file). Where is the point of origin, by default, in Illustrator CS3? It would be nice if, while positioning images, I could just say, "OK, the x coordinate is 5.5 inches in this document, so it is 5.5 in that one." But it seems this is not the case. Does anyone know how Illustrator sets these parameters?

    Read the article

  • Ways to break the "Syndrome of the perfect programmer"

    - by Rushino
    I am probably not the only one who feels this way, but I have what I tend to call "the syndrome of the perfect programmer", which many might say is simply perfectionism, in this case applied to the domain of programming. However, programming is a bit of a problematic domain for such a syndrome. Have you ever felt, while programming, that you are not confident, or never confident enough, that your code is clean, good code that follows most of the best practices? There are so many rules to follow that I feel somehow overwhelmed. It's not that I don't like to follow the rules; of course I am a programmer, I love programming, and I see it as an art whose rules I must follow. And I do love following the rules, in order to have the good feeling that what I'm doing is going the right way. But I only wish I could have everything a bit more under "control" regarding best practices and good code. Maybe it's a lack of organization? Maybe it's a lack of experience? Maybe a lack of practice? Maybe it's a lack of something else someone could point out? Is there any way to get rid of this syndrome?

    Read the article

  • TCO Comparison: Oracle Exadata vs IBM P-Series

    - by Javier Puerta
    Cost Comparison for Business Decision-makers
    Oracle Exadata Database Machine vs. IBM Power Systems
    How to Weigh a Purchase Decision
    October 2012

    Download full report here

    In this research-based white paper conducted at the request of Oracle, The FactPoint Group compares the cost of ownership of the Oracle Exadata engineered system to a traditional build-your-own (BYO) solution, in this case an IBM Power 770 (P770) with SAN storage. The IBM P770 was chosen because it is IBM's current most popular model, based on FactPoint primary and secondary research and IBM claims, and because at least one of the interviewed customers had specifically migrated from a P770 to Exadata, affording a more specific data point for comparison. This research found that Oracle Exadata:

      • Can be deployed more quickly and easily, requiring 59% fewer man-hours than a traditional IBM Power Systems solution.
      • Delivers dramatically higher performance, typically up to a 12X improvement, as described by customers, over their prior solution.
      • Requires 40% fewer systems-administrator hours to maintain and operate annually, including quicker support calls because of less finger-pointing and faster service with a single vendor.
      • Will become even easier to operate over time as users become more proficient and organize around the benefits of integrated infrastructure.
      • Supplies a highly available, highly scalable and robust solution whose reserve capacity makes Exadata easier for IT to operate, because IT administrators can manage proactively, not reactively.

    Overall, Exadata operations and maintenance keep IT administrators from "living on the edge", and it is pre-engineered for long-term growth. Finally, compared to IBM Power Systems hardware, Exadata is a bargain from a total-cost-of-ownership perspective: over three years, the IBM hardware running Oracle Database cost 31% more in TCO than Exadata.

    Read the article

  • Backup Dropbox to Amazon Glacier

    - by joekr
    I'm using Dropbox for backup, which means I keep all my files in my Dropbox folder (encrypted using encfs, but that should not be relevant). I like this solution because it is automatic and keeps copies of my files on several machines at different locations. The only way I could see it going wrong is if Dropbox had some sort of bug that told all my machines to delete the files. So currently I do a backup of the Dropbox folder to an external hard drive. With Amazon Glacier, it seems affordable to automate backup snapshots of my Dropbox. What I am looking for is a tool that will do this for me; the ideal scenario would be for files to go from Dropbox (using their API) directly to Amazon, as uploading the ~80 GB from my home connection would take forever... Thanks!
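
    On the Glacier half of the pipeline, a minimal upload sketch using boto3, assuming AWS credentials are configured and a vault with the illustrative name 'dropbox-snapshots' already exists:

      import boto3

      glacier = boto3.client('glacier')

      with open('dropbox-snapshot.tar.gz', 'rb') as archive:
          response = glacier.upload_archive(
              vaultName='dropbox-snapshots',
              archiveDescription='weekly Dropbox snapshot',
              body=archive,
          )

      # Keep the returned archiveId somewhere safe: Glacier offers no cheap
      # way to browse a vault later, so the ID is how you get the data back.
      print(response['archiveId'])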

    Read the article

  • Extracting the layer transparency into an editable layer mask in Photoshop

    - by last-child
    Is there any simple way to extract the "baked-in" transparency of a layer and turn it into a layer mask in Photoshop? To take a simple example: let's say that I paint a few strokes with a semi-transparent brush, or paste in a .png file with an alpha channel. The RGB color values and the alpha value for each pixel are now all contained in the layer image itself. I would like to be able to edit the alpha values as a layer mask, so that the layer image is solid and contains only the RGB values for each pixel. Is this possible, and if so, how? Thanks.

    EDIT: To clarify, I'm not really after the transparency values in themselves, but after the separation of RGB values and alpha values. That means the layer must become a solid, opaque image with a mask.
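
    Outside Photoshop, the separation being described is exactly an RGBA split. A small illustration with Pillow (filenames are placeholders), useful as a mental model of what the layer-mask conversion should produce:

      from PIL import Image

      layer = Image.open('layer.png').convert('RGBA')

      rgb = layer.convert('RGB')        # the solid, fully opaque image
      alpha = layer.getchannel('A')     # the transparency, as a grayscale mask

      rgb.save('layer_rgb.png')
      alpha.save('layer_mask.png')      # editable like any grayscale image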

    Read the article

  • An alternative to multiple inheritance when creating an abstraction layer?

    - by sebf
    In my project I am creating an abstraction layer for some APIs. The purpose of the layer is to make multi-platform support easier, and also to reduce the APIs to the feature set I need, while providing some functionality whose implementation will be unique to each platform. At the moment, I have implemented it by defining an abstract class, whose methods create objects that implement interfaces. The abstract class and these interfaces define the capabilities of my abstraction layer. The implementation of these in my layer should of course be arbitrary from the point of view of my application, but for my first API I have done it by creating chains of subclasses which add more specific functionality as the features of the APIs they expose become less generic. An example will probably demonstrate this better:

      // The interfaces as seen by the application
      interface IGenericResource
      {
          byte[] GetSomeData();
      }

      interface ISpecificResourceOne : IGenericResource
      {
          int SomePropertyOfResourceOne { get; }
      }

      interface ISpecificResourceTwo : IGenericResource
      {
          string SomePropertyOfResourceTwo { get; }
      }

      public abstract class MyLayer
      {
          public abstract ISpecificResourceOne CreateResourceOne();
          public abstract ISpecificResourceTwo CreateResourceTwo();
          public abstract void UseResourceOne(ISpecificResourceOne one);
          public abstract void UseResourceTwo(ISpecificResourceTwo two);
      }

      // The layer as created in my library
      public class LowLevelResource : IGenericResource
      {
          public byte[] GetSomeData() { return null; }
      }

      public class ResourceOne : LowLevelResource, ISpecificResourceOne
      {
          public int SomePropertyOfResourceOne { get { return 0; } }
      }

      public class ResourceTwo : ResourceOne, ISpecificResourceTwo
      {
          public string SomePropertyOfResourceTwo { get { return ""; } }
      }

      public partial class Implementation : MyLayer
      {
          public override void UseResourceOne(ISpecificResourceOne one)
          {
              DoStuff((ResourceOne)one);
          }
      }

    As can be seen, I am essentially trying to have two inheritance chains on the same object, but of course I can't do this, so I simulate the second chain with interfaces. The thing is, though, I don't like using interfaces for this; it seems wrong. In my mind, an interface defines a contract: any class that implements it should be usable wherever the interface is used. Here that is clearly not the case, because the interfaces are being used to let an object from the layer masquerade as something else, without the application needing access to its definition. What technique would allow me to define a comprehensive, intuitive collection of objects for an abstraction layer, while their implementation remains independent? (The language is C#.)

    Read the article

  • How to find a hidden streaming video/audio link and record it

    - by Stan
    I've been using URL Snooper to find hidden streaming URLs and feeding those URLs to VLC to record the streaming video/audio. But VLC can't read these URLs. I then found that each URL is a floating URL that changes every few hours, so the same audio station won't keep the same URL. The streaming audio provider has many audio stations and shuffles the links frequently. Is there any way to record the streaming media in this case? Please advise, thanks.
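
    One hedged approach to the floating-URL problem: re-resolve the current stream URL from the station's player page just before each recording session, instead of treating any captured URL as permanent. A rough sketch (the page URL is a placeholder):

      import re
      import urllib.request

      PLAYER_PAGE = 'http://example.com/player'  # hypothetical station page

      html = urllib.request.urlopen(PLAYER_PAGE).read().decode('utf-8', 'replace')

      # Scan the page for playlist/stream URLs; hand the freshest one to VLC.
      for url in re.findall(r'https?://[^\s"\']+\.(?:m3u8?|pls|asx)', html):
          print(url)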

    Read the article

  • Ubuntu 12.04 Server ping gateway responds with destination host unreachable

    - by blckblttkd
    I consider myself fairly adept with Ubuntu and Linux, but this one has me stumped. I built a Xen server using Ubuntu 12.04 as the base operating system; it has multiple domUs running on it. On my home network, with a statically defined configuration, all the network connectivity was just peachy. The server was moved to its permanent home this morning, so the network configuration on the main system had to change. It is again a static network, but now I can't ping the upstream gateway from the host. As the VMs use this NIC over a bridge, they too are broken. Ping responds with "Destination Host Unreachable". I simplified the networking down to a plain static configuration, as seen below (no bridge or anything), just to get it to work. Here are the contents of my /etc/network/interfaces file:

      auto lo
      iface lo inet loopback

      auto eth0
      iface eth0 inet static
          address 216.7.188.228
          gateway 216.7.188.225
          netmask 255.255.255.240
          broadcast 216.7.188.255
          network 216.7.188.0
          dns-nameservers 8.8.8.8 8.8.4.4

    Here are the contents of route -n:

      0.0.0.0         216.7.188.225   0.0.0.0          UG  100 0 0 eth0
      216.7.188.224   0.0.0.0         255.255.255.240  U   0   0 0 eth0

    And the results of pinging the gateway:

      PING 216.7.188.225 (216.7.188.225) 56(84) bytes of data.
      From 216.7.188.228 icmp_seq=1 Destination Host Unreachable
      From 216.7.188.228 icmp_seq=1 Destination Host Unreachable
      From 216.7.188.228 icmp_seq=1 Destination Host Unreachable

    Again, this worked flawlessly on one network (obviously with different parameters in the interfaces file). I did try using eth1, as there are two NICs on the server, in case the MAC addresses got flipped on boot-up. No success there. Yes, the cable is in the right port now :) Any thoughts? I appreciate the help!
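
    A quick sanity check on the quoted settings, worth running because the network and broadcast lines must agree with the netmask: deriving them from 216.7.188.228/255.255.255.240 with Python's ipaddress module gives values that differ from the ones in the interfaces file above.

      import ipaddress

      iface = ipaddress.ip_interface('216.7.188.228/255.255.255.240')

      print(iface.network)                    # 216.7.188.224/28
      print(iface.network.network_address)    # 216.7.188.224, not 216.7.188.0
      print(iface.network.broadcast_address)  # 216.7.188.239, not 216.7.188.255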

    Read the article

  • Using AdSense to show ads to logged-in users

    - by John
    I know that you can grant authorization permissions to Google AdSense so that it can 'log in' and see what other logged-in users can see (e.g. in a private forum), so that the ads it displays are better targeted. Extending this principle further: I am making a site which will show completely different content to each individual user (i.e. no 'common' content, unlike a forum in which everybody sees essentially the same thing). You could think of this content as similar to the way each Facebook user has a different news feed, even though it is the 'same' page. Complicating things further, the URLs for this site will be simple, e.g. '/home' and '/somepage', and will not usually include unique identifiers to differentiate between users (e.g. '/home?user=32i42'). My questions are:

    1. Is creating an account purely for AdSense to log in to the site with worth it in this case, seeing as it will be seeing its own 'personalized' version and not any other user's?
    2. More importantly: is that against the Google AdSense Terms of Service? (I can't seem to figure that one out.)
    3. How would you go about this problem?

    Read the article

  • How to handle user accounts for many sites running on same server

    - by Simon Courtenage
    Background to this question: I want to host multiple e-commerce sites on the same server, each with its own separate customer login application. Each site's login application needs to be secured by SSL. I'm unsure how best to handle this. For example, do I need to acquire a separate SSL certificate for each site (in which case, how do I do this dynamically, as the sites are created), or do I handle this using ONE login gateway-style application, which handles it on behalf of all the sites via a kind of transparent redirect? I'd be grateful for any pointers or advice. Thanks.
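
    If the per-site-certificate route is taken, the mechanism that lets one server present many certificates on a single IP is TLS Server Name Indication. A sketch of the server-side selection hook in Python's ssl module (hostnames and file paths are illustrative):

      import ssl

      CERTS = {
          'shop-a.example.com': ('shop-a.crt', 'shop-a.key'),
          'shop-b.example.com': ('shop-b.crt', 'shop-b.key'),
      }

      def pick_certificate(ssl_socket, server_name, default_context):
          """Swap in the context whose certificate matches the requested host."""
          if server_name in CERTS:
              cert, key = CERTS[server_name]
              ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
              ctx.load_cert_chain(cert, key)
              ssl_socket.context = ctx
          # Returning None lets the handshake continue with the chosen context.

      default = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
      default.load_cert_chain('default.crt', 'default.key')
      default.set_servername_callback(pick_certificate)

    With this shape, adding a new site dynamically means dropping in a certificate and adding a CERTS entry, with no extra IP addresses required.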

    Read the article
