Search Results

Search found 23182 results on 928 pages for 'worst case'.


  • How to Architect a system on AWS for scaling (with a MySQL back-end)

    - by Edan Maor
    I'm trying to understand how to architect an Amazon Web Services application. As I understand it, the whole point of using something like AWS is to make eventual scaling easier. I have an instance, running off of EBS (an EBS-based instance, not a regular instance). My application (a Django app) uses MySQL as a back-end. So the question is, where am I supposed to install MySQL? Do I install it on the same instance? In that case, as far as I can tell, I can't simply create more server instances from that image. Or am I supposed to simply spin up another server as a DB server, and run off of that? Thanks for any help!
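    A common pattern here is to keep the Django application servers disposable and point them all at one MySQL host that lives elsewhere (a dedicated DB instance, or a managed service such as RDS), so new app servers can be spun up from the same image freely. A minimal sketch of the relevant Django settings, with a hypothetical internal hostname standing in for the real DB server:

        # settings.py (sketch). The hostname, credentials and database name are
        # placeholders, not values from the original question.
        DATABASES = {
            "default": {
                "ENGINE": "django.db.backends.mysql",
                "NAME": "appdb",
                "USER": "appuser",
                "PASSWORD": "change-me",
                "HOST": "db.internal.example.com",  # the separate DB server, not localhost
                "PORT": "3306",
            }
        }

    Whether that host is a second EC2 instance you run yourself or a managed database is then a separate operational choice.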

    Read the article

  • Run ClickOnce Application from CLI

    - by Badger
    I am working on an auto-install script for where I work, and we have a ClickOnce-type application from a vendor that we use. I have looked into it and we can't automate the install, but we would like to be able to at least start the install automatically. I have tried rundll32.exe dfshim.dll,ShOpenVerbApplication "%SOFTWARE%\ToolsApp.application" but it gives me an error about an invalid URI. What would probably be easiest is to use whatever mechanism Windows (Windows XP in our case) has to run the default "handler" for the file. I don't know if any such thing exists, but that is what comes to mind.
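    If scripting the launch is acceptable, one option is to ask the Windows shell to open the file with whatever handler is registered for the .application extension, which is what double-clicking does; Python's os.startfile() exposes exactly that. A small sketch reusing the %SOFTWARE% variable from the question (the path itself is whatever your environment defines):

        # Sketch: hand a ClickOnce .application file to its registered handler,
        # equivalent to double-clicking it in Explorer. Windows-only.
        import os

        # %SOFTWARE% and the file name are taken from the question; adjust as needed.
        path = os.path.expandvars(r"%SOFTWARE%\ToolsApp.application")
        os.startfile(path)  # shell "open" verb on the default handler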

    Read the article

  • One-way platform collision

    - by TheBroodian
    I hate asking questions that are specific to my own code like this, but I've run into a pesky roadblock and could use some help getting around it. I'm coding floating platforms into my game that will allow a player to jump onto them from underneath, but then will not allow players to fall through them once they are on top, which requires some custom collision detection code. The code I have written so far isn't working: the character passes through it on the way up, and on the way down, stops for a moment on the platform, and then falls right through it. Here is the code to handle collisions with floating platforms:

        protected void HandleFloatingPlatforms(Vector2 moveAmount)
        {
            //if character is traveling downward.
            if (moveAmount.Y > 0)
            {
                Rectangle afterMoveRect = collisionRectangle;
                afterMoveRect.Offset((int)moveAmount.X, (int)moveAmount.Y);
                foreach (World_Objects.GameObject platform in gameplayScreen.Entities)
                {
                    if (platform is World_Objects.Inanimate_Objects.FloatingPlatform)
                    {
                        //wideProximityArea is just a rectangle surrounding the collision
                        //box of an entity to check for nearby entities.
                        if (wideProximityArea.Intersects(platform.CollisionRectangle) ||
                            wideProximityArea.Contains(platform.CollisionRectangle))
                        {
                            if (afterMoveRect.Intersects(platform.CollisionRectangle))
                            {
                                //This, in my mind, would denote that after the character is moved,
                                //its feet have fallen below the top of the platform, but before he
                                //had moved its feet were above it...
                                if (collisionRectangle.Bottom <= platform.CollisionRectangle.Top)
                                {
                                    if (afterMoveRect.Bottom > platform.CollisionRectangle.Top)
                                    {
                                        //And then after detecting that he has fallen through the
                                        //platform, reposition him on top of it...
                                        worldLocation.Y = platform.CollisionRectangle.Y - frameHeight;
                                        hasCollidedVertically = true;
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }

    In case you are curious, the parameter moveAmount is found through this code:

        elapsed = (float)gameTime.ElapsedGameTime.TotalSeconds;
        float totalX = 0;
        float totalY = 0;
        foreach (Vector2 vector in velocities)
        {
            totalX += vector.X;
            totalY += vector.Y;
        }
        velocities.Clear();
        velocity.X = totalX;
        velocity.Y = totalY;
        velocity.Y = Math.Min(velocity.Y, 1000);
        Vector2 moveAmount = velocity * elapsed;
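    For reference, the usual rule for one-way platforms is: only collide when moving downward, and only when the feet were at or above the platform's top before the move, then snap the feet onto the platform. A language-agnostic sketch of that rule (the names and coordinate convention are illustrative, not taken from the XNA code above):

        # Minimal one-way platform check (sketch). Coordinates grow downward,
        # matching typical 2D screen space.

        def resolve_one_way_platform(feet_y, move_y, platform_top):
            """Return (corrected feet position, landed flag) after a vertical move.

            feet_y       -- bottom of the character before the move
            move_y       -- vertical movement this frame (positive = falling)
            platform_top -- y coordinate of the platform's upper edge
            """
            landed = False
            new_feet_y = feet_y + move_y
            # Only collide while falling, and only if the feet started at or above
            # the platform's top; otherwise the platform is ignored (jump-through).
            if move_y > 0 and feet_y <= platform_top and new_feet_y > platform_top:
                new_feet_y = platform_top   # snap the feet onto the platform
                landed = True
            return new_feet_y, landed

        # Example: feet at y=95 falling 10 px onto a platform whose top is at y=100.
        print(resolve_one_way_platform(95, 10, 100))   # -> (100, True)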

    Read the article

  • How to reference individual cells in Excel to variable data from records in an external SQL table

    - by user273476
    I have a SQL table containing date-oriented financial data, e.g. multiple daily records with fields for Date, Account code and Value. I want to set up dynamic links (formulas) from cells in an Excel spreadsheet to this data, so that when the spreadsheet is loaded the data is fetched from all the relevant records. The spreadsheet has the Account codes on the x axis and Dates on the y axis. Each day the SQL table gets new data for the new day, and I want the spreadsheet to reference this new data in the column for that day. Any ideas? I have seen how you can generally bring in data from a SQL table (in our case using ODBC, as it is not MS SQL), but this is not simply importing multiple records as you would from a CSV file; specific records in the SQL table need to map to specific cells and columns in the spreadsheet.
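    Whatever the Excel plumbing ends up being (query table, VBA, or a helper script), the core of each cell is a parameterized lookup of one Value by date and account code. A rough sketch of that lookup over ODBC from Python; the DSN, table and column names are placeholders that would have to match the real schema:

        # Sketch: fetch the Value for one (date, account) pair over ODBC.
        # The DSN, table and column names are placeholders for the real schema.
        import pyodbc

        conn = pyodbc.connect("DSN=FinanceDB")  # the same ODBC source Excel would use
        cur = conn.cursor()
        cur.execute(
            "SELECT Value FROM DailyFigures WHERE TradeDate = ? AND AccountCode = ?",
            ("2014-05-30", "ACC-101"),
        )
        row = cur.fetchone()
        print(row[0] if row else "no record for that date/account")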

    Read the article

  • Depending on fixed version of a library and ignore its updates

    - by Moataz Elmasry
    I was talking to a technical boss yesterday about a C++ project that depends on OpenCV. He wanted to check a specific OpenCV version into the svn and keep using that version, ignoring any updates, which I disagreed with. We had a heated discussion about it. His arguments: Everything has to be delivered in one package and we can't ask the client to install external libraries. We depend on a fixed version so that new updates of OpenCV won't break our code. We can't guarantee that within a version update, e.g. from 3.2.buildx to 3.2.buildy, the function signatures won't change. My arguments: True, everything has to be delivered to the client as one package, but that's what build scripts are for: they download the external libraries and create a bundle. Within updates of the same version, 3.2.buildx to 3.2.buildy, it's practically impossible for a signature to change, unless it is a really crappy framework, which isn't the case with OpenCV. We deprive ourselves of new updates and features of that library. If there's a bug in the version we took, then even if there's a fix later, we won't be able to get it. It's simply inefficient and poor design to depend on a certain version/build of an external library, as it makes it difficult for our project to adapt to new changes in the future. So I'd like to know what you think: does it really make sense to include a specific version of an external library in our svn and keep using it, ignoring all updates?
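    One middle ground between the two positions is to keep the library out of the repository but pin it in the build script: the script fetches an exact, named release and verifies a checksum, so builds stay reproducible while upgrading is a deliberate one-line change. A rough illustration of that idea; the URL, filename and hash below are placeholders, not real OpenCV artifacts:

        # Sketch: fetch a pinned third-party release at build time and verify it.
        # The URL, filename and SHA-256 value are placeholders, not real artifacts.
        import hashlib
        import urllib.request

        PINNED_URL = "https://example.com/opencv-3.2.0.tar.gz"
        EXPECTED_SHA256 = "0" * 64   # replace with the checksum of the pinned archive

        def fetch_pinned(url=PINNED_URL, dest="opencv-3.2.0.tar.gz"):
            urllib.request.urlretrieve(url, dest)
            digest = hashlib.sha256(open(dest, "rb").read()).hexdigest()
            if digest != EXPECTED_SHA256:
                raise RuntimeError("dependency archive does not match pinned checksum")
            return dest

        if __name__ == "__main__":
            fetch_pinned()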

    Read the article

  • Should I care about Junit redundancy when using setUp() with @Before annotation?

    - by c_maker
    Even though developers have switched from JUnit 3.x to 4.x, I still see the following 99% of the time: @Before public void setUp(){/*some setup code*/} @After public void tearDown(){/*some clean up code*/} Just to clarify my point... in JUnit 4.x, when the runners are set up correctly, the framework will pick up the @Before and @After annotations no matter the method name. So why do developers keep using the same conventional JUnit 3.x names? Is there any harm in keeping the old names while also using the annotations (other than it makes me feel like devs do not know how this really works and, just in case, use the same name AND annotate as well)? Is there any harm in changing the names to something maybe more meaningful, like eachTestMethod() (which looks great with @Before since it reads 'before each test method') or initializeEachTestMethod()? What do you do and why? I know this is a tiny thing (and may even be unimportant to some), but it is always in the back of my mind when I write a test and see this. I want to either follow this pattern or not, but I want to know why I am doing it, and not just because 99% of my fellow developers do it as well.

    Read the article

  • How soon does nginx's token bucket replenish when limiting at requests per minute?

    - by Michael Gorsuch
    We've decided that we want to experiment and limit requests per minute instead of requests per second on our sites. However, I am confused by the burst parameter in this context. I am under the impression that when you use the 'nodelay' flag, the rate limiting facility acts like a token bucket instead of a leaky bucket. That being the case, the bucket size is equal to the burst parameter, and every time that you violate the policy (say 1 req/s), you have to put a token in the bucket. Once the bucket is full (being equal to the burst setting), you are given a 503 error page. I am also under the impression that once a violator stops going against the policy, a token is removed from the bucket at a rate of 1 token/s allowing him to regain access to the site. Assuming that I have the above correct, my question is what happens when I start regulating access per minute? If we chose 60 requests per minute, at what rate does the token bucket replenish?
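    One way to reason about it: if the limiter accounts continuously at the configured rate rather than in whole-minute steps, then 60r/m replenishes capacity at one request per second, exactly as 1r/s would, and only the burst size changes the picture. The little simulation below models that continuous assumption; it is a model to test against nginx's actual limit_req behaviour, not a statement about its source:

        # Sketch: continuous token-bucket model for a rate given per minute.
        # Assumes replenishment is continuous (rate/60 tokens per second); this is
        # an assumption to verify against nginx's actual limit_req accounting.

        def simulate(rate_per_minute, burst, request_times):
            """Return an allowed/rejected verdict for each request timestamp."""
            refill_per_sec = rate_per_minute / 60.0
            tokens = float(burst)   # bucket starts full
            last = 0.0
            verdicts = []
            for t in sorted(request_times):
                tokens = min(burst, tokens + (t - last) * refill_per_sec)
                last = t
                if tokens >= 1.0:
                    tokens -= 1.0
                    verdicts.append((t, "allowed"))
                else:
                    verdicts.append((t, "rejected (503)"))
            return verdicts

        # 60 req/min with burst 5: a quick burst of ten requests uses up the bucket,
        # then roughly one more slot opens per second.
        for t, v in simulate(60, 5, [i * 0.1 for i in range(10)] + [2.0, 2.5]):
            print(f"{t:4.1f}s  {v}")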

    Read the article

  • Reverse lookup of inode/file from offset in raw device on linux and ext3/4?

    - by lilinjn
    In Linux, given an offset into a raw disk device, is it possible to map back to a partition + inode? For example, suppose I know that the string "xyz" is contained at byte offset 1000000 on /dev/sda (e.g. xxd -l 100 -s 1000000 /dev/sda shows a dump that begins with "xyz"): 1) How do I figure out which partition (if any) offset 1000000 is located in? (I imagine this is easy, but am including it for completeness.) 2) Assuming the offset is located in a partition, how do I go about finding which inode it belongs to (or determining that it is part of free space)? Presumably this is filesystem-specific, in which case does anyone know how to do this for ext4 and ext3?
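    For step 1, sysfs can answer the partition question directly: each /sys/block/sda/sdaN/ directory exposes start and size in 512-byte sectors. The sketch below locates the partition for a byte offset and prints the debugfs commands (icheck for block-to-inode, ncheck for inode-to-name) that would cover step 2 on ext3/ext4; the device name and the 4 KiB block-size assumption are illustrative:

        # Sketch: map a byte offset on a whole disk to the containing partition,
        # then print debugfs commands for the block -> inode -> name lookups.
        import glob
        import os

        DISK = "sda"          # example device, as in the question
        OFFSET = 1_000_000    # byte offset into /dev/sda

        def find_partition(disk, offset):
            sector = offset // 512   # sysfs start/size are given in 512-byte sectors
            for part_dir in glob.glob(f"/sys/block/{disk}/{disk}*"):
                start = int(open(os.path.join(part_dir, "start")).read())
                size = int(open(os.path.join(part_dir, "size")).read())
                if start <= sector < start + size:
                    return os.path.basename(part_dir), (sector - start) * 512
            return None, None

        part, rel = find_partition(DISK, OFFSET)
        if part is None:
            print("offset is not inside any partition (partition table or free space)")
        else:
            fs_block = rel // 4096   # assumes a 4 KiB filesystem block size
            print(f"offset {OFFSET} falls in /dev/{part}, {rel} bytes into it")
            print(f"next: debugfs -R 'icheck {fs_block}' /dev/{part}   # block -> inode")
            print(f"then: debugfs -R 'ncheck <inode>' /dev/{part}      # inode -> path(s)")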

    Read the article

  • IPv4 NameVirtualHost, IPv6 VirtualHost

    - by MadHatter
    Like many of us, I have an apache server (2.2.15, plus patches) with a lot of virtual hosts on it. More than I have IPv4 addresses, to be sure, which is why I use NameVirtualHost to run lots of them on the same IPv4 address. I'm busily trying to get everything I do IPv6-enabled. This server now has a routed /64, which gives me an awful lot of v6 addresses to throw around. What I'm trying to find is a simple way to tell each v4-NameVirtualHost that it should also function as a VirtualHost on a unique ipv6 address. I really, really don't want to have to define each virtual host twice. Does anyone know of an elegant way to do this? Or to do something comparable, in case I've embedded any dangerously-ignorant assumptions in my question?
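    If no single directive turns out to do this, one workaround is to stop hand-writing the vhosts and generate both variants (the v4 name-based one and the v6 per-address one) from a single list, so each site is still described only once. A rough generator sketch; the site names, document roots and addresses are invented:

        # Sketch: emit paired Apache 2.2 vhost stanzas from a single site list, so
        # the v4 name-based vhost and the v6 per-address vhost never drift apart.
        # All names, paths and addresses below are made up.

        V4 = "192.0.2.10"

        SITES = [
            ("www.example.org",  "/srv/www/example", "2001:db8::10"),
            ("blog.example.org", "/srv/www/blog",    "2001:db8::11"),
        ]

        def render(server_name, docroot, v6addr):
            common = f"    ServerName {server_name}\n    DocumentRoot {docroot}\n"
            v4 = f"<VirtualHost {V4}:80>\n{common}</VirtualHost>\n"
            v6 = f"<VirtualHost [{v6addr}]:80>\n{common}</VirtualHost>\n"
            return v4 + "\n" + v6

        if __name__ == "__main__":
            print(f"NameVirtualHost {V4}:80\n")
            for site in SITES:
                print(render(*site))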

    Read the article

  • e2fsck extremely slow, although enough memory exists

    - by kaefert
    I've got this external USB disk:

        kaefert@blechmobil:~$ lsusb -s 2:3
        Bus 002 Device 003: ID 0bc2:3320 Seagate RSS LLC

    As can be seen in this dmesg output, there are some problems that prevent the disk from being mounted:

        kaefert@blechmobil:~$ dmesg | grep sdb
        [ 114.474342] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB)
        [ 114.475089] sd 5:0:0:0: [sdb] Write Protect is off
        [ 114.475092] sd 5:0:0:0: [sdb] Mode Sense: 43 00 00 00
        [ 114.475959] sd 5:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
        [ 114.477093] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB)
        [ 114.501649] sdb: sdb1
        [ 114.502717] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB)
        [ 114.504354] sd 5:0:0:0: [sdb] Attached SCSI disk
        [ 116.804408] EXT4-fs (sdb1): ext4_check_descriptors: Checksum for group 3976 failed (47397!=61519)
        [ 116.804413] EXT4-fs (sdb1): group descriptors corrupted!

    So I went and fired up my favorite partition manager, gparted, and told it to verify and repair the partition sdb1. This made gparted call e2fsck (version 1.42.4 (12-Jun-2012)):

        e2fsck -f -y -v /dev/sdb1

    Although gparted called e2fsck with the "-v" option, sadly it doesn't show me the output of my e2fsck process (bug report: https://bugzilla.gnome.org/show_bug.cgi?id=467925). I started this whole thing on Sunday evening (2012-11-04_2200), so about 48 hours ago; this is what htop says about it now (2012-11-06-1900):

          PID USER     PRI  NI  VIRT   RES   SHR S CPU% MEM%    TIME+  Command
         3704 root      39  19 1560M 1166M   768 R 98.0 19.5 42h56:43  e2fsck -f -y -v /dev/sdb1

    Now I found a few posts on the internet that discuss e2fsck running slow, for example http://gparted-forum.surf4.info/viewtopic.php?id=13613 where they write that it's a good idea to see if the disk is just that slow, because maybe it's damaged, and I think these outputs tell me that this is not the case here:

        kaefert@blechmobil:~$ sudo hdparm -tT /dev/sdb
        /dev/sdb:
         Timing cached reads: 3562 MB in 2.00 seconds = 1783.29 MB/sec
         Timing buffered disk reads: 82 MB in 3.01 seconds = 27.26 MB/sec
        kaefert@blechmobil:~$ sudo hdparm /dev/sdb
        /dev/sdb:
         multcount = 0 (off)
         readonly = 0 (off)
         readahead = 256 (on)
         geometry = 364801/255/63, sectors = 5860533160, start = 0

    However, although I can read quickly from that disk, this disk speed doesn't seem to be used by e2fsck, considering tools like gkrellm or iotop, or this:

        kaefert@blechmobil:~$ iostat -x
        Linux 3.2.0-2-amd64 (blechmobil)  2012-11-06  _x86_64_  (2 CPU)
        avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
                  14,24  47,81    14,63     0,95    0,00  22,37
        Device:  rrqm/s  wrqm/s   r/s   w/s   rkB/s   wkB/s  avgrq-sz  avgqu-sz  await  r_await  w_await  svctm  %util
        sda        0,59    8,29  2,42  5,14   43,17  160,17     53,75      0,30  39,80     8,72    54,42   3,95   2,99
        sdb      137,54    5,48  9,23  0,20  587,07   22,73    129,35      0,07   7,70     7,51    16,18   2,17   2,04

    Now I researched a little bit on how to find out what e2fsck is doing with all that processor time, and I found the tool strace, which gives me this:

        kaefert@blechmobil:~$ sudo strace -p3704
        lseek(4, 41026998272, SEEK_SET) = 41026998272
        write(4, "\212\354K[_\361\3nl\212\245\352\255jR\303\354\312Yv\334p\253r\217\265\3567\325\257\3766"..., 4096) = 4096
        lseek(4, 48404766720, SEEK_SET) = 48404766720
        read(4, "\7t\260\366\346\337\304\210\33\267j\35\377'\31f\372\252\ffU\317.y\211\360\36\240c\30`\34"..., 4096) = 4096
        lseek(4, 41027002368, SEEK_SET) = 41027002368
        write(4, "\232]7Ws\321\352\t\1@[+5\263\334\276{\343zZx\352\21\316`1\271[\202\350R`"..., 4096) = 4096
        lseek(4, 48404770816, SEEK_SET) = 48404770816
        read(4, "\17\362r\230\327\25\346//\210H\v\311\3237\323K\304\306\361a\223\311\324\272?\213\tq \370\24"..., 4096) = 4096
        lseek(4, 41027006464, SEEK_SET) = 41027006464
        write(4, "\367yy>x\216?=\324Z\305\351\376&\25\244\210\271\22\306}\276\237\370(\214\205G\262\360\257#"..., 4096) = 4096
        lseek(4, 48404774912, SEEK_SET) = 48404774912
        read(4, "\365\25\0\21|T\0\21}3t_\272\373\222k\r\177\303\1\201\261\221$\261B\232\3142\21U\316"..., 4096) = 4096
        ^CProcess 3704 detached

    I get around 16 of these lines every second, so 4 read and 4 write operations per second, which I don't consider to be a lot. And finally, my question: will this process ever finish? If those numbers from lseek (48404774912) represent bytes, that would be something like 45 gigabytes; with this being a 3 terabyte disk, that would give me 134 days to go, if the speed stays constant and it scans the disk like this completely and only once. Do you have some advice for me? I have most of the data on that disk elsewhere, but I've put a lot of hours into sorting and merging it onto this disk, so I would prefer to get this disk up and running again without formatting it anew. I don't think that the hardware is damaged, since the disk is only a few months old and since I can't see any I/O errors in the dmesg output.

    UPDATE: I just looked at the strace output again (2012-11-06_2300); now it looks like this:

        lseek(4, 1419860611072, SEEK_SET) = 1419860611072
        read(4, "3#\f\2447\335\0\22A\355\374\276j\204'\207|\217V|\23\245[\7VP\251\242\276\207\317:"..., 4096) = 4096
        lseek(4, 43018145792, SEEK_SET) = 43018145792
        write(4, "]\206\231\342Y\204-2I\362\242\344\6R\205\361\324\177\265\317C\334V\324\260\334\275t=\10F."..., 4096) = 4096
        lseek(4, 1419860615168, SEEK_SET) = 1419860615168
        read(4, "\262\305\314Y\367\37x\326\245\226\226\320N\333$s\34\204\311\222\7\315\236\336\300TK\337\264\236\211n"..., 4096) = 4096
        lseek(4, 43018149888, SEEK_SET) = 43018149888
        write(4, "\271\224m\311\224\25!I\376\16;\377\0\223H\25Yd\201Y\342\r\203\271\24eG<\202{\373V"..., 4096) = 4096
        lseek(4, 1419860619264, SEEK_SET) = 1419860619264
        read(4, ";d\360\177\n\346\253\210\222|\250\352T\335M\33\260\320\261\7g\222P\344H?t\240\20\2548\310"..., 4096) = 4096
        lseek(4, 43018153984, SEEK_SET) = 43018153984
        write(4, "\360\252j\317\310\251G\227\335{\214`\341\267\31Y\202\360\v\374\307oq\3063\217Z\223\313\36D\211"..., 4096) = 4096

    So the numbers in the lseeks before the reads, like 1419860619264, are already a lot bigger, standing for 1.29 terabytes if the numbers are bytes, so it doesn't seem to be linear progress on a big scale; maybe there are only some areas that need work, with big gaps in between them. (Times are in CET.)

    Read the article

  • monitoring TCP/IP performance on Solaris

    - by Andy Faibishenko
    I am trying to tune a high message traffic system running on Solaris. The architecture is a large number (600) of clients which connect via TCP to a big Solaris server and then send/receive relatively small messages (.5 to 1K payload) at high rates. The goal is to minimize the latency of each message processed. I suspect that the TCP stack of the server is getting overwhelmed by all the traffic. What are some commands/metrics that I can use to confirm this, and in case this is true, what is the best way to alleviate this bottleneck?

    Read the article

  • NVIDIA Driver Crashing on Custom-Built Windows 7 PC

    - by srunni
    I've got a custom-built PC with these specs:

        Fractal Design Define R3
        ASUS P8Z77-V motherboard
        Intel Core i7-2700k with Thermalright HR-02 Macho Cooler
        NVIDIA GeForce GTX 560 with Arctic Cooling Accelero Twin Turbo II Cooler
        Crucial M4 128 GB SSD
        1 TB Hitachi HDD
        G.SKILL Ares Series 2x8 GB RAM (x2)
        SeaSonic 520W PSU
        Windows 7 Ultimate SP1 (x64)
        NVIDIA driver version 301.42

    Upon building the PC, I overclocked the CPU (but never the GPU), and there were no problems for 1-2 months. Then I started getting crashes with this error when doing anything that's computationally or graphically intensive: I un-overclocked the CPU, but that hasn't fixed anything. This is what the inside of my case looks like: I'd appreciate any guidance on resolving this problem. I did get some of the thermal paste on the graphics card when installing the aftermarket cooler, but there were no issues for a month or two. Update: I did a clean install and the issue persisted - looks like it's a hardware issue. I will try removing/cleaning/reseating all the parts.

    Read the article

  • Windows Server 2008 R2 Maximum Processor Limit Confusion

    - by Stevoni
    As I was looking through the Windows Server 2008 R2 specifications, I saw that the maximum supported processor count is 64 sockets for the Datacenter edition. This puts the maximum number of cores at 256 (if all sockets are quad cores), which I think is just silly, but whatever. And now the questions: How does one set something like that up? (Obviously not for me, but humor me.) Are there multiple dual-socket motherboards running in a giant case with a ton of memory? How does the OS see all of the CPUs if they're on different boards? What would be a real-world example of a need to have 64 sockets attached to one operating system vs 32 two-socket servers?

    Read the article

  • DOS application to allow remote management of files over serial link

    - by tomlogic
    Harken back to the days of DOS. I have an embedded DOS handheld device, and I'm looking for a tool to manage the files stored on it. I picture an application I can launch on the device that opens COM1 up for commands to get a directory listing, send/receive files via x/y/zmodem, move/delete files, and create/move/delete directories. A Windows application can then download a recursive file listing and then manage those files (for example, synchronizing with a local directory). Keep in mind that this is DOS -- 8.3 filenames, 640K of RAM and a 19200bps serial link (yuk!). I'd prefer something with source in case we need to add additional features (for example, the ability to get a checksum of a file for change detection). Now that I've written this description, I realize I'm asking for something like LapLink or pcAnywhere. Norton no longer sells DOS versions of pcAnywhere and LapLink V for DOS seems pricy at $50. Are you aware of any similar apps from those good old days?

    Read the article

  • Computer Says No: Mobile Apps Connectivity Messages

    - by ultan o'broin
    Sharing some insight into connectivity messages for mobile applications. Based on some recent ethnography done by myself, and prompted by a real business case, I would recommend a message that:

        In plain language, briefly and directly tells the user what is wrong and why. Something like: Cannot connect because of a network problem.
        Affords the user a means to retry connecting (or attempts automatically). The mobile context of use means users anticipate interruptibility and disruption of task, so they will try again as an effective course of action.
        Tells the user when the connection is re-established, and off they go.
        Saves any work already done, implicitly. (Bonus points on the ADF critical task setting scale.)

    The following images, showing my experience reading an ADF-EMG Google Groups notification on my (Android ICS) Samsung Galaxy S2 during a loss of WiFi, give you a good idea of a suitable kind of messaging user experience for mobile apps in this kind of scenario: an inline connection-lost message with a Retry button, and a connection re-established toaster message. The UX possible is dependent on device and platform features, sure, so remember to integrate with the device capability (see point 10 of this great article on mobile design by Brent White and Lynn Hnilo-Rampoldi), but taking these considerations into account is far superior to a context-free, dumbed-down common error message repurposed from the desktop mentality: the connection to the server has been lost, so just "Click OK" or "Contact your sysadmin."
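    As a rough illustration of that flow (detect the failure, say plainly what is wrong, offer a retry, confirm when the connection is back), the control logic itself is small; this sketch uses generic callbacks rather than any particular mobile framework's widgets:

        # Sketch: connectivity-loss message flow, independent of any UI toolkit.
        # show_message and ask_retry stand in for whatever the platform provides.
        import time
        import urllib.request

        def fetch_with_feedback(url, show_message, ask_retry, attempts=3):
            for attempt in range(1, attempts + 1):
                try:
                    data = urllib.request.urlopen(url, timeout=5).read()
                    if attempt > 1:
                        show_message("Connection re-established.")
                    return data
                except OSError:
                    show_message("Cannot connect because of a network problem.")
                    if not ask_retry():
                        return None           # caller keeps any work already done
                    time.sleep(2 ** attempt)  # brief back-off before retrying
            show_message("Still offline. Your work has been saved locally.")
            return None

        # Example wiring with console stand-ins for the platform widgets:
        if __name__ == "__main__":
            fetch_with_feedback("https://example.com/feed",
                                show_message=print,
                                ask_retry=lambda: True)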

    Read the article

  • Extend university wifi network [migrated]

    - by asfasdoiuh ouhouhouh
    I live in a university campus and I can get a wifi signal on the outside of my window, but not in the house. The solution I use at the moment is a usb wifi dongle outside connected to my laptop, but the lack of an internal antenna makes the connection quite unreliable at times. So I was trying to find another solution to improve the reception of my network. One idea is to set up a router on the outside (in a place with a stronger signal) and redirect the connection inside the house with an ethernet cable, but the problem is that our uni WiFi is managed by a captive portal (BlueSocket with DNS redirection to a login page) and the authentication has to happen on the MAC address that connects to the net (so the client appliance in this case). If I use a router with MAC-clone capability, will I be able to be redirected through the captive portal on my laptop computer and log in from there, or do I need to set up my router to fill in the login page by itself? Are there other hardware/software solutions I can use to get what I want? Thank you all

    Read the article

  • Hostmonster can't change domains around?

    - by loneboat
    (question imported from http://superuser.com/q/204439/53847 ) Horrible title, but I couldn't think of a succinct way to summarize it to fit. I have HostMonster for my web hosting. I have several domain names under the same account (using the same web space, IP address, etc...). Every HM account has one domain set up as the "main domain", and all other domains are "secondary". The only way I have ever encountered this being an issue is in trying to use HTTPS - since (from my limited understanding) HTTPS encrypts headers, you can't route HTTPS requests to different virtual hosts on a server - only unencrypted requests, since it must look in the request to know where to route it. When I registered for my account, I only had one domain name (A). I have since added domain names (B), (C), (D), etc... At one point I switched domain name (B) to be my "main" domain name - so I could use HTTPS with it. I have since sold domain name (B), and would like to make domain name (A) my "main" one again (as it was before), but HM support says, "no, once a domain name has been a 'main' domain name on an account once, we can't set it up to be a 'main' domain name again. You're welcome to use domains (C), or (D), though.". They tell me the only way to reuse domain (A) as a "main" domain would be to set up a new account and transfer over all my files. I'm confused here. If I have domains (D), (E), and (F), they say I'm welcome to make one of them my new main domain name, just never (A) again, since I've already "used" it once. Calls to support only reveal that they can't let me do it because doing so would somehow "break" my account. Can anyone think of any good reason why this should be so? The only thing I can think is that maybe they're using the domain names as keys in some database or something? But if that's the case, that's ridiculous - they need to reorganize their databases!

    Read the article

  • Logging violations of rules in limits.conf

    - by PaulDaviesC
    I am trying to log the details of programs that failed due to the limit caps defined in limits.conf. My initial plan was to do it using the audit system: the idea was to track the failed system calls related to the limits in limits.conf. However, the problem with this approach is that it is not possible to track violations of CPU time, since that violation does not involve a failing system call. In the case of CPU time, what happens is that the program which violated the CPU time limit is delivered a SIGXCPU. So my question is, how should I go about logging the programs that violated CPU time? Also, are there any limits.conf-specific logs available?
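    Since the only externally visible kernel action for a CPU-time violation is the SIGXCPU delivery, one workable angle is a wrapper that notices a child dying from that signal and writes the log entry itself; this sidesteps audit entirely and only covers processes started through the wrapper. A sketch, with the limit hard-coded as an example rather than read from limits.conf:

        # Sketch: run a command and log it if it is killed for exceeding RLIMIT_CPU.
        # Only processes launched through this wrapper are covered. Linux-only.
        import resource
        import signal
        import subprocess
        import sys
        import syslog

        CPU_SECONDS = 10  # example cap; in a real setup this mirrors limits.conf

        def limit_cpu():
            # Runs in the child just before exec. Soft limit below the hard limit
            # so the child receives SIGXCPU rather than an immediate SIGKILL.
            resource.setrlimit(resource.RLIMIT_CPU, (CPU_SECONDS, CPU_SECONDS + 5))

        def run_logged(argv):
            proc = subprocess.run(argv, preexec_fn=limit_cpu)
            if proc.returncode == -signal.SIGXCPU:
                syslog.syslog(syslog.LOG_WARNING,
                              "CPU limit (%ds) exceeded by: %s" % (CPU_SECONDS, " ".join(argv)))
            return proc.returncode

        if __name__ == "__main__":
            sys.exit(run_logged(sys.argv[1:]))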

    Read the article

  • Is version history really sacred or is it better to rebase?

    - by dukeofgaming
    I've always agreed with Mercurial's mantra; however, now that Mercurial comes bundled with the rebase extension and it is a popular practice in git, I'm wondering if it could really be regarded as a "bad practice", or at least bad enough to avoid using. In any case, I'm aware of rebasing being dangerous after pushing. OTOH, I see the point of trying to package 5 commits into a single one to make it look niftier (especially in a production branch), but personally I think it would be better to be able to see partial commits to a feature where some experimentation was done, even if it is not as nifty. Seeing something like "Tried to do it way X but it is not as optimal as Y after all, doing it Z taking Y as base" would IMHO have good value to those studying the codebase and following the developers' train of thought. My very opinionated (as in dumb, visceral, biased) point of view is that programmers like rebase to hide mistakes... and I don't think this is good for the project at all. So my question is: have you really found it valuable to have such "organic commits" (i.e. untampered history) in practice? Or conversely, do you prefer to run into nifty well-packed commits and disregard the programmers' experimentation process? Whichever one you chose, why does that work for you? (Having other team members keep history, or alternatively, rebasing it.)

    Read the article

  • How to automount NTFS usb sticks on Xubuntu 12.10?

    - by netimen
    I'm running Xubuntu 12.10 on a Lenovo T520 laptop. If I plug in a FAT-formatted usb stick, it's mounted automatically, but if I plug in an NTFS-formatted one, I have to mount it manually. How do I make NTFS usb sticks mount automatically when plugged in? My /etc/fstab, in case it helps:

        # /etc/fstab: static file system information.
        #
        # Use 'blkid -o value -s UUID' to print the universally unique identifier
        # for a device; this may be used with UUID= as a more robust way to name
        # devices that works even if disks are added and removed. See fstab(5).
        #
        # <file system> <mount point>   <type>  <options>       <dump>  <pass>
        proc            /proc           proc    nodev,noexec,nosuid 0       0
        /dev/sda1       /               ext4    errors=remount-ro,user_xattr 0       1
        # swap was on /dev/sda5 during installation
        UUID=cd221c3e-44a8-459e-9dfb-04787f1cd0b6 none            swap    sw

    Read the article

  • Basic Puppet installation with Solaris 11.2 beta

    - by user13366125
    At the recent announcement we talked a lot about the Puppet integration. But how do you set it up? I want to show this in this blog entry. The example I'm using is even useful in practice. Due to the extremely low overhead of zones, I'm frequently seeing really large numbers of zones on a single system. Changing /etc/hosts or changing an SMF service property on 3 systems is not that hard. Doing it on a system with 500 zones is... let's say it diplomatically... a job you give to someone you want to punish. Puppet can help in this case by making the configuration manageable and easing its distribution. You describe the changes you want to make in a file, or set of files, called a manifest in the Puppet world, and then roll them out to your servers, no matter if they are virtual or physical. A warning at first: Puppet is a really, really vast topic. This article is really basic and doesn't go more than toe-deep into the possibilities and capabilities of Puppet. It doesn't try to explain Puppet... just how you get it up and running and do basic tests. There are many good books on Puppet. Please read one of them, and the concepts and the example will get much clearer immediately. (more)

    Read the article

  • Laptop white screen on power-up. Still displays via HDMI output

    - by Inno
    My wife's laptop recently started displaying a white screen. It doesn't show POST or anything, just a white screen when it's powered on. However, it works normally with HDMI output to our television. I took it apart and fiddled with both ends of the display cable, but I either didn't fiddle correctly or that's just not the problem. I also noticed that the screen won't turn off anymore when the laptop is closed. Is there a name for the mechanism that controls this function, so I can try and locate it? My guesses are that the problem lies with the screen itself or the display cable, but I'm curious if there's anything else I might be overlooking. Also of note is that the left hinge is partially broken. The corner of the plastic computer case broke off, so the hinge is exposed and doesn't stay in place. I've tried holding it in place, wiggling it around, tapping various parts of the computer, but the white screen remains.

    Read the article

  • How to find what is written to filesystem under linux

    - by bardiir
    How can I find out what processes write to a specific disc over time? In my particular case I've got a little home server running 24/7, and I included a script in the crontab to shut down all drives that are not used (no change in /proc/diskstats for 15 minutes). But my system disc won't spin down at all. I suspect logs, but it's probably not only logs writing to the filesystem on the system disk, and I don't want to go to the trouble of moving the logfiles somewhere else just to find out that the disc still doesn't spin down and there's nothing I can do about it.
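    Besides tools like iotop, the same question can be answered by sampling /proc/<pid>/io, whose write_bytes counter records how much each process has pushed toward the block layer; diffing two snapshots points at the writers. A small sketch (run as root to see other users' processes):

        # Sketch: show which processes increased their write_bytes counters over an
        # interval by sampling /proc/<pid>/io twice. Run as root for full coverage.
        import glob
        import time

        def snapshot():
            out = {}
            for path in glob.glob("/proc/[0-9]*/io"):
                pid = path.split("/")[2]
                try:
                    fields = dict(line.split(": ") for line in open(path))
                    name = open(f"/proc/{pid}/comm").read().strip()
                    out[pid] = (name, int(fields["write_bytes"]))
                except (OSError, KeyError, ValueError):
                    pass  # process exited or file was unreadable
            return out

        before = snapshot()
        time.sleep(10)
        after = snapshot()

        for pid, (name, wrote) in after.items():
            delta = wrote - before.get(pid, (name, 0))[1]
            if delta > 0:
                print(f"{pid:>7}  {name:<20} {delta} bytes written")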

    Read the article

  • Finding the best practice for a game simulating tool

    - by Tougheart
    I'm studying Java right now, and I'm thinking of this tool as my practice project. The game is "League of Legends", in case anyone knows it; I'm not actually simulating the game as in simulating gameplay, I'm just trying to create a tool that can compare different champions to each other based on their own abilities and the items bought inside the game. The game basics are: every player has a champion in a team of 5 players playing against another team. Each champion has a different set of abilities (usually 4) that s/he uses to do damage to opposing champions. Each champion gets stronger by buying different items, increasing the attack it deals or decreasing the damage received. What I want to do is create a tool to be used outside the game, enabling players to try out different builds for their champions and compare the figures against other champions they usually fight against. The goal is to let players get a deeper understanding of the different item combinations (builds) that can be used during games, instead of trying them out in real games, which can be very time-consuming. What I'm stuck on is the best practice I should follow to make this possible using Java: I can't figure out which classes should inherit from which, and whether I should put champion and item specs in the code or extract them from other files, especially since I'm talking about hundreds of items and champions to use in that tool. I'm self-studying Java and I don't have much practice at it, so I would really appreciate any broad guidelines regarding this, and sorry if my question doesn't fit here, I tried to follow the rules. English isn't my native language, so I'm really sorry if I wasn't clear enough; I would be more than happy to explain anything that's not understood.

    Read the article

  • Updating ATI HD 5970 Graphics card - version errors?

    - by user55406
    I'm having an issue... My system specs are:

        Intel i7 960
        6GB Corsair XMS RAM
        ATI HD5970 graphics card
        Intel DX58SO motherboard
        Cooler Master HAF 922 case
        1.5TB Seagate hard drive
        Windows Vista x86 (32-bit)

    Here is my issue: when I go to the AMD/ATI website to update my graphics card, it doesn't update. When I type DxDiag and then click on Display, it tells me my version is 8.17.0, and 10.10.0 is the latest version. How can I get from 8.17.0 to 10.10.0? I figured it would have done that after I updated the driver for my graphics card. Thanks.

    Read the article
