Search Results

Search found 2283 results on 92 pages for 'resume improvement'.

Page 65 of 92

  • what would be a good way to implement/render a 2d tiled map for a browser game?

    - by jj_
    I made a little RPG in Ruby while learning the language, and now I'd like to turn it into a browser game. I've already set up the Sinatra framework to serve it, so what I'm looking for, before anything else, is a way to present the game map in the browser (location attributes are stored in a db). A new map is randomly generated by code at the start of each game.

    For now, forget the db and say a map (roughly 100x100 "squares") is stored as a three-dimensional array (x, y, ...). The last "dimension" of the array stores who and what is at that map cell: a player, a building, whatever. So all I have to do is render those "squares", or array cells, as a 2D tiled map in the browser. The map does not need to refresh or be fetched dynamically as you scroll it (at least at this stage of development), but a technology that would allow me to do so in the future would be a good reason for choosing it.

    Things I thought of: HTML tables, HTML5 canvas, or some JS framework designed exactly for this purpose (which I do not know of, so please advise). Yes, I know about the gamequery-js framework, but I've never used it, and I don't know if it will slow everything down to unusability as I add new features (scrolling, ajax). I really don't know of any other alternatives. Maybe there are lighter approaches, easier or more minimalistic ways, or a more targeted JS framework that is the right tool for the job? Maybe just some HTML canvas code, or even simple image maps, or images with absolute positioning will be enough?

    The thing is, I'd like to start simple and then gradually make it better, so, as I said before, I'd prefer something that gives me room for improvement or is headed toward new web tendencies, but which will also give me a bit of gratification in the beginning :) So, advice is needed! And appreciated! :) Thanks

    p.s. Flash is excluded because I don't like it.

  • How can I improve the battery life under 12.04 on my Inspiron 14z? [duplicate]

    - by cfogelberg
    This question already has an answer here: "Tips to extend battery life for laptops and notebooks" (24 answers)

    How do I improve the battery life of my Inspiron 14z under Ubuntu 12.04? This laptop gets 4-5 hours of battery life using Windows (e.g. here). I've removed Windows, installed Ubuntu 12.04, and the initial battery life was only 2 hours. With some tweaks (described below) it's still only ~2.5 hours. For reference, the laptop is the latest model of the 14z:

    - i5-3337U processor
    - 32GB mSATA SSD, 500GB HDD (5400rpm)
    - AMD Radeon HD7570M graphics card

    I have put ext4 partitions on both the SSD and the HDD, and have mounted / on the SSD and /home on the HDD. I also put a 24GB Linux swap partition at the start of the HDD, though I figure this won't be used all that much (the laptop has 8GB of RAM).

    After googling around and reading Ask Ubuntu and other sites extensively, I have done the following steps, and they have improved the battery life by about 30 minutes (the exact improvement isn't clear, but battery life is still nowhere near 4-5 hours):

    - Installed Jupiter (and set Performance to "Power Saving")
    - Installed laptop-mode-tools; cat /proc/sys/vm/laptop_mode now outputs 5 (previously it output 0), but it's not clear that this will help: AskUbuntu question
    - Turned down the brightness of my screen from full to 1/3

    Other things I have heard about but have not tried, for fear of frying the laptop or my Linux install (see the sketch below):

    - Add "pcie_aspm=force" at the end of the line with "quiet splash" in /boot/grub/grub.cfg
    - Enable ALPM (but it may already be enabled in 12.04?)
    - Enable i915 framebuffer compression
    - Use a proprietary driver for the graphics card?
    - Turn off the graphics card? (What would happen if I relied on the integrated Intel graphics?)
    - Use TLP?
    - Spin down the HDD more aggressively (howto, but I think laptop-mode-tools does this already)

    The only other thing I've noticed is that the plastic just above the F5, F6 and F7 keys gets really hot. According to Jupiter my CPU temperature is only 69 celsius, and the System Monitor shows CPU load at 7%, so I don't think it's the CPU. Maybe it's the graphics card?

    Also, I've set up MongoDB and LAMP on the machine as well. When I run powertop, MongoDB is high in the list, but I'm not sure if that's relevant to battery life because I'm not actually doing anything with MongoDB most of the time.

    Edit - Additional info as requested:

        $ lspci -nnk | grep -iEA3 "(graphics|vga)"
        00:02.0 VGA compatible controller [0300]: Intel Corporation Ivy Bridge Graphics Controller [8086:0166] (rev 09)
                Subsystem: Dell Device [1028:057f]
                Kernel driver in use: i915
                Kernel modules: i915
        --
        02:00.0 VGA compatible controller [0300]: Advanced Micro Devices [AMD] nee ATI Thames [Radeon 7500M/7600M Series] [1002:6841]
                Subsystem: Dell Device [1028:057f]
                Kernel driver in use: radeon
                Kernel modules: radeon
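
    A minimal shell sketch of the two untried tweaks that are usually safe, assuming a stock 12.04 install; the TLP PPA path is from memory, so verify it before adding:

        # Set pcie_aspm=force via /etc/default/grub rather than editing
        # /boot/grub/grub.cfg directly (grub.cfg is overwritten on kernel updates)
        sudo sed -i 's/\(GRUB_CMDLINE_LINUX_DEFAULT="quiet splash\)"/\1 pcie_aspm=force"/' /etc/default/grub
        sudo update-grub

        # TLP bundles ALPM, HDD spin-down and similar tweaks in one package
        sudo add-apt-repository ppa:linrunner/tlp
        sudo apt-get update && sudo apt-get install tlp tlp-rdw
        sudo tlp start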

  • How bad does it look to have left a job soon after starting? [closed]

    - by unitedgremlin
    I have a job I would like to leave. On advice from friends and parents I have stayed; their primary concern is that it would look bad on my resume if I left only a few months after joining. My concerns with the job are as follows:

    - When I started, it was preferred that I provide and use my own equipment.
    - The company could be out of business in a few months from lack of cash flow.
    - Poor code quality: memory leaks and lack of error handling. The same mistakes continue to be made even though I have raised the issue. It has become evident that my co-workers do not understand the memory management rules of the platform and are not interested in learning them, yet there is still surprise from them when strange bugs continue to crop up. As a result, I don't feel I will learn from my co-workers. Plus, fixing the lingering bugs while trying to keep up with new feature development is a game of whack-a-mole that never ends.
    - I don't believe in the company's vision or its ability to execute on its ideas anymore. My ideas and suggestions for very small tweaks are quickly dismissed, and so more than half of them have come back as bugs that we end up needing to address.
    - I have been told to wait on fixing bugs in code until we can talk to the original author. I don't feel I am allowed to take the initiative to just fix or change things and do what I think is best. Everything needs consensus, even a bug fix, before any work is done. I am adopting a shut-up-and-do-what-I'm-told approach to save myself from ulcers.
    - Lots of meetings (I am personally not involved in all of them, which is good), but the sheer number reminds me of my days at a big corporation. Why is everyone around me always meeting? It's a small company; I can count everyone on my fingers and toes.
    - I can say with certainty that I have no interest in working with any of them again. This is the first time I have truly worked with a group of so-called "B and C players".

    Ultimately, I think it is my fault for not doing a better job of evaluating the team and company before joining, but I have built a better set of questions for probing companies in the future. My questions are:

    - How bad does it look to have left a job soon (a few months) after starting?
    - What would be the best way to explain my concerns and reasons for leaving without badmouthing the company?
    - Should I stick it out to what I believe will be the soon end of the company?

  • Is it smart to take a year off from school to get experience?

    - by user134147
    Firstly, I apologize if this question is not appropriate for the site, but I've seen similar (though slightly different) questions on this site before, and I know the people here are the most qualified to answer mine.

    I'm currently between my sophomore and junior years at a 4-year university, and after a bit of deliberation I've decided on computer science as a major (a BA, by the way, as a BS would require me to stay at least an extra year the way our program is set up). I've been interested in programming for a few months now and have developed a passion for it in a very short time. I began learning C++, migrating to Java recently when I learned my school focuses on that language.

    I should mention that the concept of higher education has never sat well with me, so part of my motivation for wanting to take time off is to truly challenge myself and see what I can accomplish when I actually try at something. The autodidact in me finds it difficult to focus on my passions while trying to keep a high GPA in unrelated classes. However, I understand the times we live in, and I would therefore plan to complete my degree after this year off.

    So my question is whether or not the skills I would learn in a year off from college could justify the time away from school. Unfortunately, I don't believe I know enough yet to gain any professional experience (internship, etc.), so I would mostly focus my time on learning Java and another technology, possibly WordPress (to gain an understanding of web programming concepts, as I have not yet decided what field I want to get into, and to make some money to fund my year off), and on delving into security concepts, which also interest me. I'm hoping I could work on projects, such as simple applications or contributions to open source software, during this time to enhance my resume once I do finish school, so I can find a job out of college more easily. I do not want to be the new hire who knows nothing beyond the concepts of his Java textbooks.

    Does anyone have any input on these thoughts of mine, or any ideas for where I should focus my studies or how high I might set the bar for my work? Thanks a lot everyone!

  • 12.04 Booting into Terminal

    - by user170796
    To preface this, I would like to say that I am completely new to Ubuntu and have essentially zero programming experience or experience working with the command line and terminal. I installed Ubuntu because I would like to get into programming. If you could provide me with the simplest instructions possible, I would be grateful.

    I have a Lenovo IdeaPad Y500 (Intel i7, NVIDIA GT 750M, 1TB HDD, 16GB SSD cache, 8GB RAM) with Windows 8 on it. Using a live CD, I installed Ubuntu 12.04 onto a 75GB partition. During the installation I kept all default settings except for one thing: I decided to encrypt my home folder, and so checked the corresponding box. The installation completed, and I restarted. Once I restarted, I saw the options:

    - Ubuntu, with Linux 3.2.0-23-generic
    - Ubuntu, with Linux 3.2.0-23-generic (recovery mode)
    - Memory test (memtest86+)
    - Memory test (memtest86+, serial console 115200)
    - Windows Recovery Environment (loader) (on /dev/sdb3)
    - Windows 8 (loader) (on /dev/sdb5)
    - System Setup

    I chose the first option and was directed to a screen with the Ubuntu logo and the row of five dots below it that change from orange to white. Then I was brought to a full-screen terminal that prompted me to log in, which I did. I saw no option to boot into a GUI at all, and am lost. I've been searching around and have tried the "startx" command to no avail. Should the command have some sort of context or something? (See the sketch below.)

    I've also tried selecting the recovery mode option from the boot manager, and then the resume option from the following menu, which eventually just shuts down the computer after displaying a lot of scrolling text that's too fast for me to read. I've also tried the failsafeX mode from the recovery mode menu, which only brings up a terminal box at the bottom of the window that covers the entire bottom part of the screen; commands won't work in this window.

    When I try to access Windows 8, I get a message saying that the EFI file path was not specified, or something along those lines. I had to enable Secure Boot in order to access Windows 8 (I had disabled it to be able to boot from the live CD), which is functioning normally. I am at a complete loss for what to do. Any help will be extremely appreciated.

    EDIT: Bonus question! If you could figure out a way for me to boot to Windows 8 without having to enable Secure Boot, it would save me a lot of trouble. I can deal with switching every time, but I'd rather not have to.
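
    A few things worth trying from that text console, as a minimal sketch assuming the stock 12.04 LightDM setup; none of these should make things worse:

        # Start the display manager directly instead of startx
        sudo service lightdm start

        # If that fails, capture why X itself won't come up
        startx 2> ~/startx.log
        less ~/startx.log
        less /var/log/Xorg.0.log

        # Two common culprits: a full disk, or a missing/broken desktop stack
        df -h
        sudo apt-get update
        sudo apt-get install --reinstall ubuntu-desktop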

  • Where can I find comprehensive documentation on the various aspects of Linux?

    - by Fsando
    Whenever a problem pops up in my use of Linux (I've been a full-time user for 6 years), I start by googling. The simplest (or most common) issues are usually cleared up right away. If it's not in one of those two categories, the advice I find tends to be wrong, misguided, or obsolete, and if you ask a question here on "ask" you risk being marked as a "duplicate" of some superficially similar question.

    Some issues haunted me for years until I suddenly hit on the actual developer's documentation (or equivalent), which in a few lines explained how to solve my issue the correct and consistent way. Whenever that happens I'm always kicking myself: why didn't I just go here to begin with? And the obvious answer is: "I had no idea this was what I was looking for." And for the issues where this hasn't yet happened, I'm banging my head: "This ought to be either straightforward, or someone should tell me it's not doable."

    So my question is: are there projects out there trying to collect or list this documentation in a searchable/browsable way? I know there are many very good "if you want this, do that" tutorials on Ubuntu, but I'm looking for actual documentation that either is, or could be, collected in one place (at least conceptually), so that a search for information could start in one place. I'm fully aware this is a broad question, but you can approach it as:

    - Does GNOME have a comprehensive documentation project, and where do I find it?
    - Does Ubuntu have a comprehensive documentation project, and where do I find it?

    For example:

    - How exactly does mime-type association work in Ubuntu and in Xubuntu?
    - How exactly are menus created (in Ubuntu: quicklists; in Xubuntu/GNOME: the main menu)?
    - How exactly does the rendering process work for compiz/X? (I'm having an issue where windows randomly stop updating until somehow forced to resume, I guess. So, for instance, where do I look for logs that may indicate the problem? How may I change RandR or other settings that may influence this issue?)

    So my point is to organize exact documentation, or preferably to find projects that do this already. Thanks! If answers to this question get me started, I'm hoping to collect such a list.

  • Get user profile size in vbscript

    - by Cameron
    Hello, I am trying to get the size of a user's local profile using VBScript. I know the directory of the profile (typically "C:\Users\blah"). The following code does not work for most profiles (Permission Denied, error 800A0046):

        Dim folder
        Dim fso
        Set fso = WScript.CreateObject("Scripting.FileSystemObject")
        Set folder = fso.GetFolder("C:\Users\blah")
        MsgBox folder.Size ' Error occurs here

    Is there another way to do this?

    UPDATE: I did some deeper digging, and it turns out that the Permission Denied error occurs if permission is denied to some subfolders or files of the directory whose size I wish to get. In the case of user profiles, there are always a few system files that even the Administrator group does not have permission to access. To get around this, I wrote a function that tries to get the folder size the normal way (above); then, if the error occurs, it recurses into the subdirectories of the folder, ignoring the sizes of folders that are permission denied (but not the rest of the folders).

        Dim fso
        Set fso = WScript.CreateObject("Scripting.FileSystemObject")

        Function getFolderSize(folderName)
            On Error Resume Next

            Dim folder
            Dim subfolder
            Dim size
            Dim hasSubfolders

            size = 0
            hasSubfolders = False
            Set folder = fso.GetFolder(folderName)

            ' Try the non-recursive way first (potentially faster?)
            Err.Clear
            size = folder.Size
            If Err.Number <> 0 Then
                ' Did not work; do recursive way:
                For Each subfolder In folder.SubFolders
                    size = size + getFolderSize(subfolder.Path)
                    hasSubfolders = True
                Next
                If Not hasSubfolders Then
                    size = folder.Size
                End If
            End If

            getFolderSize = size
            Set folder = Nothing ' Just in case
        End Function
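
    A usage note, as a sketch: save the function with a call such as WScript.Echo getFolderSize("C:\Users\blah") appended (the file name below is hypothetical), and run it under cscript so each result prints to the console instead of raising a dialog per run:

        cscript //nologo profilesize.vbs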

  • The frame buffer layout of the current display cannot be made to match...

    - by adambox
    I get this error when I have a VM running in VMware Workstation on my work PC and Remote Desktop in from my iMac at home. I just upgraded VMware Workstation from 6.0 to 7.0, and now I'm getting it when I try to resume my VM at work. It then asks me the scary question of whether I want to preserve or discard the suspended state. I don't want to lose stuff! Ack!

    Update: I backed up my VM and tried hitting that "discard" button, and the result was a reboot of the VM. I then tried restoring to a snapshot, and none of my snapshots work! Is there any way to fix this so I can run 7 but still have my old snapshots?

        The frame buffer layout of the current display cannot be made to match the
        frame buffer layout stored in the snapshot.
        The dimensions of the frame buffer in the snapshot are:
          Max width 3200, Max height 1770, Max size 22659072.
        The dimensions of the frame buffer on the current display are:
          Max width 3200, Max height 1600, Max size 20512768.
        Error encountered while trying to restore the virtual machine state from
        file "C:\Documents and Settings\adam\My Documents\My Virtual Machines\dev\Windows XP Professional.vmss".

    What do I do so I don't break things horribly?
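
    A hedged sketch of one thing to try before giving up on the snapshots: the error says the saved state was taken with a 3200x1770 max frame buffer, while the current display configuration allows only 3200x1600, so pinning the old dimensions in the .vmx sometimes lets the state load. This is an assumption based purely on the error text; back up the .vmx first and do this with Workstation closed:

        cd "C:\Documents and Settings\adam\My Documents\My Virtual Machines\dev"
        copy "Windows XP Professional.vmx" "Windows XP Professional.vmx.bak"
        echo svga.autodetect = "FALSE">> "Windows XP Professional.vmx"
        echo svga.maxWidth = "3200">> "Windows XP Professional.vmx"
        echo svga.maxHeight = "1770">> "Windows XP Professional.vmx"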

  • IIS7 bulk bindings in vbscript , how to remove a binding

    - by minus4
    I have a script to manage adding over a thousand domains to a single site's bindings, and that has gone fine, but now the client wants about 20 of them removed. Microsoft's programmers didn't think it would be nice to sort the bindings alphabetically, so does anyone know the code to remove a list of domains in bulk? This is what I am using:

        Set adminManager = CreateObject("Microsoft.ApplicationHost.WritableAdminManager")
        adminManager.CommitPath = "MACHINE/WEBROOT/APPHOST"

        Set sitesSection = adminManager.GetAdminSection("system.applicationHost/sites", "MACHINE/WEBROOT/APPHOST")
        Set sitesCollection = sitesSection.Collection

        siteElementPos = FindElement(sitesCollection, "site", Array("name", "microsites"))
        If siteElementPos = -1 Then
            WScript.Echo "Element not found!"
            WScript.Quit
        End If

        On Error Resume Next

        Set siteElement = sitesCollection.Item(siteElementPos)
        Set bindingsCollection = siteElement.ChildElements.Item("bindings").Collection

        Dim arrFileNames : arrFileNames = Array("list of domains")
        Dim objDict : Set objDict = CreateObject("Scripting.Dictionary")
        Dim strFileName, strTemp

        For Each strFileName In arrFileNames
            Set bindingElement = bindingsCollection.CreateNewElement("binding")
            bindingElement.Properties.Item("protocol").Value = "http"
            bindingElement.Properties.Item("bindingInformation").Value = "192.168.100.19:80:" & strFileName
            bindingsCollection.AddElement(bindingElement)
        Next

        adminManager.CommitChanges()
        WScript.Echo "Job Completed"
        WScript.Quit

        Function FindElement(collection, elementTagName, valuesToMatch)
            For i = 0 To CInt(collection.Count) - 1
                Set element = collection.Item(i)
                If element.Name = elementTagName Then
                    matches = True
                    For iVal = 0 To UBound(valuesToMatch) Step 2
                        Set property = element.GetPropertyByName(valuesToMatch(iVal))
                        value = property.Value
                        If Not IsNull(value) Then
                            value = CStr(value)
                        End If
                        If Not value = CStr(valuesToMatch(iVal + 1)) Then
                            matches = False
                            Exit For
                        End If
                    Next
                    If matches Then
                        Exit For
                    End If
                End If
            Next
            If matches Then
                FindElement = i
            Else
                FindElement = -1
            End If
        End Function

    So, as you can see, it is easy to add bindings, but I can find no code, manual or instructions for the removal. I can't seem to run appcmd either; at first I tried creating a batch file using appcmd, but that never worked, always saying appcmd could not be found. Thanks
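
    A hedged sketch of the removal side: the "appcmd can not be found" error is usually just a PATH issue, since appcmd.exe lives in %windir%\system32\inetsrv and is not on the PATH by default. The /-bindings syntax below is standard appcmd; the two domains in the loop are placeholders for the real list of 20:

        @echo off
        for %%d in (domain1.com domain2.com) do (
            %windir%\system32\inetsrv\appcmd.exe set site /site.name:"microsites" /-bindings.[protocol='http',bindingInformation='192.168.100.19:80:%%d']
        )

    The equivalent in the script above would be to locate each unwanted binding element by its bindingInformation value and call bindingsCollection.DeleteElement with that index, then CommitChanges once at the end.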

  • Benchmarking a file server

    - by Joel Coel
    I'm working on building a new file server: a simple Windows Server box with a few terabytes of disk space to share on the LAN. Pain of current hard drive prices aside :( ... I would like to get some benchmarks for this device under load, compared to our old server.

    The old server was installed in 2005 and had five 136GB 10K disks in RAID 5. The new server has eight 1TB disks in two RAID 10 volumes (plus a hot spare for each volume), but they're only 7.2K RPM, and of course it has a much larger cache size.

    I'd like to get an idea of the performance expectations of the new server relative to the old. Where do I get started? I'd like to know both the raw potential under different kinds of load for each server, as well as an idea of what our real-world load looks like and how it will translate. Will disk load even matter, or will performance be driven more by the network connection? (See the sketch below.)

    I could probably fumble through some disk I/O and wait counters in Performance Monitor, but I don't really know what to look for, which counters to watch, or for how long and when. FWIW, I'm expecting a nice improvement because of the benefits of having two different volumes and the better performance of RAID 10 vs RAID 5, in spite of using slower disks, but I'd like to get an idea of how much.
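
    One hedged way to get the "real-world load" half of the answer with tools already on the box: log a handful of disk and network counters on the old server across a typical day, then repeat on the new one. The counter names below are standard PerfMon counters; the sampling interval and count are arbitrary (15 seconds for roughly a working day):

        typeperf -si 15 -sc 2400 -f CSV -o baseline.csv ^
          "\PhysicalDisk(_Total)\Avg. Disk sec/Transfer" ^
          "\PhysicalDisk(_Total)\Avg. Disk Queue Length" ^
          "\PhysicalDisk(_Total)\Disk Transfers/sec" ^
          "\Network Interface(*)\Bytes Total/sec"

    If Avg. Disk sec/Transfer stays in the low milliseconds while Bytes Total/sec sits near the NIC's ceiling, the network is the bottleneck rather than the disks.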

  • Setting up SQL Server 2005 to use all available memory in 32bit Windows Server 2003 - and verifying

    - by Rizwan Kassim
    There are a number of questions along this line, but they either contradict each other or don't show how to properly verify that everything is actually working; hopefully this can be comprehensive.

    I'm running SQL Server 2005 SP3 Standard on Windows Server 2003 R2 Standard. My server has 8GB of memory installed, and the system is almost entirely used as a database server; there are some services running on it, but the OS plus services can run within 1GB of RAM.

    What I've done (please tell me if I'm doing something wrong):

    - /3GB in boot.ini (to increase the amount of user-space memory available - info)
    - /PAE in boot.ini (Windows claimed to be doing PAE even without this switch, somehow)
    - Enabled AWE in SQL Server
    - Enabled the Lock Pages in Memory option for the SYSTEM and Local Service users (info); SQL Server Standard doesn't seem to use this until Cumulative Update 4, which isn't installed on my server (info)
    - Set min/max memory to 1024MB/5112MB

    After doing all the above, we definitely saw a level of improvement, but I'd now like to verify my settings and make sure I'm making full use of the memory available. (There appeared to be a slowdown when max = 7GB, so I edged off from that value, but it might have been just perceptual.) To verify, I checked the following levels in PerfMon:

    - Process(sqlservr): Working Set: 76386304
    - SQL Server (Memory Manager): Total Server Memory: 3538944 (I saw a doc noting that this isn't the full memory used by SQL Server, so I'm not sure whether to trust it)

    So, my questions (see the sketch below):

    - Should my max be around 7GB? If not, what should it be?
    - Why is total server memory at 3.5GB when it's been allocated 5GB?
    - What is the proper metric for the amount of memory allocated to SQL Server? The working set seems a bit large...
    - Am I possibly missing any steps in the setup?
    - Any recommended resources on starting to tune the caching system now?

    Thanks
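
    A hedged sketch for the verification step, using sqlcmd (installed with SQL Server 2005). These are standard sp_configure/DBCC calls; note that with AWE on 32-bit Windows, AWE-mapped pages do not appear in the process working set, which is one reason the PerfMon numbers look inconsistent with each other:

        sqlcmd -E -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE;"
        sqlcmd -E -Q "EXEC sp_configure 'awe enabled';"
        sqlcmd -E -Q "EXEC sp_configure 'max server memory (MB)';"
        REM Detailed breakdown of what the buffer pool actually holds:
        sqlcmd -E -Q "DBCC MEMORYSTATUS;"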

  • Chunking large rsync transfers?

    - by Gabe Martin-Dempesy
    We use rsync to update a mirror of our primary file server to an off-site, colocated backup server. One of the issues we currently have is that our file server has 1TB of mostly smaller files (in the 10-100KB range), and when we're transferring this much data we often end up with the connection being dropped several hours into the transfer. Rsync doesn't have a resume/retry feature that simply reconnects to the server to pick up where it left off; you need to go through the file comparison process, which ends up being very lengthy with the number of files we have.

    The recommended way around this is to split up your large rsync transfer into a series of smaller transfers. I've figured the best way to do this is by the first letter of the top-level directory names, which doesn't give us a perfectly even distribution, but is good enough. I'd like to confirm whether my methodology for doing this is sane, or if there's a simpler way to accomplish the goal.

    To do this, I iterate through A-Z, a-z, 0-9 to pick a one-character $prefix. Initially I was thinking of just running:

        rsync -av --delete --delete-excluded --exclude "*.mp3" "src/$prefix*" dest/

    (--exclude "*.mp3" is just an example; we have a lengthier exclude list for removing things like temporary files.)

    The problem with this is that any top-level directories in dest/ that are no longer present on src will not get picked up by --delete. To get around this, I'm instead trying the following:

        rsync \
          --filter "S /${prefix}*" \
          --filter "R /${prefix}*" \
          --filter 'H /*' \
          --filter 'P /*' \
          -av --delete --delete-excluded --exclude "*.mp3" src/ dest/

    I'm using show and hide over include and exclude, because otherwise --delete-excluded will delete anything that doesn't match $prefix.

    Is this the most effective way of splitting the rsync into smaller chunks? Is there a more effective tool, or a flag that I've missed, that might make this simpler? (A sketch of the full loop follows.)
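
    For reference, a minimal sketch of the complete per-prefix loop, assuming the filter-based variant above; a prefix that matches nothing simply makes that pass a no-op. Note the double quotes around the $prefix filters, since single quotes would stop the variable from expanding:

        #!/bin/bash
        for prefix in {A..Z} {a..z} {0..9}; do
          rsync \
            --filter "S /${prefix}*" \
            --filter "R /${prefix}*" \
            --filter 'H /*' \
            --filter 'P /*' \
            -av --delete --delete-excluded --exclude "*.mp3" \
            src/ dest/ || echo "prefix ${prefix} failed; rerun it" >&2
        done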

  • Is there any way to customize the Windows 7 taskbar auto-hide behavior? Delay activation? Timer?

    - by calbar
    I'm becoming increasingly frustrated with the way Windows 7 handles showing a hidden taskbar. It's incredibly over-eager to pop out and obscure what I'm really trying to interact with, requiring me to move the mouse away, wait for it to auto-hide again, then resume what I was doing, but more deliberately. After closely examining the behavior, it appears that a hidden taskbar "peeks out" from the edge by 2 or 3 pixels, and slowly moving your mouse into this area activates it; you don't even need to touch the edge of the screen.

    I would love it if there were a way to customize or change this behavior. Ideally, the taskbar would only pop out if you are actively "pushing" the edge of the screen it is hidden on, so activation only occurs once you've reached the screen's edge and continue to move the mouse past a customizable threshold. Alternatively, a simple activation delay would suffice as well: only if the mouse remains in that 2-3 pixel area (a.k.a. on the taskbar) for greater than a customizable amount of time does it pop out again. This would only be a fraction of a second. Often the cursor simply "careens" off the edge of the screen while trying to focus on something nearby.

    Anyway, if there are any registry settings or utilities that can achieve either of these effects, that would be great! Thanks for your help.

  • Create new folder for new sender name and move message into new folder

    - by Dave Jarvis
    Background

    I'd like to have Outlook 2010 automatically move e-mails into folders designated by the sender's name. For example:

    1. Click Rules
    2. Click Manage Rules & Alerts
    3. Click New Rule
    4. Select "Move messages from someone to a folder"
    5. Click Next (the corresponding dialog is shown)

    Problem

    The next part usually looks as follows:

    1. Click "people or public group"
    2. Select the desired person
    3. Click "specified"
    4. Select the desired folder

    Question

    How would you automate those problematic manual tasks? Here's the logic for the new rule I'd like to create:

    1. Receive a new message.
    2. Extract the name of the sender.
    3. If a folder for that name does not exist, create a new folder under Inbox.
    4. Move the new message into the folder assigned to that person's name.

    I think this will require a VBA macro.

    Related Links

    - http://www.experts-exchange.com/Software/Office_Productivity/Groupware/Outlook/A_420-Extending-Outlook-Rules-via-Scripting.html
    - http://msdn.microsoft.com/en-us/library/office/ee814735.aspx
    - http://msdn.microsoft.com/en-us/library/office/ee814736.aspx
    - http://stackoverflow.com/questions/11263483/how-do-i-trigger-a-macro-to-run-after-a-new-mail-is-received-in-outlook
    - http://en.kioskea.net/faq/6174-outlook-a-macro-to-create-folders
    - http://blogs.iis.net/robert_mcmurray/archive/2010/02/25/outlook-macros-part-1-moving-emails-into-personal-folders.aspx

    Update #1

    The code might resemble something like:

        Public WithEvents myOlApp As Outlook.Application

        Sub Initialize_handler()
            Set myOlApp = Outlook.Application
        End Sub

        Private Sub myOlApp_NewMail()
            Dim myInbox As Outlook.MAPIFolder
            Dim myItems As Outlook.Items
            Dim myItem As Outlook.MailItem
            Dim myDestinationFolder As Outlook.MAPIFolder
            Dim mySenderName As String

            On Error GoTo ErrorHandler

            Set myInbox = myOlApp.GetNamespace("MAPI").GetDefaultFolder(olFolderInbox)

            ' NewMail carries no item, so grab the newest message in the Inbox
            Set myItems = myInbox.Items
            myItems.Sort "[ReceivedTime]", True
            Set myItem = myItems.Item(1)

            mySenderName = myItem.SenderName ' String value, not an object

            ' Reuse the sender's folder if it exists; otherwise create it
            On Error Resume Next
            Set myDestinationFolder = myInbox.Folders(mySenderName)
            On Error GoTo ErrorHandler
            If myDestinationFolder Is Nothing Then
                Set myDestinationFolder = myInbox.Folders.Add(mySenderName)
            End If

            myItem.Move myDestinationFolder
            Exit Sub

        ErrorHandler:
            Resume Next
        End Sub

    Update #2

    Split the code as follows: Sent a test message and nothing happened. The instructions for actually triggering a macro when a new message arrives are a little light on details (for example, no mention is made of ThisOutlookSession and how to use it). Thank you.

  • How to rescue from an SD (SDHC) card that I can't reformat (possible hardware failure)

    - by sbwoodside
    I have a Transcend 16GB SDHC card with a lot of photos on it that I'd like to recover. When I plug it into the SD card reader, it takes a while for the Mac to even recognize that a disk is present, and it shows up as 1.07GB with geometry 520/64/63 (according to fdisk).

    First I tried file recovery:

    - PhotoRec: no files are found (the images are in CR2 format, and I'm using testdisk-6.14-WIP, which claims to recognize that format under TIF)
    - dd / ddrescue: they create a 1.07GB image, with the same problem as above
    - TestDisk: doesn't find any partitions to recover

    I found a source saying that the correct geometry for this type of SD card is heads 255, sectors/track 63, cylinders 1953, so I tried manually setting that geometry in PhotoRec/TestDisk. No improvement.

    Next I tried formatting the disk with fdisk. After writing and quitting, I ran fdisk again and it reported that the new format hadn't been saved on the disk. I also tried resetting the format/partitions with TestDisk, and that failed too. The fdisk log is below.

    I don't really care about the card; I've already ordered a new SanDisk card. But I'd like to get the data off. Is there any way to force dd or some other tool to create an image of the disk based on the original geometry, and not on what the card "thinks" its geometry is? Or am I missing something? (See the sketch below.)
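
    A hedged sketch of forcing the image size from the claimed-correct geometry: 255 heads x 63 sectors/track x 1953 cylinders = 31,374,945 sectors of 512 bytes, about 16GB. The device node below is a placeholder; check it with diskutil first. If the controller genuinely stops answering past the 1.07GB it now reports, dd will error out there no matter what, which itself points to a dead flash controller rather than a recoverable layout:

        diskutil list                      # find the right /dev/diskN first
        sudo dd if=/dev/rdisk2 of=card.img bs=512 count=31374945 conv=noerror,sync
        # then run PhotoRec/TestDisk against card.img instead of the device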

  • Increasing SQL Server / Sage performance with SSD? (Dell PE T410)

    - by Anthony
    I have a client wanting better performance from their Sage (Accpac & CRM) server (v5.5, soon to be v7). It's running on one of two Hyper-V VMs (Server 2008) on a Dell PE T410 server with 24GB of RAM (1333MHz) and dual quad-core CPUs, and both VMs (only their C: drives) are on a single RAID 5 array. All clients connect via 1Gb Ethernet. The second VM is SBS 2008 with 9GB RAM (all SBS databases & company data are on a separate RAID 5 array), leaving 3GB RAM for the Server 2008 hypervisor.

    I've given the Sage/SQL Server VM all the RAM I can (12GB) and enabled SQL Server RAM caching (~8GB, never exceeds ~7.5GB; e.g. the entire db can now be cached in RAM), and that's helped significantly. Upgrading the hypervisor to Server 2012 is an obvious step, but probably not a dramatic improvement?

    What about an SSD for this Sage/SQL Server VM (VM = 100GB, <10GB for the actual live DB)?

    - Can SSDs be put into the SAS hot-swap bays? Or will I have to use the motherboard SATA (3Gbps?) ports, or a PCI-E SSD card?
    - Should SSDs be RAIDed for this situation? Or does an SSD's higher reliability offset the need for RAID 1/5/10? (I have nightly full-disk backups.)

    New territory for me; I would appreciate some feedback. Thanks, Anthony.

  • Windows 8 auto-hibernate from sleep not working on Retina MacBook Pro

    - by frenchglen
    I have a similar question to this one, only my context is the 15" Retina MacBook Pro and Windows 8. I have just the original Mac OS X Mountain Lion on there, then Windows 8 via Boot Camp; no rEFIt installed. (I just press ALT every time I restart into Windows, actually as a security measure to stop tech-unsavvy thugs who, if the laptop is stolen, think it's only a Mac and don't discover my Windows install as quickly as they would have, by which time I can remotely activate various anti-theft Mac apps and nab them that way.)

    So, like the related question asks, why isn't it behaving like it should? The Windows 7 FAQ states:

        Will sleep eventually drain my laptop battery? If your laptop battery
        charge gets critically low while the computer is asleep, Windows
        automatically puts the laptop into hibernation mode.

    But this is just not happening on my rMBP under Windows 8. It seems that EVERY time I set the laptop to sleep (when it reaches 10%), then arrive home, plug it in and hope to simply resume my work, it does NOT save the session to disk, and I lose ALL my work.

    Whose fault is it? Windows 8's (a bug, grr)? Or Apple's EFI system (maybe fixable by editing EFI options; do I perhaps have to install rEFIt to make it work)? Or maybe changing Windows power options can somehow fix the problem? (See the sketch below.) Thanks for your help.
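
    A hedged first check from an elevated command prompt on the Windows 8 side: make sure hibernation is enabled at all, and that the critical-battery action on DC (battery) power is actually set to hibernate. The alias names below are standard powercfg aliases (see powercfg /aliases); value 2 means hibernate:

        powercfg /hibernate on
        powercfg /setdcvalueindex SCHEME_CURRENT SUB_BATTERY BATACTIONCRIT 2
        powercfg /setactive SCHEME_CURRENT
        powercfg /query SCHEME_CURRENT SUB_BATTERY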

  • HTTP Upload Problems

    - by jfoster
    We are running a marketplace on ColdFusion 8 and IIS, with a widely geographically distributed user base, and we have been receiving complaints of issues with some HTTP uploads. Most of the complaints are coming from locations geographically distant from our main datacenter on the US east coast.

    I've attempted to upload the same 70MB file from a US west coast test server to both our main site and a backup running the same code on a different network route, and I saw the same issues fairly consistently in both places, so I've ruled out the code, the route, and internal network errors. I've also tested uploads using both the native CF upload tag and a third-party tool called SaFileUp, and saw the same issues with both, so I also don't think this is necessarily a ColdFusion problem. I don't have any problems uploading the test file from the east coast to other east coast servers, so I'm beginning to think that the distance between our users and our equipment is a factor. I've also found that smaller files (< 10MB) are more likely to succeed than large ones.

    I tried the test upload with both IE and FF and did notice a difference in the way the browsers seemed to handle packet errors. IE seemed to have a tough time continuing an upload after dropped or bad packets, whereas FF seemed to be able to gracefully resume an upload after experiencing packet problems.

    Has anyone experienced similar issues? Is there anything we can do on our side to make uploads more forgiving of packet loss, or resumable after an error (a different upload tool, etc.)? Do we need upload servers in more than one location to shorten the network routes between clients and servers? Does anyone think that switching uploads to SSL will help (no layer-7 packet sniffing may lead to a smoother upload)? (See the sketch below.) Thanks.
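
    A hedged way to put numbers on the distance/packet-loss theory before changing tools, using stock utilities (pathping ships with Windows; curl may need installing, and the hostname and upload path below are placeholders):

        REM Per-hop packet loss from a complaining client's location:
        pathping marketplace.example.com

        REM Reproduce the upload outside a browser, so browser retry behavior
        REM is out of the picture; a mid-transfer stall shows plainly in -v output:
        curl -v --form "upload=@testfile-70mb.bin" http://marketplace.example.com/upload.cfm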

  • Bash mine script, please

    - by HomelyPoet
    The script, in and of itself, is fairly self-explanatory. Use it if you so desire; any and all criticism would be appreciated, as would any suggestions for improvement. The first iteration was written on OS X 10.5.8 Leopard; the current iteration was run on OS X 10.6.4 Snow Leopard with Safari 5.0.2 (6533.18.5).

    Also, can anyone illuminate why the first line 'if [ -f ]' works, but 'if [ -f ~/Library/Safari/LocalStorage/*.localstorage ]' generates an error? (Yes, I am a bit of a noob.)

    Code:

        #! /bin/bash
        # SafariClear0.0.6

        # Note: "[ -f ]" always succeeds because test with a single argument
        # just checks that the argument is a non-empty string; "-f" is data
        # here, not an operator. The glob version errors out because the
        # pattern can expand to several filenames, giving [ too many arguments.

        shopt -s nullglob
        for db in ~/Library/Safari/LocalStorage/*.localstorage; do
            cat /dev/null > "$db"
            rm -f "$db"
        done

        if compgen -G "$HOME/Library/Safari/LocalStorage/*.localstorage" > /dev/null; then
            echo "Oy vey!"
        fi

        cd ~/Library/Safari/ || exit 1
        cat /dev/null > WebpageIcons.db
        cat /dev/null > TopSites.plist
        cat /dev/null > LocationPermissions.plist
        cat /dev/null > LastSession.plist
        cat /dev/null > History.plist
        echo "Clear"
        exit 0

  • Logging hurts MySQL performance - but, why?

    - by jimbo
    I'm quite surprised that I can't see an answer to this anywhere on the site already, nor in the MySQL documentation (section 5.2 seems to have logging otherwise well covered!).

    If I enable binlogs, I see a small performance hit (subjectively), which is to be expected with a little extra IO, but when I enable the general query log, I see an enormous performance hit (double the time to run queries, or worse), way in excess of what I see with binlogs. Of course I'm now logging every SELECT as well as every UPDATE/INSERT, but other daemons record their every request (Apache, Exim) without grinding to a halt.

    Am I just seeing the effects of being close to a performance "tipping point" when it comes to IO, or is there something fundamentally difficult about logging queries that causes this to happen? I'd love to be able to log all queries to make development easier, but I can't justify the kind of hardware it feels like we'd need to get performance back up with general query logging on. (See the sketch below.)

    I do, of course, log slow queries, and there's negligible improvement in general usage if I disable this.

    (All of this is on Ubuntu 10.04 LTS, MySQLd 5.1.49, but research suggests this is a fairly universal issue.)
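
    Since MySQL 5.1 the general log can be switched on and off at runtime, so one hedged workaround is to pay its cost only during development sessions rather than permanently; a minimal sketch:

        # Turn it on just while debugging (FILE output; TABLE is slower still)
        mysql -u root -p -e "SET GLOBAL log_output = 'FILE';
                             SET GLOBAL general_log_file = '/var/log/mysql/general.log';
                             SET GLOBAL general_log = 'ON';"

        # ...reproduce the queries you care about, then:
        mysql -u root -p -e "SET GLOBAL general_log = 'OFF';"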

  • Issues with sustained traffic with PFSense

    - by Farseeker
    Last week we had to replace our pfSense firewall because it had a catastrophic hardware failure. All but one of the NICs were taken out of the old server and put into the new one. The one NIC that was not moved was the LAN NIC, as this one is onboard. The other NICs are all WAN connections, and they must all be present (i.e. I can't disable one just for the sake of testing).

    After re-installing pfSense and restoring our backup of the configuration, everything came back online just fine. However, on the new hardware any download that takes longer than about 10 seconds just times out in the middle. For example:

    - Downloading from microsoft.com goes at about 900k/sec and times out after about 10 seconds (thus, just under 10MB of content).
    - Downloading from cnet.com goes at about 300k/sec and times out after about 10 seconds (thus, about 3MB of content).

    By "times out", I mean that the download just stops, and you have to pause/resume to get the next part done, repeat and rinse until the download is complete. But it's not consistent: sometimes it's 10 seconds, sometimes it's 4 seconds, and sometimes you can't even load a heavy HTML page because the page never finishes loading.

    I assume this is most likely because pfSense does not like the onboard NIC, as this is the primary difference between the two servers. It's recognised as nfe0, there's no room in the server for any more NICs, and I don't have any dual-port NICs handy to experiment with a different LAN connection.

    I've never had to troubleshoot this sort of issue before. Can anyone give me some pointers about where to start? Linux is not my forte, so please be kind! (See the sketch below.)
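
    One hedged starting point, since the onboard nfe(4) NIC is the main hardware difference: hardware offload features on some onboard chips interact badly with pf and can produce exactly this kind of mid-transfer stall. They can be turned off from a pfSense shell to test (the matching checkboxes live under System > Advanced in the WebGUI; drop any flag ifconfig rejects for this chip):

        # Disable checksum/segmentation offloads on the LAN NIC and retest
        ifconfig nfe0 -txcsum -rxcsum -tso4 -lro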

  • How to prevent blocking http auth popups on firefox restart with many tabs open

    - by Glen S. Dalton
    I am using the latest Firefox with Tab Mix Plus and TabGroups Manager, and I have maybe 50 or 100 tabs open in different tab groups. When I shut down Firefox and start it again, all tabs and tab groups are perfectly rebuilt. But I also have many pages open that sit behind standard HTTP auth, and these pages all request their usernames and passwords, so during startup Firefox pops up all of these pages' HTTP auth windows. (I am involved in website development, and the beta versions are behind Apache HTTP auth.)

    These popups block everything else in Firefox; they are like modal windows. I have to click the OK button many times in the popups before I can do anything, even though all the usernames and passwords are already filled in. (The Firefox taskbar entry blinks, the Firefox window heading also blinks, and focus switches back and forth, which also annoys me. And sometimes the popups do not react to my clicks, because Firefox is maybe just switching focus somewhere else. This is the worst.)

    I want a plugin or some other way to skip those popups. There are some plugins I tried some time ago, but they did not do what I need, because they require a mouse click for each login, which is no improvement over the situation as it already is. This is not about password storage (Firefox already stores them). But of course, if some password-storing plugin could heal this, it would be great.

  • Mac OS X printing to CUPS - More intuitive authentication failure?

    - by Moduspwnens
    We have a network-wide CUPS server that offers authenticated printer access to all our campus users. We've been pretty disappointed with the way Mac clients handle bad printing authentication, though. In any other authentication dialog, when a user types in a bad username or password, the window shakes briefly, allowing the user to re-enter it. With printers, this isn't the case: it will happily accept (and even save to the keychain, if specified) bad credentials. The authentication dialog is dismissed, and the user then has to deal with the print jobs showing up as "On hold (authentication required)". To get their job printed, they need to select it in the printer's queue, click "Resume", then re-enter the appropriate credentials.

    Is there a way to get failed printing authentication to behave more intuitively for Mac OS X clients? We're trying to support a BYOD environment, but our end users have been really confused by this. It's made even worse by the way the dialog pre-populates the user's full login name (e.g. "Smith, John"), which tends to make them think they should use their local machine passwords. (See the sketch below.)
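
    One hedged server-side knob to experiment with: CUPS lets you declare per-queue what credentials a job must carry via the auth-info-required attribute, and OS X clients key their prompting behavior off it. Whether it fixes the silent bad-credential case on every OS X version is an assumption to verify, but it is the documented setting for this behavior; PrinterName is a placeholder:

        # On the CUPS server: require username/password auth info per job
        lpadmin -p PrinterName -o auth-info-required=username,password

    (auth-info-required=negotiate is the Kerberos variant, if the campus uses single sign-on.)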

  • How to "swap in" again memory from page file to physical memory in Windows at once (like linux swap-off)

    - by Arnout
    Is there a way to "swap back in" (put back into physical memory) all the memory data that was pushed out to the page file on a Windows PC? On Linux, one can easily do this with swapoff /dev/sdaX, where X is the swap partition. Windows seems to ask me to reboot each time.

    The reason I'd like to do this is that, even though swapping the data out to the page file allows me to play a resource-hungry game entirely from physical RAM, when I stop the game, all the rest of my programs run slowly. This is of course normal: all the programs were pushed into the page file because my RAM was too small, and all memory accesses to those programs after gaming run into hard page faults, with major delays and some frustration as a consequence.

    However, that frustration could easily be avoided by simply allowing the PC to copy all the data back into physical memory for a minute or so, and then resuming work on a fast PC, rather than having to endure the slowness while working.

    Thanks in advance for any advice on this! Kind regards
