Search Results

Search found 25660 results on 1027 pages for 'booting issue'.


  • Can you boot an Acer Aspire One from an SD card when no BIOS is available?

    - by henrijs
    Is it possible to boot the Acer Aspire One PC from an SD card? I have bricked an Aspire One; it does not even start the BIOS. Aspire Ones are known for this issue, and a BIOS update usually works (it helped me once in the past), but this time it's all over and the BIOS update fails. It still reads the SD card with the magic Ctrl + Esc shortcut used to launch the BIOS update. Can I trick the computer into booting somehow using this shortcut?

    Read the article

  • Running Spinrite from USB drive?

    - by Snackmoore
    Hi everyone, I need to use SpinRite on my notebook, which has no CD-ROM drive. Can someone tell me how I could install and run SpinRite from a USB thumb drive, so that I could boot the notebook from the thumb drive and start SpinRite? Are all USB thumb drives capable of booting? I don't even know how to make them bootable. Thank you very much in advance. Best regards.
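
    A commonly described route (a sketch, not official SpinRite documentation) is to make the stick boot FreeDOS and copy SPINRITE.EXE onto it. Assuming you already have a bootable FreeDOS USB image (freedos-usb.img is a placeholder name) and the stick shows up as /dev/sdX on a Linux machine:

        # WARNING: this overwrites the entire stick; double-check /dev/sdX first
        sudo dd if=freedos-usb.img of=/dev/sdX bs=4M
        sync
        # mount the FAT partition from the image and copy SpinRite onto it
        sudo mount /dev/sdX1 /mnt
        sudo cp SPINRITE.EXE /mnt/
        sudo umount /mnt

    Whether the notebook boots from it depends on the BIOS rather than the drive: most recent machines can, but USB (or "USB-HDD") booting usually has to be enabled or selected from a one-time boot menu.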

    Read the article

  • .ico icons not showing up on Windows

    - by Ali
    I followed The Qt Resource System guide and the .ico icons appear on Linux. The icons are not showing up on Windows when I try to run the application from Qt Creator. I suspect a plugin issue, based on Qt/C++: Icons not showing up when program is run under Windows O.S, but I failed to figure out what to do from the guide How to Create Qt Plugins. Is it a plugin issue, or why aren't the icons showing up on Windows? If it is a plugin issue: how do I tell my application where to find qico.dll? Details of the environment: works on Kubuntu 12.04 LTS, Qt Creator 2.4.1 and Qt 4.7.4 (64 bit); fails on Windows XP SP2 32 bit, Qt Creator 2.4.1 and Qt 4.7.4 (32 bit). Everything is at its defaults (as installed out of the box); I did not mess with the settings. resources.qrc: <!DOCTYPE RCC><RCC version="1.0"> <qresource> <file>images/spreadsheet.ico</file> </qresource> </RCC> (also tried with <qresource prefix="/">). From the application's .pro file: RESOURCES += resources.qrc and OTHER_FILES += images/spreadsheet.ico. In the corresponding source file: QIcon(":/images/spreadsheet.ico"). I repeat: it works on Linux.
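
    For reference, if it does turn out to be an image-format plugin problem: Qt 4 on Windows looks for those plugins in an imageformats folder next to the executable (or on a path added with QCoreApplication::addLibraryPath). A sketch of a deployed layout under that assumption, with qico4.dll copied from the plugins/imageformats directory of the 32-bit Qt 4.7.4 installation:

        release/
            spreadsheet.exe
            QtCore4.dll
            QtGui4.dll
            imageformats/
                qico4.dll

    When running from Qt Creator, the plugin directory of the Qt installation itself is normally on the library path already, so if the icon is missing even there, plugin deployment may not be the whole story.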

    Read the article

  • KVM-Guest does not boot: qemudParsePCIDeviceStrs

    - by markus
    I have a server running Ubuntu 10.10 Server Edition with KVM and libvirt (both Ubuntu-native packages). The HDD partitioning was done with LVM. I then created some VMs with Virt-Manager and assigned LVM volumes to them. Now the VMs do not boot: Virt-Manager shows a CPU usage of 100% for the guest and the VNC connection states "Booting from Hard Disk". The VM-specific log files do not show any abnormality; only syslog shows a warning: warning : qemudParsePCIDeviceStrs:1422 : Unexpected exit status '1', qemu probably failed. What can I do to find the error?
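
    One way to narrow this down (a sketch; the volume path /dev/vg0/guest1 and the log file name are assumptions, substitute your own) is to check libvirt's per-VM log and then try booting the same volume with KVM directly, outside libvirt:

        # per-VM log kept by libvirt on Ubuntu
        less /var/log/libvirt/qemu/guest1.log

        # boot the same LVM volume without libvirt in the middle
        sudo kvm -m 1024 -drive file=/dev/vg0/guest1,format=raw -boot c -vnc :1

    If it boots this way, the problem is likely in the domain definition libvirt generated; if it still hangs at "Booting from Hard Disk", the volume itself probably lacks a boot loader or partition table.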

    Read the article

  • Ubuntu USB that can be booted on Mac and Windows?

    - by An Original Alias
    I am a little frustrated having to use two separate USBs for booting on Mac and Windows. Is there a way that I can have the version for Mac and the version for Windows on one USB? Or, even better, one version that can be booted on both? I have heard of multibootiso, but I'm not entirely sure that it will work on an iMac. If needed, I am willing to use the terminal to make this happen, even if it's a long, complicated process.

    Read the article

  • SQL Server job (stored proc) trace

    - by Jit
    Hi friends, I need your suggestions on tracing this issue. We run data load jobs early in the morning, loading data from an Excel file into a SQL Server 2005 database. When the job runs on the production server, it often takes 2 to 3 hours to complete. We could drill down to one job step which takes 99% of the total time to finish. Running that job step (stored procs) on the staging environment (with the same production database restored) takes 9 to 10 minutes, yet the same step takes hours on the production server when it runs early in the morning as part of the job. The production server always gets stuck at that very job step. I would like to run a trace on that job step (around 10 stored procs run for each user in a while loop within the job step) and collect the info to figure out the issue. What are the ways available in SQL Server 2005 to achieve this? I want to run the trace only for these SPs and not for a certain time period on the production server, as a trace produces lots of information and it becomes very difficult for me (not being a DBA) to analyze that much trace data and figure out the issue. So I want to collect info about specific SPs only. Let me know what you suggest. I appreciate your time and help. Thanks.

    Read the article

  • C/C++ I18N mbstowcs question

    - by bogertron
    I am working on internationalizing the input for a C/C++ application. I have currently hit an issue with converting from a multi-byte string to a wide character string. The code needs to be cross-platform compatible, so I am using mbstowcs and wcstombs as much as possible. I am currently working on a Win32 machine and I have set the locale to a non-English locale (Japanese). When I attempt to convert a multibyte character string, I seem to be having some conversion issues. Here is an example of the code:

        int main(int argc, char** argv)
        {
            wchar_t *wcsVal = NULL;
            char *mbsVal = NULL;

            /* Get the current code page, in my case 932; runs only on Windows */
            TCHAR szCodePage[10];
            int cch = GetLocaleInfo(GetSystemDefaultLCID(), LOCALE_IDEFAULTANSICODEPAGE,
                                    szCodePage, sizeof(szCodePage));

            /* verify locale is set */
            if (setlocale(LC_CTYPE, "") == 0) {
                fprintf(stderr, "Failed to set locale\n");
                return 1;
            }

            mbsVal = argv[1];

            /* validate multibyte string and convert to wide character */
            int size = mbstowcs(NULL, mbsVal, 0);
            if (size == -1) {
                printf("Invalid multibyte\n");
                return 1;
            }

            wcsVal = (wchar_t*) malloc(sizeof(wchar_t) * (size + 1));
            if (wcsVal == NULL) {
                printf("memory issue\n");
                return 1;
            }

            mbstowcs(wcsVal, szVal, size + 1);
            wprintf(L"%ls \n", wcsVal);
            return 0;
        }

    At the end of execution, the wide character string does not contain the converted data. I believe that there is an issue with the code page settings, because when I use MultiByteToWideChar with the current code page passed in, e.g. MultiByteToWideChar(CP_ACP, 0, mbsVal, -1, wcsVal, size + 1); in place of the mbstowcs calls, the conversion succeeds. My question is: how do I use the generic mbstowcs call instead of the MultiByteToWideChar call?

    Read the article

  • Windows server detected error with hard disk

    - by user53864
    We host a Windows Server 2008 R2 machine and I am working as the admin in a small company. The server is hanging and restarting, as the hard disk seems to be damaged by power fluctuation (despite having an inverter); it shows the error message "Problem detected with the hard disk. Press any key to continue" on server reboot. It's a Seagate 1TB SATA hard disk and it boots after pressing Enter, so it's clear that the hard disk is dying. Yes, it's in warranty, but the warranty won't recover the licensed Windows Server 2008 or its data. As it's booting now, I backed up the required things and I am thinking of cloning the entire hard disk. The first thing that struck me was checking the Seagate site for a cloning tool; I found Seagate DiscWizard, but it is not specified for Windows Server 2008. Please could anybody help me with your best ideas for the below: Urgently, what's the best way (free of cost) for me to clone, in my case, onto a new hard disk of the same size? It's a one-time license and I cannot use the same key again if I reinstall the server. Will the license be carried over to the new disk if cloned? Or is there a way to contact Microsoft, explaining the problem that occurred, to obtain a new key at no charge? I also want to take measures for the future: how do I keep two disks in continuous sync? Are mirroring and RAID (which require converting the disks to dynamic) the only options, or is there a better way I could do this at no additional charge? Any help is greatly appreciated. Thank you! EDIT 1: I started cloning the disk with Clonezilla and it was going properly, showing a GUI. But after some time there was no GUI, just a black screen with codes (they look like disk location numbers) going page by page (I have attached the screenshots below, captured from my phone). Do you think it's actually cloning? I started in the morning and it's evening now. I have left the office to let it finish what it's trying to do and I'll go and check it tomorrow. Slowly losing hope; I don't know what face it's going to show tomorrow. Any ideas?
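
    Since the disk is already reporting errors, a read-error-tolerant cloner is usually preferred over plain dd. A sketch from a Linux live CD, assuming the failing disk is /dev/sda and the new same-sized disk is /dev/sdb (device names are assumptions; verify them before running anything):

        # GNU ddrescue: copy everything readable first, keep a log so it can resume
        sudo ddrescue -f -n /dev/sda /dev/sdb rescue.log
        # second pass: retry the bad areas a few times
        sudo ddrescue -f -r3 /dev/sda /dev/sdb rescue.log

        # plain dd alternative (no retries; unreadable blocks are written as zeros)
        sudo dd if=/dev/sda of=/dev/sdb bs=64K conv=noerror,sync

    Either way the copy is sector-by-sector, so the partition table, boot sector and Windows installation come across unchanged.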

    Read the article

  • Magically appearing margin in footer. Could use some fresh eyes on this.

    - by Banderdash
    Hi chaps! Got a nearly finished coded comp here: http://clients.pixelbleed.net/biodesign/ I realize it's not completely valid, what with my nesting of li's and ul's within A tags; unfortunately that's the way it has to be. My issue is with the very bottom of the footer: it has a space showing the body's background color beneath it. I've tried a number of things, and this space after the black link bar at the very bottom is a resistant little bugger. Ideally the black box at the bottom should rest against the very base of the viewport, at least when the content is sufficiently long, which it is in this case. If someone would like to take a quick peek at my source and give me some ideas, I'd be very grateful. The CSS can be found here: http://clients.pixelbleed.net/biodesign/css/core.css And yes, I've tried removing the height:100%; it makes no difference, it seems. I do believe the issue is with the footer, as when I remove those divs the content rests as it should. I just don't see anything in my CSS that would cause the margin/spacing issue, though. Thanks so much.

    Read the article

  • Mac/Linux Dual Boot

    - by user38008
    I'm trying to create a dual boot of Linux and Mac OS X without Boot Camp, but I'm nervous that I'll screw up or lose my data. In Disk Utility I made a 45 GB partition called Linux, but I don't know how to format it, or if it even matters. Also, once the partition is done, I press Ctrl when booting up, select that Linux partition, and put in the live boot USB or CD, right?

    Read the article

  • Getting a GestureOverlayView

    - by Codejoy
    I have been using some nice tutorials on drawing graphics on my Android device. I wanted to also add in the cool gesture demo found here: http://developer.android.com/resources/articles/gestures.html That takes these lines of code: GestureOverlayView gestures = (GestureOverlayView) findViewById(R.id.gestures); gestures.addOnGesturePerformedListener(this); This is fine and dandy, yet in my demo I'm trying to build on code from "Playing with Graphics in Android". The demos make sense, everything makes sense, but I found out that by using setContentView(new Panel(this)); as required by the Playing with Graphics tutorials, findViewById seems to no longer be valid and returns null. At first I was about to post a stupider question as to why this is happening, but a quick test of playing with setContentView made me realize the cause of findViewById returning null; I just do not know how to remedy the issue. What is the key I am missing here? I realize that the new Panel is messing some reference up, but I am not sure how to make the connection here. The R.id.gestures is defined right in the main.xml, just like the tutorial: <android.gesture.GestureOverlayView android:id="@+id/gestures" android:layout_width="fill_parent" android:layout_height="0dip" android:layout_weight="1.0" /> So I did confirm that setContentView(new Panel(this)) is causing the issue, and I know I have to figure out how to add the android.gesture.GestureOverlayView to the Panel class somehow; I am just not sure how to go about it. After fighting with this, I generally know what I need to do, just not how to do it. I think I need either the equivalent of creating a Panel in that main.xml, OR to figure out how to build in code what main.xml declares for the gestures. I am close, because I did this: GestureOverlayView gestures = new GestureOverlayView(this); which gets me a non-null gestures now; unfortunately, since I am not telling it to fill its parent anywhere, I don't think it's really showing up, so I am trying hard to figure out the layout params. Am I even on the right track?

    Read the article

  • Copy an existing OS X install from another drive

    - by Kevin Burke
    I just bought a new solid state drive, and I'd like to copy all of the files and setup from my current Mac OS X hard drive onto it. What is the best way to do this? I have a 1TB external hard drive, my drive backed up with Time Machine, and Snow Leopard on a DMG, but no external mount for the SSD and no install DVD (it's at my parents' house, promise). I'm familiar with the command line and with booting Mac OS X from a hard drive.
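
    One possible route, sketched on the assumption that you can attach the SSD somehow (installed in place of the current drive, in a borrowed enclosure, or similar): use Apple's asr to restore the current volume onto the new one. The disk identifiers below are placeholders; check yours with diskutil list first.

        diskutil list                     # identify the source and target volumes
        sudo asr restore --source /dev/disk0s2 --target /dev/disk1s2 --erase

    Carbon Copy Cloner or Disk Utility's Restore tab are commonly used for the same job if a GUI is preferred.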

    Read the article

  • Calling NSFetchedResultsController & CoreData experts

    - by JK
    I am having a few nagging issues with NSFetchedResultsController and Core Data, any of which I would be very grateful to get help on. Issue 1 - Updates: I update my store on a background thread, which results in certain rows being deleted, inserted or updated. The changes are merged into the context on the main thread using the mergeChangesFromContextDidSaveNotification: method. Inserts and deletes are updated properly, but updates are not (e.g. the cell label is not updated with the change), although I have confirmed the updates come through the contextDidSaveNotification exactly like the inserts and deletes. My current workaround is to temporarily change the staleness interval of the context to 0, but this does not seem like the ideal solution. Issue 2 - Deleting objects: My fetch batch size is 20. If an object deleted by the background thread is in the first 20 rows, everything works fine. But if the object is after the first 20 rows and the table is scrolled down, a "CoreData could not fulfill a fault" error is raised. I have tried resaving the context and re-performing the frc fetch, all to no avail. Note: in this scenario, the frc delegate method "didChangeObject..." is not called for the delete; I assume this is because the object in question had not been faulted at that time (as it was outside the initial fetch range). But for some reason, the context still thinks the object is around, although it has been deleted from the store. Issue 3 - Deleting sections: when the deletion of a row leads to the deletion of a section, I have gotten the "invalid number of rows in section???" error. I have worked around this by removing the "reloadSection" line from the NSFetchedResultsChangeMove: section and replacing it with "[tableView insertRowsAtIndexPaths...". This seems to work, but once again, I am not sure if this is the best solution. Any help would be greatly appreciated. Thank you!

    Read the article

  • How can I prevent double file uploading with Amazon S3?

    - by Tony
    I decided to use Amazon S3 for document storage for an app I am creating. One issue I ran into is that while I need to upload the files to S3, I also need to create a document object in my app so my users can perform CRUD actions. One solution is to allow a double upload: a user uploads a document to the server my Rails app lives on, I validate it and create the object, then pass it on to S3. One issue with this is that progress indicators become more complicated: using most out-of-the-box plugins would show the client that the file has finished uploading because it is on my server, but then there would be a decent delay while the file goes from my server to S3. This also uses unnecessary bandwidth (at least it does not seem necessary). The other solution I am thinking about is to upload the file directly to S3 with one AJAX request and, when that is successful, make a second AJAX request to store the object in my database. One issue here is that I would have to validate the file after it is uploaded, which means I would have to run some clean-up code in S3 if the validation fails. Both seem equally messy. Does anyone have something more elegant working that they would not mind sharing? I would imagine this is a common situation, with "cloud storage" being quite popular today. Maybe I am looking at this wrong.

    Read the article

  • Is auto-logon on laptop with encrypted hard drive secure?

    - by Tobias Diez
    I have the complete HDD of my laptop encrypted (with the Windows built-in BitLocker) and thus have to log in twice upon booting (BitLocker and user account). Since I'm the only person using the computer (and the only one knowing the BitLocker password), I was thinking about automatically logging into the user account to make the boot process smoother and quicker. In which cases/scenarios is this a bad idea, and does the additional login give a truly additional layer of security?

    Read the article

  • How to clone a USB flash drive using dd?

    - by MentalBlister
    Using 'dd' to clone a USB drive. In cfdisk I resized the destination partition to be the same size, made the partition bootable and set the same 'type' (ext3), then ran 'mkfs.ext3' after exiting cfdisk. Then: dd if=/dev/sda1 of=/dev/sdb1. The result when booting: "Missing operating system". The source USB device boots on multiple laptops, and the destination USB filesystem looks the same... Any ideas?
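
    For what it's worth, the symptom described ("Missing operating system" even though the partition contents match) is consistent with only the partition having been copied: dd from /dev/sda1 to /dev/sdb1 never touches the destination's MBR, which is where the boot code lives. A sketch of cloning the whole device instead (assuming the destination stick is at least as large as the source, and that sda/sdb really are the right devices):

        # copies the MBR boot code, partition table and all partitions in one go
        sudo dd if=/dev/sda of=/dev/sdb bs=4M
        sync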

    Read the article

  • What's the best Linux backup solution?

    - by Jon Bright
    We have four Linux boxes (all running Debian or Ubuntu) on our office network. None of these boxes are especially critical and they're all using RAID. To date, I've therefore been doing backups of the boxes by having a cron job upload tarballs containing the contents of /etc, MySQL dumps and other such changing, non-packaged data to a box at our geographically separate hosting centre. I've realised, however, that:
    - the tarballs are sufficient to rebuild from, but it's certainly not a painless process to do so (I recently tried this out as part of a hardware upgrade of one of the boxes); long-term, the process isn't sustainable
    - each of the boxes is currently producing a tarball of a couple of hundred MB each day, 99% of which is the same as the previous day
    - partly due to the size issue, the backup process requires more manual intervention than I want (to find whatever 5GB file is inflating the size of the tarball and kill it)
    - again due to the size issue, I'm leaving stuff out which it would be nice to include - the contents of users' home directories, for example; there's almost nothing of value there that isn't in source control (and these aren't our main dev boxes), but it would be nice to keep them anyway
    - there must be a better way
    So, my question is, how should I be doing this properly? The requirements are:
    - needs to be an offsite backup (one of the main things I'm doing here is protecting against fire/whatever)
    - should require as little manual intervention as possible (I'm lazy, and box-herding isn't my main job)
    - should continue to scale with a couple more boxes, slightly more data, etc.
    - preferably free/open source (cost isn't the issue, but especially for backups, openness seems like a good thing)
    - an option to produce some kind of DVD/Blu-Ray/whatever backup from time to time wouldn't be bad
    My first thought was that this kind of incremental backup is what tar was created for: create a tar file once each month, add to it incrementally, and rsync the results to the remote box. But others probably have better suggestions.
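
    As one point of comparison with the tar idea: the hard-link trick used by rsnapshot and rdiff-backup keeps a full-looking snapshot per day while only transferring and storing the roughly 1% that changed. A minimal hand-rolled sketch (host name, user and paths are made up; a real setup would add error handling and pruning of old snapshots):

        #!/bin/sh
        # Nightly snapshot of /etc and /home to the offsite box, hard-linking
        # unchanged files against yesterday's snapshot on the receiving side.
        TODAY=$(date +%F)
        YESTERDAY=$(date -d yesterday +%F)
        rsync -a --delete \
              --link-dest=/backups/$(hostname)/$YESTERDAY \
              /etc /home \
              backup@offsite.example.com:/backups/$(hostname)/$TODAY/

    rsnapshot or rdiff-backup wrap the same pattern up with rotation, so either would likely cover the requirements above without much scripting.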

    Read the article

  • Trouble with installing Ubuntu 14.04 alongside Windows 8.1

    - by user3121138
    I work with Windows 8.1 and today I installed Ubuntu 14.04 alongside it, but now I can't get the boot menu to display both OSes. When I boot, the system loads Windows 8.1 directly without showing a boot menu. I created a boot USB with YUMI; when booting with YUMI and selecting "boot from hard disk", Ubuntu started. My drives: My laptop is a Fujitsu Lifebook AH532/G52 and the BIOS is a Phoenix v1.10.
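
    One thing worth checking from the Ubuntu side (a sketch, assuming the machine boots in UEFI mode and GRUB was installed to the EFI system partition): list the firmware boot entries and see whether "Windows Boot Manager" is simply ahead of "ubuntu" in the boot order:

        sudo efibootmgr -v              # show boot entries and the current BootOrder
        sudo efibootmgr -o 0002,0000    # example only: put the ubuntu entry (here 0002) first

    The entry numbers above are placeholders; use the ones efibootmgr actually reports. The Boot-Repair tool is the commonly recommended point-and-click way to do the same repair.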

    Read the article

  • Tracking down slow managed DLL loading

    - by Alex K
    I am faced with the following issue, and at this point I feel like I'm severely lacking some sort of tool; I just don't know what that tool is or what exactly it should be doing. Here is the setup: I have a 3rd party DLL that has to be registered in the GAC. This has all worked fine on pretty much every machine our software was deployed on before. But now we have got two machines, seemingly identical to the ones we know work (they are cloned from the same image and stuffed with the same hardware, so pretty much the only difference is software settings, which I have gone over and over, and they seem fine). Now the problem: the DLL in the GAC takes a very long time to load. At least I believe this is the issue; what I can say definitively is that instantiating a single class from that DLL is the slow part. Once it is loaded, things fly as they always have. But while on known-good machines the DLL loads so fast that the timestamp in the log doesn't even change, on these two machines it takes over a minute to load. Knowns: I have no access to the source, so I can't debug through the DLL. Our app is the only one that uses it (so there shouldn't be simultaneous access issues). There is only one version of this DLL in existence, so it shouldn't be a matter of version conflict. The GAC reference is being used (if I uninstall the DLL from the GAC, an exception is thrown about the missing GAC reference). Could someone with greater skill in debug-fu suggest what I can do to track down the root cause of this issue?

    Read the article

  • Eclipse cannot find existing project in build path

    - by PNS
    Here is probably one of the idiosyncrasies of Eclipse and its handling of build paths, which cannot be fixed despite all sorts of workarounds tested so far. The issue relates to a workspace of several projects, each of which compiles into its own JAR. Dependencies among the projects are resolved by adding the relevant ones to the build path (no Maven or other external tool or plugin is used), via Project -> Properties -> Java Build Path -> Projects. Among all these projects, a couple (say, com.example.p1 and com.example.p2) refuse to recognize a third (and simple) one (say, com.example.p3), while all other projects do. So, although P3 is added to the build path, all related classes from P3 are imported properly and the source code of each such class is accessible by hitting F3, Eclipse keeps complaining that "The import com.example.p3 cannot be resolved" and "SomeClass cannot be resolved to a type", where com.example.p3.SomeClass is one of the P3 classes. If instead of the P3 project I put its compiled JAR in the build path, the issue disappears. However, code in P3 changes frequently and it is a waste of time to keep compiling and refreshing the workspace so that the change is picked up, not to mention that this should not happen in an IDE anyway (and it does not for the other projects using P3). Among the workarounds tried are things like:
    - removing and adding P1, P2 and P3 again
    - cleaning up and recompiling everything
    - checking whether any other project loads the P3 JAR
    - putting P3 at the top of the Eclipse build path "Order and Export" list
    - using the "Fix project setup" suggestion of Eclipse (available when hovering the mouse over the red-underlined error line); this option offers adding to the build path either P3 or its JAR, but if P3 is added, the issue reappears
    Any ideas?

    Read the article

  • SQL Management Studio external database only

    - by Robuust
    I'm trying to speed up my PC, and I figured out that a full version of SQL Server Management Studio 2012 is installed, including a localhost server. I only need to connect to remote hosts, so the local server that runs by default should be disabled. Is there an easy way to disable certain parts so I can improve my PC's speed and boot time? Thanks in advance. I really have no clue which processes I can disable without ruining everything.
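
    If the aim is just to stop the local engine from starting with Windows while keeping Management Studio itself, the database service can be switched to manual start and stopped; a sketch from an elevated command prompt, assuming a default (non-named) instance whose service is called MSSQLSERVER (a named instance appears as MSSQL$InstanceName instead):

        sc config MSSQLSERVER start= demand
        net stop MSSQLSERVER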

    Read the article

  • In Scrum, should a team remove points from (defect) stories that don't result in a code change?

    - by CanIgtAW00tW00t
    My work uses a Scrum-like process to manage projects. I say Scrum-like because we call it Scrum, but our project managers exclude aspects of Scrum that are inconvenient (most notably customer interaction). One of the stories in our current sprint was to correct a defect. After spending almost an entire day working on it, I determined the defect was the result of a permissions problem, so I didn't end up modifying any code. Our Scrum master / project manager decided that no code change equals zero points. I know that Scrum points are supposed to measure size/complexity and not time, but our Scrum master invests a lot of time in preparing graphs and statistical information from past sprints (average velocity, average points completed, etc.). I've always been of the opinion that for statistics to be meaningful in any way, the data must be as accurate as possible. All of our data is fuzzy to begin with, because from time to time we're encouraged by the Scrum master to "adjust" our size/complexity estimates, both increasing and decreasing them. I'd like to hear some other developers' / Scrum team members' thoughts on the merits of statistics based on past sprints, and also whether they think it's appropriate to "adjust" size/complexity estimates in the middle of a sprint, or to remove all points from a story altogether, for situations similar to what I've just described.

    Read the article

  • Can you do this with Hudson?

    - by damian
    I want to create a Hudson job that takes an ID as a parameter and uses that ID to calculate the SVN repo path. Where I work you have an SVN path for every issue that you resolve, and then all the issues are joined into a single SVN path. What I want to do is to run static code analysis on the partial issues. So I was thinking of having an Ant build.xml that I use for every issue and parametrizing the job with the issue ID. I have tried to achieve that, but the parameter is not substituted into the SVN path. I have tried with #issueId, %issueId%, ${issueId} and ${env.issueId}, without success. It throws errors like:
    Location 'http://svn-path:8181/svn/devSet/issues/${env.chuid}' does not exist
    Checking out a fresh workspace because C:\Documents and Settings\dnoseda\.hudson\jobs\test\workspace\${env.chuid} doesn't exist
    Checking out http://svn-path:8181/svn/devSet/issues/${env.chuid}
    ERROR: Failed to check out http://svn-path:8181/svn/devSet/issues/${env.chuid}
    org.tmatesoft.svn.core.SVNException: svn: '/svn/!svn/bc/46190/devSet/issues/$%7Benv.chuid%7D' path not found: 404 Not Found (http://svn-path:8181)
        at org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:64)
        at org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:51)
        at
    I am starting to think that I cannot do what I want. Do you know how I can set up the correct configuration to achieve this? Thanks for any help. Edit: the section of the job configuration where I want to put this parameter is this:
    <scm class="hudson.scm.SubversionSCM">
      <locations>
        <hudson.scm.SubversionSCM_-ModuleLocation>
          <remote>http://svn-path:8181/svn/devSet/issues/${env.issueid}</remote>
        </hudson.scm.SubversionSCM_-ModuleLocation>
      </locations>

    Read the article
