Search Results

Search found 295 results on 12 pages for 'velvet ghost'.

Page 5/12 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Online Code Editor [closed]

    - by Velvet Ghost
    The major online IDEs are hosted on the service provider's server. Examples are Kodingen, Cloud9, ShiftEdit. Hence they would be unavailable if the external server was down for some reason, and I prefer to do my computing on my own computer anyway. Does anyone know of an online IDE or editor (preferably just an editor - a simple implementation of the Ace or CodeMirror JS editors) which can be downloaded and run on localhost (on a local LAMP server)? I've found two so far - Eclipse Orion and Wiode, but I don't like either of them very much, and I'm looking for alternatives. Also suitable are browser extensions which run natively on the browser (offline) without going to some external site. An example would be SourceKit for Chrom(e/ium).

    Read the article

  • Windows 2003 Dynamic Disk error

    - by ChrisH
    Hi, I was trying to ghost a partition on a Windows 2003 server, using Ghost 2003. Unfortunately things went horribly wrong, and now I can't boot back into my system. As you can see, Ghost creates a wee little partition to do its dirty work, and has dislodged my other partitions. Partition 2 in the image below is my C drive. Any suggestions as to how I might get this active again so that it boots? Cheers, Chris
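
    One hedged approach, assuming the partition table itself is still intact: boot into a working Windows instance or a WinPE/BartPE disc and use the scriptable diskpart tool to mark the C: partition active again. The disk and partition numbers below are illustrative only; confirm them with the list commands first:

        diskpart
        list disk
        rem assumption: disk 0 is the system disk
        select disk 0
        list partition
        rem assumption: partition 2 is the dislodged C: drive
        select partition 2
        active
        exit

    If the boot sector was also damaged, fixmbr and fixboot from the Windows 2003 Recovery Console are the usual companions to this.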

    Read the article

  • Halloween: Season for Java Embedded Internet of Spooky Things (IoST) (Part 3)

    - by hinkmond
    So, let's now connect the parts together to make a Java Embedded ghost sensor using a Raspberry Pi. Grab your JFET transistor, LED light, wires, and breadboard and follow the connections on this diagram. The JFET transistor plugs into the breadboard with the flat part facing left. Then, plug in a wire to the same breadboard hole row as the top JFET lead (green in the diagram) and keep it unconnected to act as an antenna. Then, connect a wire (red) from the middle lead of the JFET transistor to Pin 1 on your RPi GPIO header. And, connect another wire (blue) from the lower lead of the JFET transistor to Pin 25 on your RPi GPIO header, then connect another (blue) wire from the lower lead of the JFET transistor to the long end of a common cathode LED, and finally connect the short end of the LED with a wire (black) to Pin 6 (ground) of the RPi GPIO header. That's it. Easy. Now test it. See: Ghost Sensor Testing Here's a video of me testing the Ghost Sensor circuit on my Raspberry Pi. We'll cover the Java SE app needed to record the ghost analytics in the next post. Hinkmond

    Read the article

  • How can I rescue a Lubuntu install?

    - by Ghost
    Quick recap: I was having a problem with hibernation, so I checked, and the linuxswap partition was missing, showing an "unknown" chunk of drive where it was. This happened before; I booted to the live CD and used GParted to reformat that partition back to swap. Then I boot... straight into grub rescue. Fixing the Windows MBR took care of that problem, except that now I'm back to Windows only. Every guide out there makes me reinstall Lubuntu from scratch, a waste of time considering it will take me at least a day to reinstall everything. Can't I just fix GRUB like I did with the Windows MBR?
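
    For what it's worth, GRUB 2 can usually be reinstated from the live CD without touching the rest of the install. A minimal sketch, assuming the Lubuntu root partition is /dev/sda5 and the disk is /dev/sda (both hypothetical; check with the first command):

        sudo fdisk -l                      # identify the Linux root partition and disk
        sudo mount /dev/sda5 /mnt          # hypothetical root partition
        sudo grub-install --boot-directory=/mnt/boot /dev/sda
        sudo reboot                        # then run 'sudo update-grub' from the installed system

    The --boot-directory form works with GRUB 1.99 and later, which covers the Lubuntu releases of that era.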

    Read the article

  • Installing 12.10 messed up Xorg

    - by Ghost
    So I finally decided to upgrade from 12.04, and now my Ubuntu install has been rendered unusable. I'm getting the "no screens detected" error from Xorg in the console, and no matter how many times I reinstall it, Xorg keeps throwing the same errors. I already had to do a bunch of hacks like nomodeset just to get to the console; otherwise I would get an infinite tabulator after the purple boot screen. The machine has a 4200HD IGP. I even tried installing fglrx legacy (the 4xxx series and below are no longer supported in regular fglrx), which was a subpar driver in 12.04, in hopes that the machine would at least work, but nope, nothing at all. Has anyone had the same problem? How do I fix it? EDIT: just wanted to add that I upgraded to 13.04 and it didn't solve anything; in fact it might have broken things even further.
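
    One hedged stopgap while the Radeon driver situation is sorted out: force the generic vesa driver with a minimal /etc/X11/xorg.conf so X can at least start. This is a sketch, not a fix for the underlying problem, and it is mainly useful after the fglrx packages have been removed (the open-source radeon driver is usually the better long-term choice for a 4200 IGP):

        Section "Device"
            Identifier "Card0"
            Driver     "vesa"
        EndSection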

    Read the article

  • Really weird situation with Swap that refuses to work

    - by Ghost
    So I decided to move my swap partition to another HDD to make hibernation faster. I created the partition with a live USB and ran swapon, and then the hibernate option just disappeared. I went into the terminal and re-enabled hibernate (so the button shows); for some reason the swap wasn't "on", so I ran swapon again. Still not working. I went back to the terminal and edited the file with the UUID of the new swap partition. The partition shows as swap in GParted, it's on, the file has the right UUID, even the task manager shows a swap area of 10GB, and there is a hibernate button on the shutdown window. But it won't hibernate! What's going on?
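
    One frequently overlooked piece in exactly this situation: on Ubuntu the initramfs keeps its own record of which swap device to resume from, separate from /etc/fstab. A hedged sketch of checking and refreshing it (the UUID below is a placeholder; take the real one from blkid):

        sudo blkid | grep swap                   # note the new swap partition's UUID
        cat /etc/initramfs-tools/conf.d/resume   # should read RESUME=UUID=<that UUID>
        echo "RESUME=UUID=1234-abcd" | sudo tee /etc/initramfs-tools/conf.d/resume
        sudo update-initramfs -u

    If that file still points at the old swap partition's UUID, hibernate can fail even though swapon and the fstab entry are correct.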

    Read the article

  • How can I format my active hard drive to NTFS?

    - by Ghost
    Believe it or not, I'm not too happy with Ubuntu. Well, let me rephrase that: I like it, but the one thing I don't like about it is that it's too much of a hassle to get a game to work. I'm trying to install Windows 7 from a 4GB flash drive, but the error that comes up is that the hard drive I'm trying to install on is formatted as ext4. I need to format it to NTFS. I can't seem to find any topics on how to format an active hard drive. I found a topic that explains how to move Ubuntu to a new drive, but it's a bit confusing to me. Please help! (Please don't disregard this topic just because I want to go back to Windows.)
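
    A hedged sketch of one way through this: the Windows 7 installer ships with diskpart (press Shift+F10 at the install screen for a command prompt), which can wipe and reformat the drive. Note that clean erases the entire disk, so anything worth keeping must be backed up first, and disk 0 here is an assumption; verify with list disk:

        diskpart
        list disk
        rem assumption: disk 0 is the drive Ubuntu lives on
        select disk 0
        rem WARNING: clean erases the whole disk, including Ubuntu
        clean
        create partition primary
        format fs=ntfs quick
        active
        exit

    The installer's "Drive options (advanced)" screen can often do the same delete/format without the command line.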

    Read the article

  • Massive crash and kernel panic after updates yesterday, what now?

    - by Ghost
    Got 12.04 on an AMD+Radeon machine. Everything was running fine until last night, when the update manager installed new packages, and now the moment I open a browser (Firefox, Chromium, anything) it crashes instantly. It says:

        warning: at build/buildd/linux-3.5.0/arch/x86/kernel/smp.c:123 native_smp_send_reschedule+0x5b/0x60()
        Pid: 2432, comm: chrome_dbthread tainted: G D 3.5.0-2generic #2-ubuntu

    I already tried uninstalling the packages from the update log; it didn't solve a thing. Depending on which browser I use I get a different reaction, like a complete freeze instead of a text console crash. Ideas? I really don't know what to do.

    Read the article

  • Which is the best low/mid level GPU in terms of compatibility for 12.04 and 12.10?

    - by Ghost
    I'm tired of dealing with the disaster that is Catalyst on a 4xxx IGP, and I want to buy a discrete graphics card that supports both Unity and alternative environments like GNOME 3 and MATE/Cinnamon without getting sluggish. Besides that, 1080p video and some light 3D gaming; again, nothing crazy. I'm not a designer and I won't be using stuff like Blender. One thing I want to make clear is that I want a GPU that is 100% compatible. That means no bugs, no glitches, and no having to toggle between the proprietary and the Xorg driver because one of them decided to crap out on me. Thanks for your help

    Read the article

  • Adding Apache common dependency to Play Framework 2.0

    - by Mooh
    I want to import org.apache.commons.io, but I'm getting this error:

        [info] Compiling 1 Java source to /home/ghost/Bureau/app/play-2.0.1/waf/target/scala-2.9.1/classes...
        [error] /home/ghost/Bureau/app/play-2.0.1/waf/app/controllers/Application.java:9: error: package org.apache.commons.io does not exist
        [error] import org.apache.commons.io.*;
        [error] ^
        [error] /home/ghost/Bureau/app/play-2.0.1/waf/app/controllers/Application.java:41: error: cannot find symbol
        [error] FileUtils.copyFile(file, destinationFile);
        [error] ^
        [error] symbol: variable FileUtils
        [error] location: class Application
        [error] 2 errors
        [error] {file:/home/ghost/Bureau/app/play-2.0.1/waf/}waf/compile:compile: javac returned nonzero exit code
        [error] application - Play can't find package org.apache.commons.io .

    How can I add it as a dependency?
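
    A sketch of the usual Play 2.0 route, assuming the standard project layout: declare commons-io in appDependencies in project/Build.scala, then recompile so sbt fetches it from Maven Central (the version number here is illustrative):

        import sbt._
        import Keys._
        import PlayProject._

        object ApplicationBuild extends Build {
          val appName    = "waf"
          val appVersion = "1.0-SNAPSHOT"

          // groupId % artifactId % version, resolved from Maven Central
          val appDependencies = Seq(
            "commons-io" % "commons-io" % "2.1"
          )

          val main = PlayProject(appName, appVersion, appDependencies, mainLang = JAVA)
        }

    After editing Build.scala, running play clean compile (or reload in the play console) should pull the jar down and make the FileUtils import resolve.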

    Read the article

  • Friday Fun: Ghosruns

    - by Asian Angel
    In this week’s game a huge ghost is on the loose and chasing after your group of humans. Can you successfully beat the Match-3 challenge on each level, or will your group become this ghost’s newest friends for eternity?

    Read the article

  • Ways to serve AWS from another domain

    - by mplungjan
    I have installed Ghost on AWS (it is running Node). I very much dislike the URL they gave me: http://ec2-nn-nnn-nnn-nnn.us-west-2.compute.amazonaws.com/ghost/ I own a domain and Linux hosting (but not a VPS). What would be a practical way to serve my blog via URLs on my own (sub)domain? I can use PHP and access .htaccess on my domain, and possibly do things on the AWS instance too (let me know what to look for).
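
    One hedged option that needs no PHP at all: give the instance a stable address (an Elastic IP, since the default EC2 public address changes when the instance stops and starts) and point a subdomain at it with a plain A record. The names and addresses below are placeholders:

        ; hypothetical zone-file entry at your DNS host
        blog.example.com.   3600   IN   A   203.0.113.10

    Ghost's own config (its url setting) then needs to point at the new hostname so generated links match, and an nginx or Apache proxy on the instance can put the blog at / instead of a subpath.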

    Read the article

  • Guide to reduce TFS database growth using the Test Attachment Cleaner

    - by terje
    Recently there have been several reports of TFS databases growing too fast and too big. Notably, this has been observed when one starts to use more features of the testing system. Also, TFS 2010 handles test results differently from TFS 2008, and this leads to more data being stored in the TFS databases. As a consequence, some tools have been released to remove unneeded data from the database, along with some fixes to correct bugs that were found during this process. Further, some preventive practices and maintenance rules should be adopted. A lot of people have blogged about this, among them:

    - Anu's very important blog post here describes both the problem and solutions to handle it. She describes both the Test Attachment Cleaner tool and some QFE/CU releases that fix underlying bugs which prevented the tool from being fully effective.
    - Brian Harry's blog post here describes the problem too.
    - This forum thread describes the problem with some solution hints.
    - Ravi Shanker's blog post here describes best practices on solving this (TBP).
    - Grant Holliday's blog post here describes strategies for using the Test Attachment Cleaner both to detect space problems and to rectify them.

    The problem can be divided into the following areas:

    - Publishing of test results from builds
    - Publishing of manual test results and their attachments in particular
    - Publishing of deployment binaries for use during a test run
    - Bugs in SQL Server preventing total cleanup of data

    (All the published data above is published into the TFS database as attachments.) The test results include all data collected during the run. Some of this data can grow rather large, like IntelliTrace logs and video recordings. Also, the pushing of binaries, which happens for automated test runs (including tests run during a build using code coverage, which pulls in all the files in the deployment folder), contributes a lot to the size of the attached data. In order to handle this systematically, I have set up a 3-stage process:

    1. Find out if you have a database space issue
    2. Set up your TFS server to minimize potential database issues
    3. If you have the "problem", clean up the database and otherwise keep it clean

    Analyze the data

    Are your databases growing? Are unused test results growing out of proportion? To find out, you need to query your TFS database for some of the information, and use the Test Attachment Cleaner (TAC) to obtain some more detailed information. If you don't have too many databases you can use the SQL Server reports from within Management Studio to analyze the database and table sizes. Or you can use a set of queries. I often find queries faster to use because I can tweak them the way I want. But be aware that these queries are undocumented and unsupported, and may change when the product team wants to change them. If you have multiple project collections, find out which might have problems. (Disclaimer: the queries below work on TFS 2010. They will not work on Dev-11, since the table structure has been changed. I will try to update them for Dev-11 when it is released.) Open a SQL Management Studio session onto the SQL Server where you have your TFS databases. Use the query below to find the project collection databases and their sizes, in descending size order.
        use master
        select DB_NAME(database_id) AS DBName, (size/128) SizeInMB
        FROM sys.master_files
        where type=0
          and substring(db_name(database_id),1,4)='Tfs_'
          and DB_NAME(database_id)<>'Tfs_Configuration'
        order by size desc

    Doing this on one of our SQL servers made it pretty easy to see which collection to start the work on.

    Find out which tables are possibly too large

    Keep a special watch out for the Tfs_Attachment table. Use the script at the bottom of Grant's blog to find the table sizes in descending size order. In our case, from Grant's blog we learnt that tbl_Content is in the version control category, so the only major issue we had was tbl_AttachmentContent.

    Find out which team projects have possibly too large attachments

    In order to use the TAC to find, and eventually delete, attachment data, we need to find out which team projects have these attachments; the team project is a required parameter to the TAC. Use the following query, replacing the collection database name with whatever applies in your case:

        use Tfs_DefaultCollection
        select p.projectname, sum(a.compressedlength)/1024/1024 as sizeInMB
        from dbo.tbl_Attachment as a
        inner join tbl_testrun as tr on a.testrunid=tr.testrunid
        inner join tbl_project as p on p.projectid=tr.projectid
        group by p.projectname
        order by sum(a.compressedlength) desc

    In our case (I had to remove some names), out of more than 100 team projects accumulated over quite some years, it was pretty obvious that "Byggtjeneste - Projects" was the main team project to take care of, with a few others as the next candidates.

    Check which attachment types take up the most space

    It can be nice to know which attachment types take up the space, so run the following query:

        use Tfs_DefaultCollection
        select a.attachmenttype, sum(a.compressedlength)/1024/1024 as sizeInMB
        from dbo.tbl_Attachment as a
        inner join tbl_testrun as tr on a.testrunid=tr.testrunid
        inner join tbl_project as p on p.projectid=tr.projectid
        group by a.attachmenttype
        order by sum(a.compressedlength) desc

    The result made it pretty obvious that the problem in our case was the binary files, as also mentioned in Anu's blog.

    Check which file types, by their extension, take up the most space

    Run the following query:

        use Tfs_DefaultCollection
        select SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999) as Extension,
               sum(compressedlength)/1024 as SizeInKB
        from tbl_Attachment
        group by SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999)
        order by sum(compressedlength) desc

    Now you should have collected enough information to tell you what to do, if you have to do something at all, and some of the information you need in order to set up your TAC settings file, both for a clean-up and for scheduled maintenance later.

    Get your TFS server and environment properly set up

    Whether or not you already have the problem, you should ensure the TFS server is set up so that the risk of getting into it is minimized. To ensure this you should install the following set of updates and components. The assumption is that your TFS server is at the SP1 level.

    Install the QFE for KB2608743, which also contains detailed instructions on its use (download from here). The QFE changes the default settings so that deployed binaries, which are used in automated test runs, are no longer uploaded.
    Binaries will still be uploaded if:

    - Code coverage is enabled in the test settings.
    - You change UploadDeploymentItem to true in the testsettings file. Be aware that this might be reset back to false by another user who hasn't installed this QFE.

    The hotfix should be installed on:

    - The build servers (the build agents)
    - The machine hosting the Test Controller
    - Local development computers (Visual Studio)
    - Local test computers (MTM)

    It is not required on the TFS server, the test agents or the build controller; it has no effect on these programs. If you use SQL Server 2008 R2 you should also install CU 10 (or later). This CU fixes a potential problem of hanging "ghost" files. This seems to happen only in certain trigger situations, but to ensure it doesn't bite you, it is better to make sure this CU is installed. There is no such CU for SQL Server 2008 pre-R2.

    Workaround: if you suspect hanging ghost files, they can (with some mental effort) be deduced from the ghost counters using the following SQL query:

        use master
        SELECT DB_NAME(database_id) as 'database', OBJECT_NAME(object_id) as 'objectname',
               index_type_desc, ghost_record_count, version_ghost_record_count,
               record_count, avg_record_size_in_bytes
        FROM sys.dm_db_index_physical_stats (DB_ID(N'<DatabaseName>'), OBJECT_ID(N'<TableName>'), NULL, NULL, 'DETAILED')

    The problem is a stalled ghost cleanup process. Stop all components that depend on the SQL Server, like the TFS Server and SPS services (that is, all applications that connect to it), then restart the SQL Server, and finally start all dependent processes again. (I would guess a complete server reboot would do the trick too.) After this the ghost cleanup process will run properly again. The fix will come in the next CU cycle for SQL Server R2 SP1. R2 pre-SP1 and R2 SP1 have separate maintenance cycles and are maintained individually; each has its own set of CUs. When it comes I will add the link to that CU here. The "hanging ghost file" issue came up after one had run the TAC and deleted enormous amounts of data; the SQL Server can get into this hanging state (without the QFE) in certain cases because of this.

    And of course, install and set up the Test Attachment Cleaner command line power tool. This should be done following some guidelines from Ravi Shanker:

    "When you run TAC, ensure that you are deleting small chunks of data at regular intervals (say run TAC every night at 3AM to delete data that is between age 730 to 731 days) - this will ensure that small amounts of data are being deleted and SQL ghosted record cleanup can catch up with the number of deletes performed."

    This rule minimizes the risk of the ghosted hang problem occurring, and further makes it easier for the SQL Server ghosting process to work smoothly.

    "Run DBCC SHRINKDB post the ghosted records are cleaned up to physically reclaim the space on the file system"

    This is the last step in a 3-step process of removing SQL Server data: first the records are logically deleted, then they are cleaned out by the ghosting process, and finally the space is reclaimed with the shrink command.

    Cleaning out the attachments

    The TAC is run from the command line using a set of parameters and controlled by a settings file. The parameters point out a server URI, including the team project collection, and also point at a specific team project. So in order to run this for multiple team projects regularly, one has to set up a script to run the TAC multiple times, once for each team project (see the sketch below).
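
    A minimal sketch of such a wrapper, as a Windows batch file. The collection URL, project names and paths are hypothetical, and the tcmpt switches are the ones shown in the run example further down:

        @echo off
        rem Nightly TAC sweep: one pass per team project (all names are placeholders)
        set COLLECTION=http://tfsserver:8080/tfs/DefaultCollection
        for %%P in ("ProjectA" "ProjectB" "ProjectC") do (
          tcmpt attachmentcleanup /collection:%COLLECTION% /teamproject:"%%~P" /settingsfile:C:\tac\nightly.xml /outputfile:"%TEMP%\%%~P.tcmpt.log" /mode:delete
        )

    Scheduling this with Task Scheduler at a quiet hour keeps each night's deletion batch small, which is exactly what the ghost-cleanup guidance above asks for.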
    When you install the TAC there is a very useful readme file in the same directory. When the deployment binaries are published to the TFS server, ALL items are published up from the deployment folder. That often means many more files than you would assume are necessary. This is a brute force technique: it works, but you need to take care when cleaning up. Grant has shown how their settings file looks in his blog post, removing all attachments older than 180 days, as long as there are no active work items connected to them. This setting can be useful for cleaning out items both in a one-off clean-up operation and in general scheduled maintenance. There are two scenarios we need to consider:

    1. Cleaning up an existing overgrown database
    2. Maintaining a server to avoid an overgrown database, using scheduled TAC

    1. Cleaning up a database which has grown too big due to these attachments

    This job is a "once" job: we do it once and then move on to make sure it won't happen again, by taking the actions in 2) below. In this scenario you should only consider the large files. Your goal should be simply to reduce the size; don't bother about the smaller stuff. That can be left to a scheduled TAC clean-up (2 below). Here you can use a very general settings file and just remove the large attachments, or you can choose to remove any old items; Grant's settings file is an example of the latter. A settings file to remove only large attachments could look like this:

        <!-- Scenario : Remove large files -->
        <DeletionCriteria>
          <TestRun />
          <Attachment>
            <SizeInMB GreaterThan="10" />
          </Attachment>
        </DeletionCriteria>

    If you want to remove only dll's and pdb's above that size, add an Extensions section; without that section, files of all extensions will be deleted:

        <!-- Scenario : Remove large files of type dll's and pdb's -->
        <DeletionCriteria>
          <TestRun />
          <Attachment>
            <SizeInMB GreaterThan="10" />
            <Extensions>
              <Include value="dll" />
              <Include value="pdb" />
            </Extensions>
          </Attachment>
        </DeletionCriteria>

    Before you start up your scheduled maintenance, you should clear out all older items.

    2. Scheduled maintenance using the TAC

    Run a schedule every night that removes old items in small batches. It is important to run this often, like every night, in order to keep the number of deleted items low; that way the SQL ghost process works better. One approach could be to delete all items older than some number of days, let's say 180 days. This could be combined with a restriction to keep attachments that have active or resolved bugs. Doing this every night ensures that only small amounts of data are deleted:

        <!-- Scenario : Remove old items except if they have active or resolved bugs -->
        <DeletionCriteria>
          <TestRun>
            <AgeInDays OlderThan="180" />
          </TestRun>
          <Attachment />
          <LinkedBugs>
            <Exclude state="Active" />
            <Exclude state="Resolved"/>
          </LinkedBugs>
        </DeletionCriteria>

    In my experience there are projects which are left with active or resolved work items although no further work is done. It can be wise to have a clean-up process with no restrictions on linked bugs at all; note that you then have to remove the whole LinkedBugs section. An approach which could work better here is a two-step one: use the schedule above with no LinkedBugs as a sweeper task, taking away all data older than you could care about, and then have another scheduled TAC task to take out, more specifically, attachments that you are not likely to use.
    This task could be much more specific and, based on your analysis, clean out what you know is troublesome data:

        <!-- Scenario : Remove specific files early -->
        <DeletionCriteria>
          <TestRun>
            <AgeInDays OlderThan="30" />
          </TestRun>
          <Attachment>
            <SizeInMB GreaterThan="10" />
            <Extensions>
              <Include value="iTrace"/>
              <Include value="dll"/>
              <Include value="pdb"/>
              <Include value="wmv"/>
            </Extensions>
          </Attachment>
          <LinkedBugs>
            <Exclude state="Active" />
            <Exclude state="Resolved" />
          </LinkedBugs>
        </DeletionCriteria>

    The readme document for the TAC says that it recognizes only "internal" extensions, but it does in fact recognize any extension. To run the tool, use the following command:

        tcmpt attachmentcleanup /collection:your_tfs_collection_url /teamproject:your_team_project /settingsfile:path_to_settingsfile /outputfile:%temp%/teamproject.tcmpt.log /mode:delete

    Shrinking the database

    You could run a shrink database command after the TAC has run in cases where a lot of data has been deleted. In that case you SHOULD do it, to free up all that space. But after the shrink operation you should rebuild the indexes, since the shrink operation will leave the database in a very fragmented state, which will reduce performance. Note that you need to rebuild the indexes; reorganizing is not enough. For smaller amounts of data you should NOT shrink the database, since the space will be reused by the SQL Server when it needs to add more records. In fact, it is regarded as bad practice to shrink the database regularly, so on a daily maintenance schedule you should NOT shrink the database. To shrink the database you run a DBCC SHRINKDATABASE command, and then follow up with an index rebuild afterwards. I find the easiest way to do this is to create a SQL maintenance plan including the Shrink Database task and the Rebuild Index task, and just execute it when I need to.
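
    A minimal T-SQL sketch of that post-clean-up sequence, assuming the Tfs_DefaultCollection database and the tbl_Attachment table from the queries above (adjust the names to your own collection):

        USE Tfs_DefaultCollection;
        -- Reclaim the space freed by the TAC run (a one-off step, not scheduled maintenance)
        DBCC SHRINKDATABASE (Tfs_DefaultCollection);
        -- Shrinking fragments the indexes, so rebuild them (reorganizing is not enough)
        ALTER INDEX ALL ON dbo.tbl_Attachment REBUILD;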

    Read the article

  • Windows Boot disc to save files?

    - by acidzombie24
    Somehow after updating my legit Windows 7 OS, with no pirated or mod software on my PC, I was welcomed by a black screen. I popped in my Ghost disc and copied the files I need to an external HD. IIRC the Windows 7 disc can do that too. The problem with the way I did it on Ghost was that it expected me to select one file (an HD disc image), so I couldn't select multiple folders to move. Also, when I did move files, I had no idea if it had finished or how long it would take. My Linux live CD couldn't access the HD. Anyway, is there a disc I can use to easily copy files from my laptop to my external HD? I think Ghost, Windows 7 and Windows Server all allow me to, but is there one that is better suited to copying files?
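
    For what it's worth, the Windows 7 install disc's repair environment includes a command prompt ("Repair your computer", then "Command Prompt"). A hedged sketch of a bulk copy with progress from there, assuming the system drive is C: and the external drive is E: (drive letters often shift inside the recovery environment, so check with dir first):

        robocopy C:\Users E:\Backup\Users /E /R:1 /W:1
        rem /E copies subfolders including empty ones; /R and /W keep retries short
        rem if robocopy is missing from your recovery image, xcopy C:\Users E:\Backup\Users /E /H /C works too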

    Read the article

  • how to configure svn between two remote system on different network?

    - by Ghost Answer
    The story is that I have two systems with two different network IPs (the client on 10.0.15.24 and the server on 117.152.18.140). I am communicating between them with TeamViewer over port 80. I have installed Subversion on the server system and an svn client on the client system. I also installed the Apache server on the server system. Now my question is how to configure them. First, is it even possible? If yes, then please give the proper suggestion. Are there any changes to be made in the firewall on the server, the client, or both? thanks
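
    A hedged sketch of the usual Apache + mod_dav_svn arrangement on the server side, assuming mod_dav and mod_dav_svn are enabled (the repository path, realm and user file below are placeholders). The client then only needs outbound HTTP to Apache's port, so that is the one port to open in the server's firewall:

        <Location /svn>
          DAV svn
          SVNParentPath /var/svn
          AuthType Basic
          AuthName "Subversion repositories"
          AuthUserFile /etc/apache2/dav_svn.passwd
          Require valid-user
        </Location>

    With that in place, a checkout from the client is just svn checkout http://117.152.18.140/svn/myrepo (repository name hypothetical).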

    Read the article

  • User permission settings on DNS with windows 2003 server R2 standard edition

    - by Ghost Answer
    I have Windows Server 2003 R2 Standard Edition and some XP client systems. I have created the DNS and profiles for all users. Now I want to authorize some users to install software, remove software, and do other such things. How do I make those kinds of policies for the different users on the domain? Please help me. Maybe this question is the same as another one, but I didn't find the solutions.

    Read the article

  • why does windows authentication / impersonation fail on asp.net application with iis 7.5 / windows 7

    - by velvet sheen
    hi there; I'm troubleshooting why I cannot get past the login dialog on an ASP.NET site configured for Windows authentication and impersonation. Help me before I switch to OS X development and Objective-C. I have an ASP.NET 2.0 application and I'm trying to deploy it on Windows 7 with IIS 7.5. I've created a new site and bound it to localhost and a fully qualified domain name. The FQDN is in my hosts file and is redirected to 127.0.0.1. The site is also running in an app pool I created, with integrated pipeline mode, and the process model identity is set to ApplicationPoolIdentity. web.config includes the following:

        <trust level="High" />
        <authentication mode="Windows" />
        <authorization>
          <deny users="?"/>
        </authorization>
        <identity impersonate="true"/>

    The ACL on the directory for the site is, in desperation, set to Everyone full control; the application pool virtual account (a Windows 7 thing) is set to full control on the physical directory for the site as well. IIS authentication has ASP.NET impersonation enabled and Windows authentication enabled. When I connect to the site as localhost, it lets me past the login prompt and the application loads without incident. When I connect to the site with the FQDN set in the host header bindings for this site/IP/port, I cannot get past the login prompt, and clicking cancel throws an HTTP 401.1 error page. Why? thanks very much in advance.
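
    One hedged suspect that matches these exact symptoms (localhost works, a custom FQDN mapped to 127.0.0.1 fails with 401.1): Windows' loopback check rejects NTLM authentication against a non-machine hostname on the local box. A sketch of the usual registry workaround for a development machine, run from an elevated prompt and followed by a reboot (or at least an iisreset):

        reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa /v DisableLoopbackCheck /t REG_DWORD /d 1 /f

    The more surgical variant is listing the FQDN under the BackConnectionHostNames multi-string value in HKLM\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0 instead of disabling the check outright.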

    Read the article

  • Can I make my drives visible and change their partition type without losing my data?

    - by user165408
    I have made a lot of mistakes, and now I cannot see my hard disk and I cannot start the operating system on my laptop. All my passwords and important files are on the HDD, without any backup. I followed this course of action:

    1. Changed my hard disk partitions to dynamic, just to get a 5th partition. (1st mistake)
    2. Decreased the partitions to 4 again.
    3. Backed up the operating system from the 4th to the 3rd partition with Norton Ghost.
    4. Booted from a live CD for Windows XP.
    5. Formatted the 4th partition and moved all my important data from the 1st and 2nd partitions to the 4th partition.
    6. Deleted the 1st and 2nd partitions and created one partition from half of the empty space. So I had just 3 partitions and empty space between the 1st and 2nd partitions.
    7. Tried to install Windows 8 to the first partition, but it did not allow it because the disk is dynamic. It did not allow installing to the other partitions either.
    8. Tried to install Windows XP to the 1st partition, but it said that if I continued I could not use the other drives, so I backed out of installing it.
    9. Booted from the Windows XP live CD, then increased the 1st partition by less than 400 MB of empty space. I thought it would be adjacent, but it was shown as 2 partitions, and in My Computer I see just 3 drives.
    10. Using Norton Ghost, I recovered my OS to the 1st partition. (2nd mistake; it was on the 4th partition originally)
    11. Booted from the Windows XP live CD and tried to install bcdedit to it, but it did not work. Then I tried to install EaseUS Partition Master Home Edition. It installed with errors; when I started it, it showed an error as if there were no hard disk. I looked in My Computer and my drives were not there.
    12. Booted from the Norton Ghost CD, and it did not show me my drives either, though before I was able to see them. I checked the partition numbers shown by the Norton Ghost utility and they still have the same numbers, so I should be able to see my drives, but I cannot.

    My hard disk is now shown as an external dynamic disk, so I cannot see any drive in My Computer in the live Windows XP. There are two options: the first is to import the external disk, and the second is to convert the disk to basic. Will they delete my data? I am afraid of booting from CDs like the Windows XP live CD, the Norton Ghost CD, and the operating system CD/DVD, because they may overwrite a few MB of their data onto my data. These recovery tools already exist in the Windows XP live CD (The Ultimate Boot CD for Windows): CompuAppa SwissKnife V3, DBXtract, Disk Investigator, Fab's AutoBackup 2.0, FileRecovery, Floppy Repair, Free Undelete, Handy Recovery, Recovery Manager, Restoration, Restoration Help File by UBCD4Win, UnChk, Unstoppable Copier. Can any of them help me? Finally: how can I make my drives visible again without losing my data? And how can I convert my dynamic partitions to basic without losing my data?

    Read the article

  • Checking the configuration of two systems to determine changes

    - by None
    We are standing up a replica data center at work and need to ensure that the new data center is configured (nearly) identically to the original. The new data center will be differently addressed and named than the original and will have differing user accounts, but all the COTS, patches, and configurations should be the same. We would normally ghost the original servers and install those images onto the new machines; however, we have a few problematic pieces of COTS that must be installed outside of an image because of how they capture the setup of the network during their installation and maintain it within their configuration information (in some cases storing it in various databases). We have tried multiple times, and this piece of COTS cannot be captured within a ghost image unless the destination machine will have an identical network setup (all the same IPs, hostnames, user accounts, etc. across the entire network) as the original. In truth, it is the setup of these special COTS that I want to audit the most, because they are difficult to install and configure in the first place. Given that we can't simply ghost, I'm trying to find a reasonable way to audit the new data center and check whether it is set up like the original (some sort of system-wide configuration audit or integrity check). I'm considering using something like Tripwire for Servers to capture the configuration on the source machines and then run an audit on the destination machines. I understand that it will still show some differences due to the minor config changes, but I'm hoping it will eliminate the majority of the work. Here are some of the constraints I'm working under:

    - The data center is comprised of multiple Windows and Linux machines of differing versions (about 20 total)
    - I absolutely cannot ghost or snap any other type of image of these machines, at least not in their final configuration
    - I want to audit the final configuration to ensure all of the COTS, patches, configurations, etc. are installed and set up properly (as compared to the original data center)
    - I would rather not install any additional tools on these machines; I'd much rather run it from a standalone machine or off a DVD
    - Price of tools is important but not an impossible burden; however, getting a solution soon is important (I can't take the time to roll my own tools to do this)
    - For the COTS that stores the network information, I don't know all of the places it stores it, so it is unlikely I could find a way in the near future to adjust its setup after the installation has occurred

    Anyone have any thoughts or alternate approaches? Can anyone recommend tools that would be usable for system-wide configuration audits?

    Read the article

  • cleaned_data() doesn't have some of the entered data

    - by SC Ghost
    I have a simple form for a user to enter a Name (CharField), Age (IntegerField), and Sex (ChoiceField). However, the data taken from the Sex choice field is not showing up in my cleaned_data(). Using a debugger, I can clearly see that the data is being received in the correct format, but as soon as I do form.cleaned_data(), all sign of my choice field data is gone. Any help would be greatly appreciated. Here is the relevant code:

        class InformationForm(forms.Form):
            Name = forms.CharField()
            Age = forms.IntegerField()
            Sex = forms.ChoiceField(SEX_CHOICES, required=True)

        def get_information(request, username):
            if request.method == 'GET':
                form = InformationForm()
            else:
                form = RelativeForm(request.POST)
                if form.is_valid():
                    relative_data = form.cleaned_data
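
    For what it's worth, two details in the excerpt stand out, and a hedged sketch of the conventional shape follows (SEX_CHOICES is assumed to be defined elsewhere): cleaned_data is a dict attribute rather than a method, and the POST branch constructs RelativeForm where InformationForm was presumably intended:

        from django import forms

        # a minimal sketch of the conventional shape, not the poster's actual fix
        class InformationForm(forms.Form):
            Name = forms.CharField()
            Age = forms.IntegerField()
            # keyword argument makes the choices binding explicit
            Sex = forms.ChoiceField(choices=SEX_CHOICES, required=True)

        def get_information(request, username):
            if request.method == 'GET':
                form = InformationForm()
            else:
                form = InformationForm(request.POST)  # same form class on the POST round-trip
                if form.is_valid():
                    sex = form.cleaned_data['Sex']    # dict access, no call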

    Read the article

  • Aptana Studio is opening but not ever closing a python.exe process

    - by SC Ghost
    I am developing a small testing website using Django 1.2 in Aptana Studio build 2.0.4.1268158907. I have a Django project that I test by running the command "runserver 8001" on my project. This command runs the project on a small server that comes with Django. However the problem arises that every time I run this command Aptana opens two instances of the process "python.exe". Upon terminating the command only one of these instances is ended. The other process continues to run and use memory. My server is not online, and the process doesn't seem to do anything that I can find. This happens every time i run the runserver command on my project and therefore more and more python.exe instances will open up through my development period. Any help discovering either the purpose of this extra python.exe or a way to prevent it from opening would be much appreciated.

    Read the article

  • DBCC MEMUSAGE in 2005/8 ?

    - by steveh99999
    I used to like using the undocumented command DBCC MEMUSAGE in SQL 2000 to see which tables were using space in the SQL data cache. In SQL 2005, this command is no longer present. Instead, a DMV, sys.dm_os_buffer_descriptors, can be used to display the data cache contents, but this doesn't quite give you the same output as DBCC MEMUSAGE. I'm also aware that you can use Quest's Spotlight tool to view a summary of data cache contents. Using this post by Umachandar Jayachandran of Microsoft, I was able to create the following equivalent for SQL 2005/8. I've wrapped Umachandar's original query in a CTE to produce summary information:

        ;WITH memusage_CTE AS
        (SELECT bd.database_id, bd.file_id, bd.page_id, bd.page_type,
                COALESCE(p1.object_id, p2.object_id) AS object_id,
                COALESCE(p1.index_id, p2.index_id) AS index_id,
                bd.row_count, bd.free_space_in_bytes,
                CONVERT(TINYINT, bd.is_modified) AS 'DirtyPage'
         FROM sys.dm_os_buffer_descriptors AS bd
         JOIN sys.allocation_units AS au ON au.allocation_unit_id = bd.allocation_unit_id
         OUTER APPLY ( SELECT TOP(1) p.object_id, p.index_id
                       FROM sys.partitions AS p
                       WHERE p.hobt_id = au.container_id AND au.type IN (1, 3) ) AS p1
         OUTER APPLY ( SELECT TOP(1) p.object_id, p.index_id
                       FROM sys.partitions AS p
                       WHERE p.partition_id = au.container_id AND au.type = 2 ) AS p2
         WHERE bd.database_id = DB_ID()
           AND bd.page_type IN ('DATA_PAGE', 'INDEX_PAGE')
        )
        SELECT TOP 20 DB_NAME(database_id) AS 'Database', OBJECT_NAME(object_id, database_id) AS 'Table Name',
               index_id, COUNT(*) AS 'Pages in Cache', SUM(dirtyPage) AS 'Dirty Pages'
        FROM memusage_CTE
        GROUP BY database_id, object_id, index_id
        ORDER BY COUNT(*) DESC

    I'm not 100% happy with the results of the above query, however. I've noticed that on a busy BizTalk MessageBox database it will return information on pages that contain GHOST rows, i.e. where data has already been deleted but has yet to be cleaned up by a background process. I need to investigate further why the cache on this server apparently contains so much GHOST data. For more information on the background ghost cleanup process, see this article by Paul Randall. However, I think the results of this query should still be of interest to a DBA. I have another post coming shortly with an example I encountered where this information proved useful to me. I notice that in SQL 2008, sys.dm_os_buffer_descriptors gained an extra column, numa_node; I'm interested to see how this is populated and how useful this column can be on a NUMA-enabled system. I'm assuming that in theory you could use this column to help analyse how your tables are spread across the NUMA-enabled data cache.

    Read the article

  • Halloween: Season for Java Embedded Internet of Spooky Things (IoST) (Part 4)

    - by hinkmond
    And now here's the Java code that you'll need to read your ghost sensor on your Raspberry Pi. The general idea is that you are using Java code to access the GPIO pin on your Raspberry Pi where the ghost sensor (JFET transistor) detects minute changes in the electromagnetic field near the Raspberry Pi, and the GPIO pin will change to high (+3 volts) when something is detected; otherwise there is no value (ground). Here's that Java code (the excerpt cuts off inside the final for loop):

        try {
            /*** Init GPIO port(s) for input ***/
            // Open file handles to GPIO port unexport and export controls
            FileWriter unexportFile = new FileWriter("/sys/class/gpio/unexport");
            FileWriter exportFile = new FileWriter("/sys/class/gpio/export");
            for (String gpioChannel : GpioChannels) {
                System.out.println(gpioChannel);
                // Reset the port
                File exportFileCheck = new File("/sys/class/gpio/gpio" + gpioChannel);
                if (exportFileCheck.exists()) {
                    unexportFile.write(gpioChannel);
                    unexportFile.flush();
                }
                // Set the port for use
                exportFile.write(gpioChannel);
                exportFile.flush();
                // Open file handle to input/output direction control of port
                FileWriter directionFile = new FileWriter("/sys/class/gpio/gpio" + gpioChannel + "/direction");
                // Set port for input
                directionFile.write(GPIO_IN);
            }

            /*** Read data from each GPIO port ***/
            RandomAccessFile[] raf = new RandomAccessFile[GpioChannels.length];
            int sleepPeriod = 10;
            final int MAXBUF = 256;
            byte[] inBytes = new byte[MAXBUF];
            String inLine;
            int zeroCounter = 0;
            // Get current timestamp with Calendar()
            Calendar cal;
            DateFormat dateFormat = new SimpleDateFormat("yyyy/MM/dd HH:mm:ss.SSS");
            String dateStr;
            // Open RandomAccessFile handle to each GPIO port
            for (int channum = 0; channum

    And then we just load up our Java SE Embedded app, place each Raspberry Pi with a ghost sensor attached in strategic locations around our Santa Clara office (which apparently is very haunted by ghosts from the Agnews Insane Asylum 1906 earthquake), and watch our analytics for any ghosts. Easy peazy. See the previous posts for the full series on the steps to this cool demo:

    - Halloween: Season for Java Embedded Internet of Spooky Things (IoST) (Part 1)
    - Halloween: Season for Java Embedded Internet of Spooky Things (IoST) (Part 2)
    - Halloween: Season for Java Embedded Internet of Spooky Things (IoST) (Part 3)
    - Halloween: Season for Java Embedded Internet of Spooky Things (IoST) (Part 4)

    Hinkmond

    Read the article
