Search Results

Search found 62606 results on 2505 pages for 'sql files'.


  • Wiped data, and duplicated folders into files.

    - by Kaustubh P
    Something weird happened today, and I don't know how. Within a folder, every subfolder now has a file of the same name with a colon appended to it. And all the files from the innermost directories in my home have been dumped into ~, each with a size of 0 bytes. I have not executed any scripts or anything. I was just checking out some Easter eggs, namely "gegls from outer space" and "free the fish", was away from the computer, and got logged out because of the screensaver. I couldn't log back in with my password, so I just reset the PC, and while booting the PC went into a drive check. But, IIRC, I saw the duplicate "folder files" before I had logged out, so that's not the reason! All the files have a timestamp of 14 Jan. Also, the contents of my Eclipse folder have been dumped into ~, right down to the jars and ini files. Help!


  • Incorrect Dates for Downloadable files in Google Snippets

    - by alds
    We have a website which creates publications and newsletters. In most (if not all) of the search results for our downloadable files, the Google snippets show dates that are one to three months earlier than when those files were actually published. That should be impossible, since those files did not even exist before the dates shown. The dates themselves do not seem to have any significance on our site. Any suggestions as to where the dates come from?


  • Deleting "undeletable" files in Vista

    - by Nik Reiman
    I recently upgraded my workstation from XP SP3 to Vista Business, and during the upgrade Windows moved my old C:\Windows directory to C:\Windows.old. I got all of the stuff I needed out of that folder, but there are six "undeletable" files there, so I cannot remove it. They are:
      Windows.old\Program1\Adobe\Reader 9.0\Resource\CMap\Identity-H
      Windows.old\Program1\Adobe\Reader 9.0\Resource\CMap\Identity-V
      Windows.old\Program1\Common Files\Adobe\Acrobat\ActiveX\AcroIEHelper.dll
      Windows.old\Program1\Common Files\Adobe\Acrobat\ActiveX\AcroIEHelperShim.dll
      Windows.old\Program1\Common Files\Adobe\Acrobat\ActiveX\AcroPDF.dll
      Windows.old\Program1\Common Files\Adobe\Acrobat\ActiveX\pdfshell.dll
    Whenever I try to delete the files, either through Explorer or a command line, I get a permission-denied error. I have tried to grant myself full permission on the files, but again, permission denied. I don't even have Acrobat installed on my Vista machine, and I uninstalled Adobe Updater. However, I still can't manage to get rid of these files. How do I nuke them for good?
    Edit: I was able to take ownership of the files, but I still can't delete them. Renaming them did not work, as I was denied permission to do that as well. I'll try booting up in safe mode and getting rid of them there.
    Edit II: Booting up into safe mode did not allow me to delete the files. Bummer.


  • Finding duplicate files?

    - by ub3rst4r
    I am going to be developing a program that detects duplicate files, and I was wondering what the best/fastest method would be to do this, and in particular which hash algorithm is best suited to it. For example, I was thinking of having it hash each file's contents and then group the hashes that are the same. Also, should there be a limit on the maximum file size, or is there a hash that is suitable for large files?
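
    A minimal sketch of the size-then-hash approach (everything here is illustrative, not taken from the question): group candidate files by size first, then hash only the files whose sizes collide, reading in chunks so large files never have to fit in memory. SHA-256 is used purely as a widely available example of a strong hash.

      import hashlib
      import os
      from collections import defaultdict

      def file_digest(path, chunk_size=1 << 20):
          """Hash a file in 1 MB chunks so large files never load fully into memory."""
          h = hashlib.sha256()   # illustrative choice; any collision-resistant hash works the same way
          with open(path, "rb") as f:
              for chunk in iter(lambda: f.read(chunk_size), b""):
                  h.update(chunk)
          return h.hexdigest()

      def find_duplicates(root):
          # Pass 1: group by size; files with a unique size cannot have duplicates.
          by_size = defaultdict(list)
          for dirpath, _, filenames in os.walk(root):
              for name in filenames:
                  path = os.path.join(dirpath, name)
                  if os.path.isfile(path):
                      by_size[os.path.getsize(path)].append(path)

          # Pass 2: hash only the size collisions and group identical digests.
          by_digest = defaultdict(list)
          for paths in by_size.values():
              if len(paths) < 2:
                  continue
              for path in paths:
                  by_digest[file_digest(path)].append(path)

          return [group for group in by_digest.values() if len(group) > 1]

    Because only size collisions get hashed, there is no need for a maximum file size; the chunked read keeps memory usage flat regardless of how big the files are.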


  • Unity Dash won't find local files or let me rearrange icons on two computers

    - by Stanton.Sculpture
    Suddenly I can't move icons around my Unity launcher, and the Dash won't search for my local files and folders. It was working when I first installed 13.10, but now it won't search for local files and won't let me rearrange the icons in any way. I've tried turning all the scopes (lenses?) on and off in multiple combinations, but the Dash won't find any files unless I use Nautilus to find them; it's mostly unresponsive. I can't see my recently used files, or the files and folders scope, at all. Dragging and dropping icons on the side dock doesn't work; they only stick to my mouse until I put them back where they were. I cannot unlock any icons from the launcher; it just doesn't do anything when I click it. I tried rebooting both of my computers and it still won't function normally. I used ubuntu-bug -w to report a bug, but no one has gotten back to me. Is there some option I changed that caused this? This is a problem on both my laptop and desktop. Please help, Alex


  • Hiding recent files in Unity dashboard

    - by Eric
    Ubuntu 13.04 (though I had the same issue in both 12.04 LTS and 12.10), Unity desktop (yes, I like it, shush). Anyway, when clicking on the dashboard there is a tab for 'Files and Folders'. I don't have any files on this computer that aren't porn. In other words, it displays the images there (as it's supposed to), but I can't have it displaying the porn for obvious reasons. I have disabled 'recent activity' and even added the folder it's all in to the 'do not record activity in the following folders' list. I'm assuming that works, but as I don't actually have any other files, it still displays them. I don't want to make it a hidden folder because it's on an external HDD and that causes issues when moving from computer to computer (I have other movies on it as well). TL;DR: How do I get rid of the 'Files and Folders' tab in the dashboard? Is it possible?


  • How to process large files in NetLogo? [closed]

    - by user65597
    I am running into problems in NetLogo with large *.csv / *.txt files. The files can consist of about 1 million records, and I need to read them in (to eventually create a diagram based on the data). With the most straightforward source code, my program needs about 2 minutes to process these files. How should I approach reading such large data files faster in NetLogo? Is NetLogo even suitable for such tasks (it seems to be designed more for teaching and learning)?
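
    One workaround worth mentioning (an assumption on my part, not something the question describes) is to pre-aggregate the raw CSV outside NetLogo and let the model read only the small summary it actually plots. A minimal Python sketch, with hypothetical file names and a hypothetical "category,value" column layout:

      import csv
      from collections import defaultdict

      def summarize(raw_path="raw_data.csv", summary_path="summary.csv"):
          """Stream the large CSV once and write per-category counts and means."""
          totals = defaultdict(float)
          counts = defaultdict(int)
          with open(raw_path, newline="") as f:
              reader = csv.DictReader(f)   # assumes a header row: category,value
              for row in reader:
                  totals[row["category"]] += float(row["value"])
                  counts[row["category"]] += 1
          with open(summary_path, "w", newline="") as f:
              writer = csv.writer(f)
              writer.writerow(["category", "count", "mean"])
              for key in sorted(totals):
                  writer.writerow([key, counts[key], totals[key] / counts[key]])

      if __name__ == "__main__":
          summarize()

    NetLogo then only has to read a handful of summary rows instead of a million records, which sidesteps the performance question entirely.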


  • DevWeek & SQL Social @ London

    - by Davide Mauri
    Yesterday I had my “SQL Server best practices for developers” session at DevWeek and I really enjoyed it a lot. For all those who asked, I’ll put the slides and demos online as soon as possible. I’m just waiting to find out where I can put them (on my website or somewhere else), so it should be just a matter of a few days. If you attended my session and would like to rate it, please use SpeakerRate here: http://speakerrate.com/talks/2857-sql-server-best-practices-for-developers I also have to thank Simon Sabin for the very nice event he organized for SQLSocial: http://sqlblogcasts.com/blogs/simons/archive/2010/02/16/SQLSocial-presents-Itzik-Ben-gan--Greg-Low-and-Davide-Mauri.aspx A lot of people attended and we had really interesting discussions. It was also my first time doing a session at a pub, and I must say it's *really* fun and enjoyable, especially when there is free beer :-) Now back to Italy and the “usual” work!


  • LiveMeeting VC PowerShell PASS – Troubleshooting SQL Server with PowerShell

    - by Laerte Junior
    Guys, join me on Wednesday, July 18th at 12 noon EDT (GMT -4) for a presentation called Troubleshooting SQL Server With PowerShell. It will be in English, so please make allowances for this. I’m sure you’re aware that my English is not perfect, but it is not so bad. I will do my best, you can be sure. The registration link will be available soon from PowerShell.sqlpass.org, so I hope to see you there. It will be a session without slides. Just code; pure PowerShell code. Trust me, we will see a lot of COOL stuff. Big thanks to Aaron Nelson (@sqlvariant) for the opportunity! Here are some more details about the presentation, “Troubleshooting SQL Server with PowerShell – The Next Level”: It is normal for us to have to face poorly performing queries or even complete failures in our SQL Server environments. This can happen for a variety of reasons, including poor database designs, hardware failure, improperly configured systems and OS updates applied without testing. As database administrators, we need to take precautions to minimize the impact of these problems when they occur, and so we need the tools and methodology required to identify and solve issues quickly. In this session we will use PowerShell to explore some common troubleshooting techniques used in our day-to-day work as a DBA, including gathering performance counters on several servers at the same time using background jobs, identifying blocked sessions, and reading and filtering the SQL error log even if the instance is offline. The approach will use some advanced PowerShell techniques that allow us to scale the code to multiple servers and run the data collection in asynchronous mode.


  • TraceTune: Larger Files and History

    - by Bill Graziano
    I updated TraceTune over the weekend. I increased the trace file upload size to 20MB. We’ve processed over half a million rows of trace data so far and I’m confident this won’t kill the server. I added average CPU and average disk reads to the screen that lists the SQL statements in a trace file. I only added these two. I’m pretty sure average writes isn’t that important, and I’m still thinking about average duration. I’m trying to balance showing you what you need with a clean, simple interface. Plus I have a way to see the averages that I describe further down. TraceTune now keeps the last 10 files that you’ve uploaded and will give you some basic details about each file. I think the last change I made is the most interesting. For each SQL statement, I show the history of that statement. You’ll see each trace file where this statement was found. It will list the averages for CPU, reads, writes and duration. This will quickly show you if you’re improving the performance of that query. In my screenshot above you can see that even though the execution counts are very different, the averages are consistent. If you want to see which queries are consuming the most resources on your server, give TraceTune a try.


  • No Sync of Files in Android U1 folder

    - by Oldbwl
    My desktop (Ubuntu 12.10) and laptop (Windows 7) happily sync files; the Ubuntu One folders are perfectly in sync. I also have an Asus Transformer with the Ubuntu One app installed. I can read files from the cloud, and they download to a U1 folder. But if I edit the files on the Asus (say, a spreadsheet in Kingsoft), it appears they never go back to the cloud unless I manually select the file to be uploaded. Is this correct?


  • Tool or script to detect moved or renamed files on Linux prior to a backup

    - by Pharaun
    Basically I am searching to see if there exists a tool or script that can detect moved or renamed files, so that I can get a list of renamed/moved files and apply the same operations on the other end of the network to conserve bandwidth. Disk storage is cheap but bandwidth isn't, and the problem is that the files often get reorganized or moved into a better directory structure. When you use rsync to do the backup, rsync won't notice that a file was renamed or moved and will re-transmit it over the network all over again, despite the same file existing on the other end. So I am wondering if there exists a script or tool that can record where all the files are and their names, then just prior to a backup rescan and detect moved or renamed files, so I can take that list and re-apply the move/rename operations on the other side. The "general" features of the files: they are large and unchanging, and they can be renamed or moved around.
    [Edit:] These are all good answers, and what I ended up doing in the end was looking at all of the answers; I will be writing some code to deal with this. Basically what I am thinking/working on now is:
      1. Using something like AIDE for the "initial" scan, which lets me keep checksums on the files because they are supposed to never change, so it would also help detect corruption.
      2. Creating an inotify daemon that monitors these files/directories and records any changes relating to renames and moves to a log file.
      3. There are some edge cases where inotify might fail to record that something happened to the file system, so there is a final step of using find to search the file system for files with a change time later than the last backup.
    This has several benefits:
      - Checksums etc. from AIDE make it possible to verify that some media did not get corrupted.
      - inotify keeps resource usage low, with no need to re-scan the filesystem over and over.
      - No need to patch rsync; if I have to patch things I can, but I would prefer to avoid it to keep the burden lower (i.e. no need to re-patch every time there is an update).
    I've used Unison before and it's really nice; however, I could have sworn that Unison keeps copies around on the filesystem and that its "archive" files can grow to be rather large.
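
    As a rough illustration of the inventory idea sketched above (an assumption-laden sketch, not the asker's finished tool): keep a checksum-to-path snapshot from the previous run, rescan just before the backup, and treat any checksum that reappears under a different path as a move/rename to replay on the remote side. The file and function names below are made up for the example, and it assumes a snapshot from an earlier run already exists.

      import hashlib
      import json
      import os

      def checksum(path, chunk_size=1 << 20):
          h = hashlib.sha256()
          with open(path, "rb") as f:
              for chunk in iter(lambda: f.read(chunk_size), b""):
                  h.update(chunk)
          return h.hexdigest()

      def scan(root):
          """Map checksum -> relative path for every file under root.
          If two files have identical contents, the last one scanned wins."""
          snapshot = {}
          for dirpath, _, filenames in os.walk(root):
              for name in filenames:
                  path = os.path.join(dirpath, name)
                  snapshot[checksum(path)] = os.path.relpath(path, root)
          return snapshot

      def detect_moves(snapshot_file, root):
          """Compare the saved snapshot with the current tree; return (old, new) path pairs."""
          with open(snapshot_file) as f:
              old = json.load(f)
          new = scan(root)
          moves = [(old[digest], new[digest])
                   for digest in old
                   if digest in new and old[digest] != new[digest]]
          with open(snapshot_file, "w") as f:
              json.dump(new, f, indent=2)   # keep the new snapshot for the next run
          return moves

    Since the files are large and unchanging, the full hashing pass is the expensive part; a real tool would skip unchanged paths by size and mtime, which is roughly the role AIDE or the inotify log plays in the plan above.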


  • What do Windows 7 encrypted files look like?

    - by Sean Farrell
    OK, this is kind of an odd question: what do Windows 7 (Home Premium) encrypted files look like "from the outside"? Now here is the story. An acquaintance of a friend of mine got a nasty virus / scareware, so I whipped out my PC-technician cap and went to work on it. What I did was remove the drive from the laptop and put it into my external drive bay. I scanned the drive and, yes, it was loaded with stuff. That basically cured the infection and I could start the system back up. To check if it cured the problem I wanted to see the system while running. There were two user accounts, one with a password and one without (both admin users!?!). So I logged into the unprotected user and cleaned up the residual issues, like the proxy server set to localhost in the browser config. Now I wanted to do the same for the password-protected user. What I noticed was that from my system and from the unprotected user account, the files of the protected user looked garbled. The file names are something like 12 random alphanumeric characters, but the folders looked OK. Naive as I was, I thought this might be how encrypted files look "from the outside". (I never use Microsoft's own security features, so how would I know? TrueCrypt is one big blob.) Since the second user could not be reached, I thought sod it and removed the password from the account. (That might have been a mistake, I know.) Now I did the same clean-up tasks and everything was nice and fine, except for the files, which were still "encrypted". So I looked into many Windows encrypted-file recovery posts, and not all hope is lost, since I should be able to extract the certificate and, with the password, regain access to the files. Also note that Windows "only" prompted me that removing the password would be insecure, not that access to encrypted files would be lost, as is claimed in most recovery articles. Resetting the password did not help and I gave up for the night. The question that nagged me half of the last night was: what if the files are not encrypted, but the scareware encrypted / destroyed them? I don't want to spend hours of work trying to recover files that are not recoverable. The thing is that the user does not remember turning encryption on, and aren't encrypted files marked in blue with the filename still readable? Many thanks for input from users who have more knowledge about Windows encrypted files...


  • Move SQL Server transaction log to another disk

    - by Jim Lahman
    When restoring a database backup, by default SQL Server places the database files in the master database file directory. In this example, that location is L:\MSSQL10.CHTL\MSSQL\DATA, as shown by the output of sp_helpfile. Hence, the restored files for the database CHTL_L2_DB are in the same directory.
    Per SQL Server best practices, the log file should be on its own disk drive so that the database and log file can operate in a sequential manner and perform optimally. The steps to move the log file are as follows:
      1. Record the location of the database files and the transaction log files
      2. Note the future destination of the transaction log file
      3. Get exclusive access to the database
      4. Detach from the database
      5. Move the log file to the new location
      6. Attach to the database
      7. Verify the new location of the transaction log
    Record the location of the database files. To view the current location of the database files, use the system stored procedure sp_helpfile:
      use chtl_l2_db
      go
      sp_helpfile
      go
    Note the future destination of the transaction log file. The future destination of the transaction log file will be K:\MSSQLLog.
    Get exclusive access to the database. To get exclusive access, alter the database access to single_user. If users are still connected to the database, remove them by using the with rollback immediate option. Note: if you had a pane connected to the database when it is placed into single_user mode, you will be presented with a reconnection dialog box.
      alter database chtl_l2_db
      set single_user with rollback immediate
      go
    Detach from the database. Now detach from the database so that we can use Windows Explorer to move the transaction log file:
      use master
      go
      sp_detach_db 'chtl_l2_db'
      go
    After copying the transaction log file, re-attach to the database:
      use master
      go
      sp_attach_db 'chtl_l2_db',
      'L:\MSSQL10.CHTL\MSSQL\DATA\CHTL_L2_DB.MDF',
      'K:\MSSQLLog\CHTL_L2_DB_4.LDF',
      'L:\MSSQL10.CHTL\MSSQL\DATA\CHTL_L2_DB_1.NDF',
      'L:\MSSQL10.CHTL\MSSQL\DATA\CHTL_L2_DB_2.NDF',
      'L:\MSSQL10.CHTL\MSSQL\DATA\CHTL_L2_DB_3.NDF'
      GO


  • Uploading Files Using ASP.NET Web Forms, Generic Handler and jQuery

    - by bipinjoshi
    In order to upload files from the client machine to the server, ASP.NET developers use the FileUpload server control. The FileUpload server control essentially renders an INPUT element with its type set to file and allows you to select one or more files. The actual upload operation is performed only when the form is posted to the server. Instead of making a full page postback, you can use jQuery to make an Ajax call to the server and POST the selected files to a generic handler (.ashx). The generic handler can then save the files to a specified folder. The remainder of this post shows how this can be accomplished: http://www.bipinjoshi.net/articles/f2a2f1ee-e18a-416b-893e-883c800f83f4.aspx


  • "No inodes left" error, but df -i says the contrary

    - by abhinavkulkarni
    I copied a lot of files into my mounted Windows drive from Ubuntu and subsequently ran into the error Error opening file '/media/windows/<some-file-path>': No space left on device. I checked the output of the df -i command to see if I had run out of inodes for the mounted Windows drive:
      Filesystem      Inodes     IUsed      IFree  IUse%  Mounted on
      /dev/sda5      2363904    504119    1859785    22%  /
      udev            207621       522     207099     1%  /dev
      tmpfs           211487       450     211037     1%  /run
      none            211487         3     211484     1%  /run/lock
      none            211487         7     211480     1%  /run/shm
      none            211487        19     211468     1%  /run/user
      /dev/sda2    458686680   2588876  456097804     1%  /media/windows
    As the output above shows, lots of inodes are available for the /media/windows drive, and I have plenty of disk space left (around 500 GB). What's the problem then?


  • How do .so files avoid the problems with passing header-only templates that MS DLL files have?

    - by Doug T.
    Based on the discussion around this question, I'd like to know how .so files / the ELF format / the GCC toolchain avoid the problems with passing classes defined purely in header files (like the std library). According to Jan in that answer, the dynamic linker/loader only picks one version of such a class to load if it is defined in two .so files. So if two .so files have two definitions, perhaps built with different compiler options etc., the dynamic linker can pick one to use. Is this correct? How does this work with inlining? For example, MSVC inlines templates aggressively, which makes the solution I describe above untenable for DLLs. Does GCC never inline header-only templates like the std library the way MSVC does? If so, wouldn't that make the ELF functionality described above ineffective in these cases?


  • Approach to retrieve files from server

    - by Aerus
    I'm in the process of making a Java application with a corresponding update application. At any given time the user may want to update the application, and the updater will ask for a list of files of the latest release. Based on this list, the updater can determine which files need to be downloaded to complete the update. I now have two approaches to solve this, but I would like to know which approach will put the least stress on my application and server:
      1. I send a list of the files I want to download to my server; the server zips those files and simply returns the compressed file to the application.
      2. The updater sends a request for each separate file to the server, which simply returns that file.
    The application will be used mainly in Belgium and The Netherlands, and connections/bandwidth tend to be pretty decent here. The average size of a single file should be around 100 KB and at most 1 MB. I expect an update to have anywhere between 10 and 50 new files, and at most 100 persons/day to update the application, i.e. in the week when a new version is released. I hope this is enough information to sketch my problem, and any advice is welcome. If there is another common way to tackle this, I'd be glad to hear it.
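
    For what it's worth, the first approach (one zipped response per update) is simple to sketch. The snippet below is illustrative Python for the server side; the question doesn't say what the server runs, so the directory, the function name, and the idea of receiving a plain list of relative paths are all assumptions, and it would sit behind whatever HTTP endpoint handles the updater's request. With files of 100 KB to 1 MB and a few dozen per update, building the archive in memory is reasonable.

      import io
      import zipfile
      from pathlib import Path

      RELEASE_DIR = Path("/srv/releases/latest")   # hypothetical location of the current release

      def build_update_archive(requested_files):
          """Return the bytes of a zip containing the requested release files.

          requested_files: iterable of relative paths sent by the updater,
          e.g. ["lib/app-core.jar", "config/app.ini"] (illustrative names).
          """
          buffer = io.BytesIO()
          with zipfile.ZipFile(buffer, "w", zipfile.ZIP_DEFLATED) as archive:
              for rel_path in requested_files:
                  full_path = (RELEASE_DIR / rel_path).resolve()
                  # Refuse paths that escape the release directory (e.g. "../../etc/passwd").
                  if RELEASE_DIR.resolve() not in full_path.parents:
                      raise ValueError("illegal path requested: " + rel_path)
                  archive.write(full_path, arcname=rel_path)
          return buffer.getvalue()

    A single request per update keeps the per-file HTTP overhead off both the server and the client, at the cost of a little CPU for building the archive.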


  • ODBC in SSIS 2012

    - by jamiet
    In August 2011 the SQL Server client team published a blog post entitled Microsoft is Aligning with ODBC for Native Relational Data Access in which they basically said "OLE DB is the past, ODBC is the future. Deal with it.". From that blog post: "We encourage you to adopt ODBC in the development of your new and future versions of your application. You don't need to change your existing applications using OLE DB, as they will continue to be supported on Denali throughout its lifecycle. While this gives you a large window of opportunity for changing your applications before the deprecation goes into effect, you may want to consider migrating those applications to ODBC as a part of your future roadmap." I recently undertook a project using SSIS 2012 and heeded that advice by opting to use ODBC Connection Managers rather than OLE DB Connection Managers. Unfortunately my finding was that the ODBC Connection Manager is not yet ready for prime-time use in SSIS 2012. The main issue I found was that you can't populate an Object variable with a recordset when using an Execute SQL Task connecting to an ODBC data source; any attempt to do so will result in the error "Disconnected recordsets are not available from ODBC connections." I have filed a bug on Connect at ODBC Connection Manager does not have same functionality as OLE DB. For this reason I strongly recommend that you don't make the move to ODBC Connection Managers in SSIS just yet; best to wait for the next version of SSIS before doing that. I found another couple of issues with the ODBC Connection Manager that are worth keeping in mind:
      - It doesn't recognise System Data Source Names (DSNs), only User DSNs (bug filed at ODBC System DSNs are not available in the ODBC Connection Manager). UPDATE: according to a comment on that Connect item, this may only be a problem on 64-bit.
      - In the OLE DB Connection Manager parameter ordinals are 0-based; in the ODBC Connection Manager they are 1-based (oh, I just can't wait for the upgrade mess that ensues from this one!!!).
    You have been warned! @jamiet


  • Tables in the SQL Server "master" database, will they cause problems?

    - by pepoluan
    Folks, please be kind to me... I'm just an 'accidental' DBA because our DBA resigned, so I'm a total newbie at database administration. You see, I have this application, "ESET Remote Administration Server" (ERAS), that stores its logs and analysis in (originally) a local Access database. The decision was made to migrate its database to a SQL Server 2008 R2 machine. ESET (the maker of the software) helpfully provided tools to perform such a migration; unfortunately, being the DBA neophyte that I am, I didn't realize that I first had to create my own database (on the SQL Server side) and assign that database as the 'default' database for ERAS' ODBC connection. As a result, the migration tool successfully created a whole bunch of tables inside the "master" database. My questions: Should I leave things as they are, or should I re-migrate the ERAS database to a different database? If you suggest I perform a re-migration, my plan is to (1) create a new instance, (2) create a new database within the new instance, (3) create a new ODBC System DSN on the ERAS server pointing to the new DB from step 2, and (4) use ESET's migration tool to migrate from the current DSN to the new DSN. Do you think I missed a step there? Thanks beforehand for any guidance.


  • Converting MOD files to QuickTime or MPEG for Adobe Premiere Pro

    I've been editing lots of videos lately. My company got a video camera, a Canon Legria FS200, which saves movies in a digital format as MOD files. Unfortunately, Adobe Premiere doesn't work with these files, so I needed software to convert MOD files to QuickTime or MPEG files. I found a good free one: it's called MPEG Streamclip. It works well, it's pretty fast, and it's free. What's not to like?


  • String manipulation functions in SQL Server 2000 / 2005

    - by Vipin
    SQL Server provides a range of string manipulation functions. I was aware of most of them in the back of my mind, but when I needed to use one, I had to dig it out either from the SQL Server help file or from Google. So I thought I would list some of the functions that perform common operations in SQL Server. Hope it will be helpful to you all.
    Len('String_Expression') - returns the length of the input String_Expression.
      Example: Select Len('Vipin')
      Output: 5
    Left('String_Expression', int_characters) - returns int_characters characters from the left of the String_Expression.
    Right('String_Expression', int_characters) - returns int_characters characters from the right of the String_Expression.
      Example: Select Left('Vipin',3), Right('Vipin',3)
      Output: Vip, pin
    LTrim('String_Expression') - removes spaces from the left of the input String_Expression.
    RTrim('String_Expression') - removes spaces from the right of the input String_Expression.
    Note: to remove spaces from both ends of the string expression, use LTrim and RTrim in conjunction.
      Example: Select LTrim(' Vipin '), RTrim(' Vipin '), LTrim(RTrim(' Vipin '))
      Output: 'Vipin ', ' Vipin', 'Vipin' (the single quote marks are not part of the SQL output; they are included only to show the remaining spaces.)
    Substring('String_Expression', int_start, int_length) - returns the part of String_Expression starting at position int_start and int_length characters long.


  • Preferred Apache permissions for www files with several authors

    - by user1316464
    I can't for the life of me figure out how to design the permissions scheme for my Apache files. My requirements seem pretty simple:
      - Apache should have standard permissions of RX on directories and R on files
      - Web authors should have RWX on directories and RW on files
      - No access for "other"
      - New files/folders should inherit the proper permissions
    Here are the schemes I've tried.
      1. 570 for directories and 460 for files, owner Apache, group Webdev. The problem here is that new files created by users in the Webdev group are owned by user:Webdev, and Apache can't read them. If Apache were in the Webdev group, it would also have the wrong permissions (i.e. it would have write permission on files).
      2. 750 for directories and 640 for files, owner Webdev, group Apache (Webdev is a member of Apache). The problem here is that there is only one webdev account and I have multiple people who need access to contribute. In theory this would work with only one developer, if Webdev were also a member of the Apache group.
    Any ideas?

