Search Results

Search found 61297 results on 2452 pages for 'open files'.


  • How do I set up jQuery to exclude classes in a function?

    - by user1497158
    I essentially only understand how to read bits of JavaScript and make modifications. I am using grid-slider, a script I purchased, but the code's author is MIA at the moment, so hopefully someone here can help me. Basically, it's a slider with options to have links open normally or in a panel on the same page. I want some links to open in a panel and others to open in the parent window. It seems to me that all this requires is activating the panel display function (which I've done) and then setting up an exclude so that uls or lis with a specific class are skipped by that function. I've read about the .not() selector, but I don't see how to apply it to this code:

        else {
            if (this._displayOverlay) {
                if ($item.find(">.content").size() > 0) {
                    $item.data("type", "static");
                } else {
                    var contentType = this.getContentType($link);
                    var url = $link.attr("href");
                    $item.data({type: contentType, url: (typeof url != "undefined") ? url : ""});
                }
                $item.css("cursor", "pointer").bind("click", {elem: this, i: i}, this.openOverlay);
            }
            $link.data("text", $item.find(">div:first").html());
            $img = $link.find(">img");
        }

    Can anyone help based on looking at this? Here's a link to the demo of the code: http://codecanyon.net/item/jquery-grid-style-slider/full_screen_preview/1204040 Thank you.

  • ffmpeg MP3 conversion

    - by Alex
    I want to convert several thousand MP3 files, and I get this for some of them:

        ffmpeg -t 45 -i "public/system/musics/files/2009/original/03_Memphis.mp3" -y "memphis.mp3"
        FFmpeg version 0.5-svn17737+3:0.svn20090303-1ubuntu6, Copyright (c) 2000-2009 Fabrice Bellard, et al.
          configuration: --enable-gpl --enable-postproc --enable-swscale --enable-x11grab --extra-version=svn17737+3:0.svn20090303-1ubuntu6 --prefix=/usr --enable-avfilter --enable-avfilter-lavf --enable-libgsm --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-pthreads --disable-stripping --disable-vhook --enable-libdc1394 --disable-armv5te --disable-armv6 --disable-armv6t2 --disable-armvfp --disable-neon --disable-altivec --disable-vis --enable-shared --disable-static
          libavutil   49.15. 0 / 49.15. 0
          libavcodec  52.20. 0 / 52.20. 0
          libavformat 52.31. 0 / 52.31. 0
          libavdevice 52. 1. 0 / 52. 1. 0
          libavfilter  0. 4. 0 /  0. 4. 0
          libswscale   0. 7. 1 /  0. 7. 1
          libpostproc 51. 2. 0 / 51. 2. 0
          built on Apr 10 2009 23:18:41, gcc: 4.3.3
        public/system/musics/files/2009/original/03_Memphis.mp3: could not find codec parameters

    What could be the problem?
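    A batch run over several thousand files is easier to untangle if one bad file doesn't abort the whole job. A rough Python sketch of that loop (the out/ directory and the source-tree layout are assumptions; the ffmpeg flags are the ones from the failing command above):

        import subprocess
        from pathlib import Path

        Path("out").mkdir(exist_ok=True)
        failed = []
        for mp3 in Path("public/system/musics/files").rglob("*.mp3"):
            # Same flags as above: cut at 45 seconds, overwrite the output.
            result = subprocess.run(
                ["ffmpeg", "-t", "45", "-i", str(mp3), "-y", f"out/{mp3.name}"],
                capture_output=True, text=True)
            if result.returncode != 0:
                # "could not find codec parameters" usually indicates a damaged
                # or mis-tagged source file; set it aside instead of stopping.
                failed.append(mp3)

        print(len(failed), "files failed:")
        for f in failed:
            print(f)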

  • How can I speed up Subversion checkins? (Using ANKH, latest, Visual Studio 2010)

    - by Timothy Khouri
    I've started working on a new web project with some friends... we are using the latest Subversion server (installed last week) and the latest version of ANKH. My web project is a whopping 1.5 megabytes (that's with all images, CSS files, DLLs after compiling, PDB files... etc.). Checking in even super-small changes (literally adding the letter "x" to a few files for testing) takes FOREVER! (about 10 seconds - I almost killed myself). The ANKH client is measuring in BYTES PER SECOND... BYTES? per second... I must be doing something wrong. Does anyone know what config file has a joke totallyMessWithPeople=true so that I can turn that off or something? Oh, also, changing one "big" file of a super 10k gains speed up to nearly the speed of light (which is apparently 857 bytes per second). Help me Obi-Wan Kenobi, you're my only hope! EDIT: As a note... my real work project that uses Visual SourceSafe 2005 (I know, ouch) uploads files at about 200-500 kbps from this very same computer/internet connection.

  • Path to XML DTD for DBUnit in multi-module Java/Maven project?

    - by HDave
    I have a multi-module Maven project. Within the persist module I have a number of XML data files that reference a DTD:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE myapp-data SYSTEM "myapp-data.dtd">
        <dataset>
            .....omitted for brevity....
        </dataset>

    The DTD is stored in the same directory as the XML files, and even Eclipse reports these XML files as valid. However, when I run the application, the DBUnit FlatXmlDataSet throws a FileNotFound exception because it cannot locate the DTD. It is apparently looking for the DTD in the root project directory (e.g. myproject/). I would have expected it to look in the same directory as the XML file itself (e.g. myproject/persist/target/test-data). The DBUnit source code has this to say about it: "Relative DOCTYPE uri are resolved from the current working dicrectory." What's a good way to fix this?

  • Passing Binary Data to a Stored Procedure in SQL Server 2008

    - by Joe Majewski
    I'm trying to figure out a way to store files in a database. I know it's recommended to store files on the file system rather than in the database, but the job I'm working on would highly prefer using the database to store these images (files). There are also some constraints: I'm not an admin user, and I have to write stored procedures to execute all the commands. This hasn't been much trouble so far, but I cannot for the life of me establish a way to store a file (image) in the database. When I try to use the BULK command, I get an error saying "You do not have permission to use the bulk load statement." The bulk utility seemed like the easy way to upload files to the database, but without permissions I have to figure out a workaround. I decided to use an HTML form with a file upload input type and handle it with PHP. The PHP calls the stored procedure and passes in the contents of the file. The problem is that now it's saying that the max length of a parameter can only be 128 characters. Now I'm completely stuck: I don't have permission to use the bulk command, and it appears that the max length of a parameter I can pass to the SP is 128 characters. I expected to run into problems because binary characters and ASCII characters don't mix well together, but I'm at a dead end... Thanks
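    For reference, the usual shape of this is a VARBINARY(MAX) procedure parameter bound as binary data rather than spliced into the SQL text; the 128-character ceiling suggests the parameter is being declared or bound as a short character type. A hedged sketch (the procedure name and signature are invented; the question uses PHP, but the binding idea is the same in any driver - shown here with Python's pyodbc):

        import pyodbc

        conn = pyodbc.connect(
            "DRIVER={SQL Server};SERVER=myserver;DATABASE=mydb;UID=user;PWD=pass")

        with open("photo.jpg", "rb") as f:
            data = f.read()

        # Assumes a procedure like: CREATE PROCEDURE dbo.SaveImage @img VARBINARY(MAX) ...
        # Passing the bytes as a bound parameter avoids BULK INSERT permissions
        # and any textual length limit on the statement itself.
        cur = conn.cursor()
        cur.execute("{CALL dbo.SaveImage (?)}", (data,))
        conn.commit()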

  • Handling file uploads with JavaScript and Google Gears, is there a better solution?

    - by gnarf
    So - I've been using this method of file uploading for a while, but it seems that Google Gears has poor support for the newer browsers that implement the HTML5 specs. I've heard the word "deprecated" floating around a few channels, so I'm looking for a replacement that can accomplish the following tasks and support the new browsers. I can always fall back to Gears / standard file POSTs, but the following items make my process much simpler:

    - Users MUST be able to select multiple files for uploading in the dialog.
    - I MUST be able to receive status updates on the transmission of a file (progress bars).
    - I would like to be able to use PUT requests instead of POST.
    - I would like to be able to easily attach these events to existing HTML elements using JavaScript, i.e. the file selection should be triggered on a <button> click.
    - I would like to be able to control response/request parameters easily using JavaScript.

    I'm not sure if the new HTML5 browsers have support for the desktop/request objects Gears uses, or if there is a Flash uploader that has the features I am missing in my Google searches. An example of uploading code using Gears:

        // select some files:
        var desktop = google.gears.factory.create('beta.desktop');
        desktop.openFiles(selectFilesCallback);

        function selectFilesCallback(files) {
            $.each(files, function(k, file) {
                // this code actually goes through a queue, and creates some status bars
                // but it is unimportant to show here...
                sendFile(file);
            });
        }

        function sendFile(file) {
            var request = google.gears.factory.create('beta.httprequest');
            request.open('PUT', upl.url);
            request.setRequestHeader('filename', file.name);
            request.upload.onprogress = function(e) {
                // gives me % status updates... allows e.loaded/e.total
            };
            request.onreadystatechange = function() {
                if (request.readyState == 4) {
                    // completed the upload!
                }
            };
            request.send(file.blob);
            return request;
        }

    Edit: apparently Flash isn't capable of using PUT requests, so I have changed that item to a "like" instead of a "must".

  • ReportServer 2005 Custom Assembly Error

    - by user752083
    We have a report server on SQL Server 2005. It was previously working on computer X, which we are replacing with computer Y. Computer X had only one default SQL instance as our test environment; machine Y is a new machine with two SQL instances, Y\TEST and Y\DEV. Both machines run Windows Server 2003 and both have SQL Server 2005 installed together with Report Server. We have not currently done any work on the DEV instance, just on the TEST instance; the report server is installed for TEST. SSRS previously worked as intended on machine X, and on Y\TEST it works for reports that don't use any custom code. However, some of the reports load a custom assembly, Localization. This assembly currently exists in the following folders on the server:

        C:\Program Files\Microsoft Visual Studio 8\Common7\IDE\PublicAssemblies
        C:\Program Files\Microsoft Visual Studio 8\Common7\IDE\PrivateAssemblies
        C:\Program Files\Microsoft SQL Server\MSSQL.2\Reporting Services\ReportServer\bin

    Also, in rssrvpolicy.config, the entry for Report_Expressions_Default_Permissions has been changed from Execution to FullTrust (although this was not necessary on machine X). When attempting to upload the rdl files through the web interface, we get the following errors:

        Error while loading code module: 'Localization, Version=1.0.0.0, Culture=neutral, PublicKeyToken=e302ddd55ecd694a'. Details: Could not load file or assembly 'Localization, Version=1.0.0.0, Culture=neutral, PublicKeyToken=e302ddd55ecd694a' or one of its dependencies. The system cannot find the file specified. (rsErrorLoadingCodeModule)
        There is an error on line 14 of custom code: [BC30451] Name 'Localization' is not declared. (rsCompilerErrorInCode)

    Does anyone know any other places with potential errors?

  • Python: saving and loading objects with pickle

    - by Peterstone
    Hello, I'm trying to save and load objects using the pickle module. First I declare my objects:

        >>> class Fruits: pass
        ...
        >>> banana = Fruits()
        >>> banana.color = 'yellow'
        >>> banana.value = 30

    After that I open a file called 'Fruits.obj' (previously I created a new .txt file and renamed it 'Fruits.obj'):

        >>> import pickle
        >>> filehandler = open(b"Fruits.obj","wb")
        >>> pickle.dump(banana, filehandler)

    After doing this I close my session, begin a new one, and put the following (trying to access the object that is supposed to be saved):

        file = open("Fruits.obj", 'r')
        object_file = pickle.load(file)

    But I get this message:

        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
          File "C:\Python31\lib\pickle.py", line 1365, in load
            encoding=encoding, errors=errors).load()
        ValueError: read() from the underlying stream did not return bytes

    I don't know what to do because I don't understand this message. Does anyone know how I can load my object 'banana'? Thank you! EDIT: As some of you have suggested, I put:

        >>> import pickle
        >>> file = open("Fruits.obj", 'rb')

    There was no problem, but the next thing I put was:

        >>> object_file = pickle.load(file)

    And I get this error:

        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
          File "C:\Python31\lib\pickle.py", line 1365, in load
            encoding=encoding, errors=errors).load()
        EOFError
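    For comparison, a minimal sketch of the round trip that avoids both errors: read in binary mode ('rb', which fixes the ValueError) and close or flush the writing handle before loading, since an unclosed handle can leave an empty file behind (which is what the final EOFError suggests):

        import pickle

        class Fruits:
            pass

        banana = Fruits()
        banana.color = 'yellow'
        banana.value = 30

        # The with-block closes (and therefore flushes) the file, so the
        # pickle actually reaches disk before we try to read it back.
        with open("Fruits.obj", "wb") as fh:
            pickle.dump(banana, fh)

        with open("Fruits.obj", "rb") as fh:   # 'rb', not 'r'
            loaded = pickle.load(fh)

        print(loaded.color, loaded.value)      # yellow 30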

  • ant ftp task "Could not date test remote file"

    - by avok00
    Hi guys! I am using the Ant ftp task to deploy my project files to a remote app server. Ant is not able to detect the date of the remote files and re-uploads all of them every time. When I start Ant in debug mode it says:

        [ftp] checking date for mailer.war
        [ftp] Could not date test remote file: mailer.war assuming out of date.

    The remote server is MS FTP (Windows Vista version). The Ant version is 1.8.2; I use commons-net-2.2 and jakarta-oro-2.0.8 (could not find a newer version). My ant task looks like this:

        <!-- Deploy new and changed files -->
        <target name="deploy" depends="package" description="Deploy new and changed files">
            <ftp server="localhost"
                 userid=""
                 password=""
                 action="send"
                 depends="yes"
                 passive="true"
                 systemTypeKey="WINDOWS"
                 serverTimeZoneConfig="Europe/Sofia"
                 defaultDateFormatConfig="MMM dd yyyy"
                 recentDateFormatConfig="MMM dd HH:mm"
                 binary="true"
                 retriesAllowed="3"
                 verbose="true">
                <fileset dir="${webapp.artefacts.path}"/>
            </ftp>
        </target>

    I read an article here, Ant: The Definitive Guide, that says I need a version of Jakarta ORO newer than 2.0.8 to talk to MS FTP servers, but I was not able to find such a version. The Jakarta ORO site - http://jakarta.apache.org/oro/ - says the project is retired as of 2010, but their latest distribution is from 2003! Please, can anyone help me with this? Any solution, or any alternatives to the Ant ftp task? Thanks!

  • Why is qry.Post executed in asynchronous mode?

    - by Ryan
    Recently I met a strange problem; see the code snippet below:

        var
          sqlCommand: string;
          connection: TADOConnection;
          qry: TADOQuery;
        begin
          connection := TADOConnection.Create(nil);
          try
            connection.ConnectionString := 'Provider=Microsoft.Jet.OLEDB.4.0;Data Source=Test.MDB;Persist Security Info=False';
            connection.Open();
            qry := TADOQuery.Create(nil);
            try
              qry.Connection := connection;
              qry.SQL.Text := 'Select * from aaa';
              qry.Open;
              qry.Append;
              qry.FieldByName('TestField1').AsString := 'test';
              qry.Post;
              beep;
            finally
              qry.Free;
            end;
          finally
            connection.Free;
          end;
        end;

    First, create a new Access database named test.mdb, put it under the directory of this test project, and create a table named aaa in it with a single text field named TestField1. Set a breakpoint at the "beep" line and launch the test application under IDE debug mode. When the IDE stops at the breakpoint (qry.Post has been executed), open test.mdb with Microsoft Access and look at table aaa: you will find no changes in it. If you press F9 and let the IDE continue running, a new record is inserted into table aaa; but if you press Ctrl+F2 to terminate the application at the breakpoint, no record is inserted. In normal circumstances, a new record should be inserted into table aaa right after qry.Post executes. Can anyone explain this problem? It has troubled me for a long time. Thanks! BTW, the IDE is Delphi 2010, and the Access .mdb file was created by Microsoft Access 2007 under Windows 7.

  • Alternative to 'Dispatch for ASP' deployment plug-in?

    - by Django Reinhardt
    Hi there, we've recently stumbled across the excellent Dispatch for ASP deployment plug-in. It looks great apart from one thing: it doesn't work with Visual Studio 2010, at least for us, anyway. (It's supposed to work fine.) (Yes, we've tried everything: we've managed to get Dispatch working for another FTP site, but not the main one we regularly deploy to. We have managed to connect to our main site through FileZilla FTP, so the site itself is configured correctly. All settings have been triple-checked, but the software still throws up weird errors, always to do with its internal libraries.) So does anyone know of any other comparable FTP-based deployment plug-ins for Visual Studio? Here's what Dispatch does (and so any suggested replacement must do):

    - Monitor any altered files in the project. When a file is changed, it's added to a list of files to be deployed.
    - To deploy these files to the live site, all we need to do is click "Upload" and the plugin will connect via FTP to our live site and upload all the files.
    - We can filter out any filenames we don't want to be monitored/uploaded (e.g. .cs or web.config or /Images/, etc.)

    I think that's all the features we need. Thanks for any suggestions!

  • Internet Explorer 8 opens files in the browser instead of the client

    - by Rogier
    Our company works with a great Business Intelligence tool, CorVu 4.2, to analyse operational and strategic data. For several years we have been successfully working with SharePoint 2007 to collaborate and share information with colleagues. Most of my colleagues work with Internet Explorer 7, but step by step Internet Explorer 8 is being rolled out in the company. We share a lot of CorVu files through SharePoint, but since we started using Internet Explorer 8 we have a problem that is new to us. If we click on a CorVu file in Internet Explorer 8 (not necessarily in SharePoint), a pop-up asks how to open the file; if we save the file, there is no problem. But if we open the file, it is shown in the browser and not in the CorVu client! See the screenshot below: link (I removed some unnecessary information). So far my colleagues accept this 'feature' in Internet Explorer 8. But if we open and close more CorVu files, multiple errors (more than 10) show up, starting with: (unable to place more hyperlinks). Pressing Enter makes the errors disappear, but it's not professional! I contacted the creators of CorVu, but they don't have a solution in their client. Perhaps there is a solution in Internet Explorer 8? The extension of a CorVu file can be .sqy, .tab or .qrp. Is it possible to force these files to open in the standard client instead of the browser?

  • Looking for a web-based, embeddable file manager

    - by Kristi H.
    I'm looking for a web-based, embeddable file manager. I haven't found anything suitable through Google or the Stack Overflow archives. Does anyone know of a file manager that meets the following criteria?

    The file manager must:
    - be embeddable (e.g. Flash or a Java applet)
    - run from my server (no storing uploads in a remote location)
    - be licensed for business use (fees are okay)
    - let me disable uploads; users shouldn't be able to upload files, only download them
    - allow me to tag files and/or place them in folders
    - let me disable authentication or handle it from the backend; my app already has a login system and I don't want to make users log in twice
    - allow users to select and download multiple files
    - have an attractive interface

    The file manager should:
    - have an external config file
    - display thumbnail previews for files
    - be skinnable
    - be actively developed or at least actively maintained

    Thanks for your help. I need a sophisticated file manager and I'd like to avoid writing it from scratch.

  • Version Control: multiple version hell, file synchronization

    - by SigTerm
    Hello. I would like to know how you normally deal with this situation. I have a set of utility functions spread across, say, 5-10 files. Technically they are a static library, cross-platform - SConscript/SConstruct plus a Visual Studio project (not solution). Those utility functions are used in multiple small projects (15+, and the number increases over time). Each project has a copy of a few files or of the entire library, not a link into one central place. Sometimes a project uses one file, sometimes two, some use everything. Normally, the utility functions are included as a copy of every file plus the SConscript/SConstruct or Visual Studio project (depending on the situation). Each project has a separate git repository. Sometimes one project is derived from another, sometimes it isn't. You work on every one of them, in random order. There are no other people (to make things simpler). The problem arises when, while working on one project, you modify those utility files. Because each project has its own copy, this introduces a new version, which leads to a mess when you later (a week later, for example) try to guess which version has the most complete functionality (i.e. you added a function to a.cpp in one project, and added another function to a.cpp in another project, which created a version fork). How would you handle this situation to avoid "version hell"? One way I can think of is using symbolic links/hard links, but it isn't perfect: if you delete the one central storage, it will all go to hell. And hard links won't work on a dual-boot system (although symbolic links will). It looks like what I need is something like an advanced git repository, where the code for the project is stored in one local repository but is synchronized with multiple external repositories. But I'm not sure how to do it, or if it is possible to do this with git. So, what do you think?

  • Recurring error "The current identity (NT AUTHORITY\NETWORK SERVICE) does not have write access to..."

    - by tuseau
    Hi, I keep receiving the error below in my ASP.NET web app. I give the Network Service account rights to the specified folder and it runs fine for a while, but within a day or two the error reoccurs, because the Network Service account has been removed from the rights for the folder. Adding it again fixes it, but why does it keep happening? Could it be anything to do with using interop components (such as WMI)? Here's the full error:

        Server Error in '/DriveMonitor' Application.
        The current identity (NT AUTHORITY\NETWORK SERVICE) does not have write access to 'C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files'.
        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
        Exception Details: System.Web.HttpException: The current identity (NT AUTHORITY\NETWORK SERVICE) does not have write access to 'C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files'.
        Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.
        Stack Trace:
        [HttpException (0x80004005): The current identity (NT AUTHORITY\NETWORK SERVICE) does not have write access to 'C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files'.]
           System.Web.HttpRuntime.SetUpCodegenDirectory(CompilationSection compilationSection) +8918190
           System.Web.HttpRuntime.HostingInit(HostingEnvironmentFlags hostingFlags) +152
        [HttpException (0x80004005): The current identity (NT AUTHORITY\NETWORK SERVICE) does not have write access to 'C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files'.]
           System.Web.HttpRuntime.FirstRequestInit(HttpContext context) +8890735
           System.Web.HttpRuntime.EnsureFirstRequestInit(HttpContext context) +85
           System.Web.HttpRuntime.ProcessRequestInternal(HttpWorkerRequest wr) +259

  • How can I reorder an mbox file chronologically?

    - by Joshxtothe4
    Hello, I have a single spool mbox file that was created with Evolution, containing a selection of emails that I wish to print. My problem is that the emails are not placed into the mbox file chronologically. I would like to know the best way to order the messages from first to last using bash, Perl or Python: by date received for messages addressed to me, and by date sent for messages sent by me. Would it perhaps be easier to use maildir files or such? The emails currently exist in this format:

        From [email protected] Fri Aug 12 09:34:09 2005
        Message-ID: <[email protected]>
        Date: Fri, 12 Aug 2005 09:34:09 +0900
        From: me <[email protected]>
        User-Agent: Mozilla Thunderbird 1.0.6 (Windows/20050716)
        X-Accept-Language: en-us, en
        MIME-Version: 1.0
        To: someone <[email protected]>
        Subject: Re: (no subject)
        References: <[email protected]>
        In-Reply-To: <[email protected]>
        Content-Type: text/plain; charset=ISO-8859-1; format=flowed
        Content-Transfer-Encoding: 8bit
        Status: RO
        X-Status:
        X-Keywords:
        X-UID: 371
        X-Evolution-Source: imap://[email protected]/
        X-Evolution: 00000002-0010

        Hey the actual content of the email

        someone wrote:
        > lines of quoted text

    I am wondering if there is a way to use this information to easily reorganize the file, perhaps with Perl or such.
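    A sketch of one way to do this with Python's standard library (file names invented; this sorts everything on the Date: header, so splitting received vs. sent ordering, e.g. via the Received: headers, would be a refinement):

        import mailbox
        from email.utils import parsedate_to_datetime

        src = mailbox.mbox("evolution.mbox")
        # Sort by the Date: header; real data may need a fallback for
        # messages with a missing or unparseable Date.
        ordered = sorted(src, key=lambda m: parsedate_to_datetime(m["Date"]))

        dst = mailbox.mbox("evolution-sorted.mbox")
        for msg in ordered:
            dst.add(msg)
        dst.flush()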

  • Eclipse - Import existing multi-repo CVS project folder

    - by iQ
    Hey guys, wondering if anyone can help me out with Eclipse in terms of importing an existing CVS-managed project. I am currently trying to shift my work onto the Eclipse IDE. Some details about my project and environment: I'm working in Linux Ubuntu, the project folder is located on a mounted shared network drive, and I have installed the "Eclipse CVS Client" plug-in for my version of Eclipse (Helios). I've tried many ways to get Eclipse to use my existing folder as a project and recognize the CVS data in the CVS folders. I have tried the following options:

    - Created a new project, selected existing source, located my project folder and clicked OK to finish creating. In the end the CVS files weren't automatically read.
    - Did the same as above, and after project creation went to "Project menu > Team > Share project"; it asks me to choose a repository and doesn't automatically find the CVS information in the subfolders.

    If you're wondering, I have set up both repositories in Eclipse and can browse them through the CVS browser. My project directory layout is like this:

        +-Project Folder (no CVS folder at this level)
        +---Repo A folder
        +-----CVS meta-info folder is INSIDE, along with all checked-out files from Repo A
        +---Repo B folder
        +-----CVS meta-info folder is INSIDE, along with all checked-out files from Repo B
        +-(a couple of random files, not in CVS)

    Thanks for the help

  • The instruction at "0x7c910a19" referenced memory at "0xffffffff". The memory could not be "read"

    - by ClareBear
    Hello guys/girls, I have a small issue: I receive the error above before the .vbs terminates. I don't know why this error is thrown. Below is the process of the .vbs file:

        Call ImportTransactions()
        Call UpdateTransactions()

        Function ImportTransactions()
            Dim objConnection, objCommand, objRecordset, strOracle
            Dim strSQL, objRecordsetInsert
            Set objConnection = CreateObject("ADODB.Connection")
            objConnection.Open "DSN=*****;UID=*****;PWD==*****;"
            Set objCommand = CreateObject("ADODB.Command")
            Set objRecordset = CreateObject("ADODB.Recordset")
            strOracle = "SELECT query here from Oracle database"
            objCommand.CommandText = strOracle
            objCommand.CommandType = 1
            objCommand.CommandTimeout = 0
            Set objCommand.ActiveConnection = objConnection
            objRecordset.cursorType = 0
            objRecordset.cursorlocation = 3
            objRecordset.Open objCommand, , 1, 3
            If objRecordset.EOF = False Then
                Do Until objRecordset.EOF = True
                    strSQL = "INSERT query here into SQL database"
                    strSQL = Query(strSQL)
                    Call RunSQL(strSQL, objRecordsetInsert, False, conTimeOut, conServer, conDatabase, conUsername, conPassword)
                    objRecordset.MoveNext
                Loop
            End If
            objRecordset.Close()
            Set objRecordset = Nothing
            Set objRecordsetInsert = Nothing
        End Function

        Function UpdateTransactions()
            Dim strSQLUpdateVAT, strSQLUpdateCodes
            Dim objRecordsetVAT, objRecordsetUpdateCodes
            strSQLUpdateVAT = "UPDATE query here SET [value:costing output] = ([value:costing output] * -1)"
            Call RunSQL(strSQLUpdateVAT, objRecordsetVAT, False, conTimeOut, conServer, conDatabase, conUsername, conPassword)
            strSQLUpdateCodes = "UPDATE query here SET [value:costing output] = ([value:costing output] * -1) different WHERE clause"
            Call RunSQL(strSQLUpdateCodes, objRecordsetUpdateCodes, False, conTimeOut, conServer, conDatabase, conUsername, conPassword)
            Set objRecordsetVAT = Nothing
            Set objRecordsetUpdateCodes = Nothing
        End Function

    UPDATE: If I exit the function right after opening the connection (see below), it still causes the same error:

        Function ImportTransactions()
            Dim objConnection, objCommand, objRecordset, strOracle
            Dim strSQL, objRecordsetInsert
            Set objConnection = CreateObject("ADODB.Connection")
            objConnection.Open "DSN=*****;UID=*****;PWD==*****;"
            Set objCommand = CreateObject("ADODB.Command")
            Set objRecordset = CreateObject("ADODB.Recordset")
            Exit Function
        End Function

    It does both the import and the update, and seems to throw this error afterwards. Thanks in advance for any help, Clare

  • How to detect changing directory size in Perl

    - by materiamage
    Hello, I am trying to find a way of monitoring directories in Perl - in particular the size of a directory - and, upon detecting a change in directory size, performing a particular action. The issue I have is with large files that require a noticeable amount of time to copy into this directory, e.g. 100MB. What happens (in Windows, not Unix) is that the system reserves enough disk space for the entire file, even though the copy is still in progress. This causes problems for me, because my script will try to perform an action on a file that has not finished copying. I can easily detect directory size changes in Unix via 'du', but 'du' in Windows does not behave the same way. Are there any accurate methods of detecting directory size changes in Perl? Edit: Some points to clarify:

    - My Perl script is only monitoring a particular directory and, upon detecting a new file or a new directory, performs an action on it. It is not copying any files; users on the network will be copying files into the directory I am monitoring.
    - The problem occurs when a new file or directory appears (copied, not moved) that is significantly large (> 100MB, but usually a couple of GB) and my program fires before this copy completes.
    - In Unix I can easily 'du' to see that the file/directory in question is growing in size, and take the appropriate action.
    - In Windows the size is static, so I cannot detect this change.
    - opendir/readdir/closedir is not feasible, as some of the directories that appear may contain thousands of files, and I want to avoid the overhead of enumerating them.

    Ideally I would like my program to be triggered on change, but I am not sure how to do this. As of right now it busy-waits until it detects a change. The change in file/directory size is not in my control.
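    Since the reported size is static on Windows while the copy is in flight, one common workaround is to test for the copier's write lock instead of the size: an exclusive open fails until the copy finishes. A rough sketch of the idea (shown in Python for brevity; the same test in Perl is an open for read-write plus sleep):

        import time

        def wait_until_copied(path, interval=2.0):
            # While Windows is still copying a file it holds it open without
            # sharing, so opening it for writing raises an error; once the
            # copy completes, the open succeeds and the file is safe to use.
            while True:
                try:
                    with open(path, "rb+"):
                        return
                except OSError:
                    time.sleep(interval)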

  • Archiver Securing SQLite Data without using Encryption on iPhone

    - by Redrocks
    I'm developing an iPhone app that uses Core Data with a SQLite data store and lots of images in the resource bundle. I want a "simple" way to obfuscate the file structure of the SQLite database and the image files to prevent the casual hacker/unscrupulous developer from gaining access to them. When the app is deployed, the database file and image files would be obfuscated. Upon launching, the app would read in and un-obfuscate the database file, write the un-obfuscated version to the user's "tmp" directory for use by Core Data, and read/un-obfuscate image files as needed. I'd like to apply a simple algorithm to the files that would somehow scramble/manipulate the file data so that the SQLite database data isn't discernible when the db is opened in a text editor, and so that neither is recognized by other applications (SQLite Manager, Photoshop, etc.). It seems, from the information I've read, that I could use NSFileManager, NSKeyedArchiver, and NSData to accomplish this, but I'm not sure how to proceed. I've been developing software for many years but I'm new to everything Cocoa Touch, Mac and iPhone. Also, I've never had to secure/encrypt my data, so this is new. Any thoughts, suggestions, or links to solutions are appreciated.
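    To illustrate the kind of scrambling being described (a deterrent, not real security): XOR-ing the bytes with a repeating key hides the recognizable SQLite and image headers and is trivially reversible at launch. A toy sketch with made-up names - the same loop over an NSData byte buffer works on the phone:

        from itertools import cycle

        def xor_scramble(data: bytes, key: bytes) -> bytes:
            # XOR is its own inverse: applying the same key twice
            # returns the original bytes.
            return bytes(b ^ k for b, k in zip(data, cycle(key)))

        with open("Model.sqlite", "rb") as f:    # hypothetical store name
            plain = f.read()

        scrambled = xor_scramble(plain, b"not-a-real-secret")
        assert xor_scramble(scrambled, b"not-a-real-secret") == plain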

  • using a database and deploying the application

    - by evan
    I have a WPF application that stores a large amount of information in XML files, and as the user uses the application they add more information to those files. It's basically using the XML files as a database. Since over the life of the program the XML files have gotten quite large, and I've been thinking about putting the data on a website, I've been looking into how to move all the information into an SQL database. I've used SQL databases with web applications (PHP, Ruby, and ASP.NET) but never with a desktop application. Ideally I'd like to keep all the information in one database file and distribute it along with the application, without requiring the user to connect to a remote database (so they don't need an internet connection - though eventually it would be nice if it could compare the local file's version with one online somewhere and update if necessary) and without making them install a local database server on their computer. Is this possible? I'd also like to use LINQ with any new database solution so switching to a database doesn't force too many changes (I read the XML with LINQ). I'm sure this question has been asked before and that there are already some good tutorials on the subject, but I just can't find them.
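    What this describes is an embedded, file-based database: one ordinary file shipped with the application and opened in-process, with no server to install (on .NET, System.Data.SQLite and SQL Server Compact follow this model and have LINQ providers). A minimal sketch of the model using Python's built-in sqlite3, with an invented file name:

        import sqlite3

        # The entire database is this one file; no server process and no
        # network connection are required.
        con = sqlite3.connect("appdata.db")
        con.execute("CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, info TEXT)")
        con.execute("INSERT INTO items (info) VALUES (?)", ("migrated from XML",))
        con.commit()

        for row in con.execute("SELECT id, info FROM items"):
            print(row)
        con.close()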

  • ADODB.Connection undefined

    - by Wes Groleau
    Reference: http://stackoverflow.com/questions/1690622/excel-vba-to-sql-server-without-ssis

    After I got the above working, I copied all the global variables/constants from the routine, which included:

        Const CS As String = "Driver={SQL Server};" _
            & "Server=**;" _
            & "Database=**;" _
            & "UID=**;" _
            & "PWD=**"
        Dim DB_Conn As ADODB.Connection
        Dim Command As ADODB.Command
        Dim DB_Status As String

    into a similar module in another spreadsheet. I also copied:

        Sub Connect_To_Lockbox()
            If DB_Status <> "Open" Then
                Set DB_Conn = New Connection
                DB_Conn.ConnectionString = CS
                DB_Conn.Open    ' problem!
                DB_Status = "Open"
            End If
        End Sub

    I added the same reference (ADO 2.8). The first spreadsheet still works; the second, at DB_Conn.Open, pops up: "Run-time error '-214767259 (80004005)': [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified". Removing the references on both, saving the files, re-opening, and re-adding the references doesn't help. The one still works and the other gets the error. ?!?

  • CEIL is one too high for exact integer divisions

    - by Synetech
    This morning I lost a bunch of files, but because the volume they were on was both internally and externally defragmented, all of the information necessary for a 100% recovery is available; I just need to fill in the FAT where required. I wrote a program to do this and tested it on a copy of the FAT that I dumped to a file, and it works perfectly, except that for a few of the files (17 out of 526) the FAT chain is one single cluster too long, and thus cross-linked with the next file. Fortunately I know exactly what the problem is. I used ceil in my EOF calculation because even a single byte over will require a whole extra cluster:

        // Cluster  is the starting cluster of the file
        // Size     is the size (in bytes) of the file
        // BPC      is the number of bytes per cluster
        // NumClust is the number of clusters in the file
        // EOF      is the last cluster of the file's FAT chain
        DWORD NumClust = ceil( (float)(Size / BPC) );
        DWORD EOF      = Cluster + NumClust;

    This algorithm works fine for everything except files whose size happens to be an exact multiple of the cluster size, in which case they end up one cluster too long. I thought about it for a while but am at a loss. It seems like it should be simple, but somehow it is surprisingly tricky. What formula would work for files of any size?
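    One standard formulation avoids floating point entirely: integer ceiling division rounds up on any remainder but adds nothing for an exact multiple, and a chain of NumClust clusters starting at Cluster ends at Cluster + NumClust - 1 rather than Cluster + NumClust. A sketch (in Python; both expressions translate directly to C):

        def clusters_needed(size: int, bpc: int) -> int:
            # Integer ceiling division: (size + bpc - 1) // bpc rounds up
            # only when there is a remainder.
            return (size + bpc - 1) // bpc

        def last_cluster(cluster: int, size: int, bpc: int) -> int:
            # A chain of N clusters starting at `cluster` ends at
            # cluster + N - 1, not cluster + N.
            return cluster + clusters_needed(size, bpc) - 1

        assert clusters_needed(8192, 4096) == 2   # exact multiple: no extra cluster
        assert clusters_needed(8193, 4096) == 3   # one byte over: whole extra cluster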

  • Query broke down and left me stranded in the woods

    - by user1290323
    I am trying to execute a query that deletes all files from the images table that do not exist in the filter table. I am skipping the 3,500 latest files in the database, so as to "trim" the table back to 3,500 + "X" amount of records in the filter table. The filter table holds markers for the file, as well as the file id used in the images table. The code will run on a cron job. My code:

        $sql = mysql_query("SELECT * FROM `images` ORDER BY `id` DESC") or die(mysql_error());
        while($row = mysql_fetch_array($sql)){
            $id = $row['id'];
            $file = $row['url'];
            $getId = mysql_query("SELECT `id` FROM `filter` WHERE `img_id` = '".$id."'") or die(mysql_error());
            if(mysql_num_rows($getId) == 0){
                $IdQue[] = $id;
                $FileQue[] = $file;
            }
        }
        for($i=3500; $i<$x; $i++){
            mysql_query("DELETE FROM `images` WHERE id='".$IdQue[$i]."' LIMIT 1") or die("line 18".mysql_error());
            unlink($FileQue[$i]) or die("file Not deleted");
        }
        echo ($i-3500)." files deleted.";

    Output: 0 files deleted.
    Database contents: images table, 10,000 rows; filter table, 63 rows. Rows in the filter table that contain an images table id: 63. Execution time of the PHP script: 4 seconds +/- 0.5 second.

    Relevant DB structure:

        TABLE: images
            id
            url
            etc...

        TABLE: filter
            id
            img_id (CONTAINS ID FROM images table)
            etc...
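    For what it's worth, the row-at-a-time round trips can usually be collapsed into one statement. A hedged sketch of the idea (table and column names are taken from the question; the connection details are invented, and the url values would still have to be SELECTed first if the files themselves must be unlinked):

        import pymysql  # any MySQL client library works the same way

        DELETE_ORPHANS = """
            DELETE images FROM images
            LEFT JOIN filter ON filter.img_id = images.id
            WHERE filter.id IS NULL
              AND images.id NOT IN (
                  SELECT id FROM (
                      SELECT id FROM images ORDER BY id DESC LIMIT 3500
                  ) AS newest)
        """

        conn = pymysql.connect(host="localhost", user="user",
                               password="pass", database="db")
        with conn.cursor() as cur:
            cur.execute(DELETE_ORPHANS)          # rowcount = rows removed
            print(cur.rowcount, "files deleted.")
        conn.commit()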

  • Unit testing a Rails 2.3.5 plugin

    - by brad
    I'm writing a new plugin for a Rails 2.3.5 app. I've included an app directory (which makes it an engine) so I can easily load some extra routes. Not sure if that affects anything. Anyway, in the test directory I have two files, test_helper.rb and my_plugin_test.rb, which were generated automatically using script/generate plugin my_plugin. When I go to the vendor/plugins/my_plugin directory and run rake test, the tests don't seem to run. I get the following console output:

        (in /Users/me/Repos/my_app/source/trunk/vendor/plugins/my_plugin)
        /Users/me/.rvm/rubies/jruby-1.4.0/bin/jruby -I"lib:lib:test" "/Users/me/.rvm/gems/jruby-1.4.0/gems/rake-0.8.7/lib/rake/rake_test_loader.rb" "test/my_plugin_test.rb"

    So it obviously sees my test file, but none of the tests inside get run; I just get back to my console prompt. What am I missing here? I figured the generated code would work out of the box. Here are the two files:

    test_helper.rb:

        require 'rubygems'
        require 'active_support'
        require 'active_support/test_case'

    my_plugin_test.rb:

        require 'test_helper'

        class MyPluginTest < ActiveSupport::TestCase
          # Replace this with your real tests.
          test "the truth" do
            assert true
          end

          test "Factories are supported" do
            assert_not_nil Factory
          end
        end

    File structure:

        vendor
        - plugins
          - my_plugin
            - app
            - config
              - routes.rb
            - generators
              - my_plugin
                - some generator files.rb
            - lib
              - my_plugin.rb
              - my_plugin
                - my_plugin_lib_file.rb
            - rails
              - init.rb
            - Rakefile
            - tasks
              - my_plugin_tasks.rake
            - test
              - test_helper.rb
              - my_plugin_test.rb
