Search Results

Search found 13068 results on 523 pages for 'copy and paste'.


  • asp.net search application index update help

    - by srinivasan
    Hi, I'm developing a simple search application (ASP.NET/VB.NET). The index table is actually a hash table that will be stored in my file system. The search page will open this file in read mode, copy it into a Hashtable object, and perform the search; the update and delete functions will open it in write mode and update it. What do I have to do to make this correct, so that no exception is thrown when multiple users access these things at the same time? What do I have to do to make this robust and error free? I want multiple users to access the search page without any problem, and the index updates to happen in parallel as well. Thanks, srinivasan
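
    A common way to get this safety is to guard the index file with a reader/writer lock, so any number of searches can run concurrently while updates and deletes get exclusive access. A minimal sketch of the idea in C# (the IndexStore wrapper and its delegates are illustrative, not from the question):

        using System;
        using System.Collections;
        using System.Threading;

        // Hypothetical guard around the on-disk index file.
        public static class IndexStore
        {
            private static readonly ReaderWriterLockSlim Lock = new ReaderWriterLockSlim();

            // Many search-page requests may read at the same time.
            public static Hashtable LoadForSearch(Func<Hashtable> readFromDisk)
            {
                Lock.EnterReadLock();
                try { return readFromDisk(); }
                finally { Lock.ExitReadLock(); }
            }

            // Updates and deletes run one at a time, excluding readers.
            public static void Modify(Action writeToDisk)
            {
                Lock.EnterWriteLock();
                try { writeToDisk(); }
                finally { Lock.ExitWriteLock(); }
            }
        }

    Note that this only serializes access within one worker process; if several processes can touch the index file, open it with FileShare.None and retry instead.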

    Read the article

  • Jquery getJSON Not Working Cross Site

    - by CJ
    I have a piece of JavaScript that grabs JSON data. When executed locally, everything seems to work fine; however, when I try accessing it from a different site, it doesn't work. Here's the script:

        $(function(){
            var aT = new AjaxTest();
            aT.getJson();
        });

        var AjaxTest = function() {
            this.ajaxUrl = "http://mydeveloperpage.com/sandbox/ajax_json_test/client_reciever.php";
            this.getJson = function(){
                $.getJSON(this.ajaxUrl, function(data){
                    $.each(data, function(i, piece){
                        alert(piece);
                    });
                });
            }
        }

    You can find a copy of the exact same file at "http://mydeveloperpage.com/sandbox/ajax_json_test/". Any help would be greatly appreciated. Thanks!
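
    For what it's worth, the browser's same-origin policy blocks a plain cross-site $.getJSON call, and jQuery only switches to JSONP when the URL contains callback=?. A hedged one-line change (assuming the PHP endpoint wraps its output in the supplied callback):

        this.ajaxUrl = "http://mydeveloperpage.com/sandbox/ajax_json_test/client_reciever.php?callback=?";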

    Read the article

  • Remove files from Bazaar

    - by Kristopher Ives
    I'm using Bazaar (bzr) to keep the source code for a website updated, but we've run into a problem when we remove files from version control. The files we are removing are ones we never intended to version in the first place. When this happens, we use bzr rm --keep to remove the file from version control but keep it in the file system. However, doing a bzr push or bzr pull results in the removed file(s) being deleted on the other branches (other sites that use our code). We need a way to make sure that a bzr push or bzr pull doesn't actually remove those files from the working copy. Anyone have any ideas?

    Read the article

  • Using an edit template without using Html.EditorFor()

    - by Mark Nijhof
    I have a date/time picker combination in an edit template that can be used like Html.EditorFor(x => x.ETA), but now I want to use the same template somewhere I don't have a model containing a DateTime property. So I tried Html.Editor("DateWithTime", "Arrival"), which uses the correct template but doesn't assign a value to ViewData.ModelMetadata.PropertyName, which is something my template relies on: it sets the id of the textbox, which is obviously important. Is there a way to render the template and assign an id value to ViewData.ModelMetadata.PropertyName, so I can re-use the logic in the template instead of having to copy it?
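
    For reference, in ASP.NET MVC the first argument of Html.Editor is the expression (which drives the generated name/id) and the second is the template name, so the call above may simply have its arguments reversed. Inside a template, ViewData.TemplateInfo also exposes the field prefix even when there is no model property. A hedged sketch (WebForms view syntax; the template name comes from the question):

        <%-- In the view: id/name come from "Arrival", the template is DateWithTime --%>
        <%= Html.Editor("Arrival", "DateWithTime") %>

        <%-- Inside EditorTemplates/DateWithTime.ascx --%>
        <% var fieldId = ViewData.TemplateInfo.GetFullHtmlFieldId(""); %>
        <input type="text" id="<%= fieldId %>" name="<%= ViewData.TemplateInfo.GetFullHtmlFieldName("") %>" />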

    Read the article

  • sql exception when transferring project from usb to c:\

    - by jello
    I'm working on a C# Windows Forms program with Visual Studio 2008. Usually I work from school, directly on my USB drive, but when I copy the folder to my hard drive at home, a SqlException is unhandled whenever I try to write to the database. It is thrown at the conn.Open(); line. Here's the exception:

        Database 'L:\system\project\the_project\the_project\bin\Debug\PatientMonitoringDatabase.mdf' already exists. Choose a different database name.
        Cannot attach the file 'C:\Documents and Settings\Administrator\My Documents\system\project\the_project\the_project\bin\Debug\PatientMonitoringDatabase.mdf' as database 'PatientMonitoringDatabase'.

    It's weird, because my connection string uses |DataDirectory|, so it should work on any drive. Here's my connection string:

        string connStr = "Data Source=.\\SQLEXPRESS;AttachDbFilename=|DataDirectory|\\PatientMonitoringDatabase.mdf; " +
                         "Initial Catalog=PatientMonitoringDatabase; " +
                         "Integrated Security=True";

    What's going on here?
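
    A hedged explanation, since the question doesn't show the server state: with AttachDbFilename plus Initial Catalog, SQL Server Express registers the attached file under the catalog name, so the database attached from the L: path is still registered when the C: copy tries to attach under the same name. One common workaround is to drop the Initial Catalog clause (and/or detach the stale database), for example:

        // Sketch: without Initial Catalog the attached file is keyed by its
        // path, so the same project can run from different drives.
        string connStr = "Data Source=.\\SQLEXPRESS;" +
                         "AttachDbFilename=|DataDirectory|\\PatientMonitoringDatabase.mdf;" +
                         "Integrated Security=True;User Instance=True";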

    Read the article

  • executing two functions with wshshell

    - by sushant
    I have two different functions (copy and zip) to be executed. Can I do it with a single WshShell script? I tried:

        Dim WshShell, oExec, g, h
        h = "D:\d"
        g = "xcopy " & h & " " & "D:\y\ /E & cmd /c cd D:\c & D: & winzip32.exe -min -a D:\a"
        Set WshShell = CreateObject("WScript.Shell")
        Set oExec = WshShell.Exec(g)
        Do While oExec.Status = 0
            WScript.Sleep 100
        Loop
        WScript.Echo oExec.Status

    It didn't work, though the separate programs, i.e. g = "xcopy " & h & " " & "D:\y\ /E" and g = "cmd /c cd D:\d & D: & winzip32.exe -min -a D:\a", work. I am sorry for the formatting problem. Any help is appreciated.
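
    One hedged observation: the & separator is a cmd.exe feature, and WshShell.Exec does not launch commands through cmd.exe, so xcopy receives the whole tail as its own arguments. Running the entire chain under one cmd /c should let & do its job (paths taken from the question):

        ' Sketch: a single cmd.exe instance runs both steps, so "&" chains them.
        g = "cmd /c xcopy D:\d D:\y\ /E & cd /d D:\c & winzip32.exe -min -a D:\a"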

    Read the article

  • create table from another table in different database in sql server 2005

    - by Greg
    Hi, I have a database "temp" with a table "A". I created a new database "temp2". I want to copy table "A" from "temp" to a new table in "temp2". I tried this statement, but it says I have incorrect syntax:

        CREATE TABLE B IN 'temp2' AS (SELECT * FROM A IN 'temp');

    Here is the error:

        Msg 156, Level 15, State 1, Line 2
        Incorrect syntax near the keyword 'IN'.
        Msg 156, Level 15, State 1, Line 3
        Incorrect syntax near the keyword 'IN'.

    Anyone know what the problem is? Thanks in advance, Greg
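
    For reference, T-SQL has no CREATE TABLE ... IN syntax; the usual idiom is SELECT ... INTO with three-part names. A sketch, assuming both databases live on the same server and use the dbo schema:

        SELECT *
        INTO   temp2.dbo.B
        FROM   temp.dbo.A;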

    Read the article

  • Latex: Extracting the sty files of all the used packages

    - by Zlatko
    Hi. After writing a large .tex file and using many packages, I want to archive everything: not just the .tex and .jpg files but also the .sty files. This is because sometimes some options in the sty files change, and then I can't compile the file. The "problem" is that, since I'm using Ubuntu, all the packages are already installed system-wide. I don't want to have to copy them manually. Is there a program that can do this automatically? Thanks.
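
    A hedged pointer: TeX distributions ship kpsewhich, which resolves a package name to the installed file, so the used .sty files can be gathered with a small shell loop (the package names below are placeholders):

        # Sketch: copy the resolved .sty files into an archive directory.
        for pkg in geometry hyperref graphicx; do
            cp "$(kpsewhich "$pkg.sty")" ./archive/
        done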

    Read the article

  • DataSource for Tomcat web app, Spring and Hibernate

    - by EugeneP
    The web app runs on Tomcat. The DataSource is configured in the Spring configuration and is used by Hibernate. If we cannot use JNDI, what would you suggest as a DataSource? Would org.springframework.jdbc.datasource.DriverManagerDataSource be OK? It's not very good but, frankly speaking, it can be used on a production server, right? Just a bit of a headache with too-frequent connection reopening. Also, we could use BasicDataSource from Apache. It's much better of course, but here's the question, assuming we don't use JNDI: if every instance of the app creates its own copy of the DataSource, and every DataSource can have 5 open connections, what do we get? Num_of_running_apps * Num_of_max_active_connections = the maximum number of open connections on the DB for this user? Second question: from the perspective of Hibernate, is there any difference in which DataSource implementation is used? Will it work reliably with any DataSource?
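
    For context, a typical non-JNDI pooled setup wires Apache Commons DBCP's BasicDataSource straight into the Spring configuration; Hibernate just consumes whatever DataSource it is handed. A sketch (driver, URL, and credentials are placeholders):

        <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
            <property name="driverClassName" value="com.mysql.jdbc.Driver"/>
            <property name="url" value="jdbc:mysql://localhost:3306/mydb"/>
            <property name="username" value="appuser"/>
            <property name="password" value="secret"/>
            <!-- the per-deployment connection cap discussed above -->
            <property name="maxActive" value="5"/>
        </bean>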

    Read the article

  • Visual Studio : Make files in a folder go to bin/debug and not bin/debug/folder

    - by CF_Maintainer
    Consider this: I have a folder called \SQLCE35Dlls inside my solution. It holds some DLLs that are required for the application to interact with a SQL CE database in a standalone fashion (without SQL Server CE 3.5 installed on the PC). After a build, I want these files to go to bin/debug and not to bin/debug/SQLCE35Dlls/. Setting "Copy if newer" creates the latter situation; I want the former. Is it possible to facilitate this, or does it have to be done as part of the installer script? (I'd like to avoid adding the DLLs at the root level of the solution instead of inside a folder.) This is a WinForms project solution.
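
    One common workaround is a post-build event that flattens the folder into the output directory, using the standard Visual Studio macros (a sketch):

        xcopy "$(ProjectDir)SQLCE35Dlls\*.dll" "$(TargetDir)" /Y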

    Read the article

  • Casting a non-generic type to a generic one

    - by John Sheehan
    I've got this class:

        class Foo
        {
            public string Name { get; set; }
        }

    And this class:

        class Foo<T> : Foo
        {
            public T Data { get; set; }
        }

    Here's what I want to do:

        public Foo<T> GetSome()
        {
            Foo foo = GetFoo();
            Foo<T> foot = (Foo<T>)foo;
            foot.Data = GetData<T>();
            return foot;
        }

    What's the easiest way to convert Foo to Foo<T>? I can't cast directly (InvalidCastException), and I don't want to copy each property manually (in my actual use case there's more than one property) if I don't have to. Is a user-defined type conversion the way to go?
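
    Since the object returned by GetFoo() simply isn't an instance of the derived Foo<T>, no cast will make the runtime treat it as one, and C# doesn't permit user-defined conversions between base and derived classes. A common pattern is a constructor on Foo<T> that copies the base state in one place (a sketch):

        class Foo<T> : Foo
        {
            public Foo() { }

            // Copy base-class state from an existing Foo; the property list
            // lives here instead of at every call site.
            public Foo(Foo other)
            {
                Name = other.Name;
            }

            public T Data { get; set; }
        }

        // Usage sketch:
        // Foo<int> foot = new Foo<int>(GetFoo()) { Data = GetData<int>() };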

    Read the article

  • Unaccounted for database size

    - by Nazadus
    I currently have a database that is 20GB in size. I've run a few scripts which show each table's size (and other incredibly useful information, such as index stats), and the biggest table has 1.1 million records, which take up 150MB of data. We have fewer than 50 tables, most of which take up less than 1MB of data. After looking at the size of each table, I don't understand why the database shouldn't be 1GB in size after a shrink. The amount of available free space that SQL Server (2005) reports is 0%. The log mode is set to simple. At this point my main concern is that I feel like I have 19GB of unaccounted-for used space. Is there something else I should look at? Normally I wouldn't care and would make this a passive research project, except that this particular situation calls for us to do a backup and restore on a weekly basis to put a copy on a satellite (which has no internet, so it must be done manually). I'd much rather copy 1GB (or even 5GB!) than 20GB of data each week.

    sp_spaceused reports the following:

        Navigator-Production    19184.56 MB    3.02 MB

    And the second part of it:

        19640872 KB    19512112 KB    108184 KB    20576 KB

    I've found a few other scripts (such as the ones from two of the server database size questions here); they all report the same information shown above or below. The script I am using is from SQLTeam. Here is the header info:

        * BigTables.sql
        * Bill Graziano (SQLTeam.com)
        * graz@<email removed>
        * v1.11

    The top few tables show this (table, rows, reserved space, data, index, unused, etc.):

        Activity           1143639    131 MB    89 MB    41768 KB    1648 KB     46%    1%
        EventAttendance     883261     90 MB    58 MB    32264 KB     328 KB     54%    0%
        Person              113437     31 MB    15 MB    15752 KB     912 KB    103%    3%
        HouseholdMember     113443     12 MB     6 MB     5224 KB     432 KB     82%    4%
        PostalAddress        48870      8 MB     6 MB     2200 KB     280 KB     36%    3%

    The rest of the tables are the same size or smaller. There are no more than 50 tables.

    Update 1: All tables use unique identifiers, usually an int incremented by 1 per row. I've also re-indexed everything, and I ran the DBCC shrink command as well as updating the usage before and after, over and over. An interesting thing I found: when I restarted the server, confirmed no one was using it (no maintenance procs running; this is a very new application, under a week old), and went to run the shrink, every now and then it would say something about data having changed. Googling yielded too few useful answers, with the obvious ones not applying (it was 1am and I had disconnected everyone, so it seems impossible that was really the case). The data was migrated via C# code which basically looked at another server and brought things over. The quantity of deletes, at this point in time, is probably under 50k rows. Even if those were the biggest rows, that wouldn't be more than 100MB, I would imagine. When I go to shrink via the GUI, it reports 0% available to shrink, indicating that I've already gotten it as small as it thinks it can go.

    Update 2: sp_spaceused 'Activity' yields this (which seems right on the money):

        Activity    1143639    134488 KB    91072 KB    41768 KB    1648 KB

    The fill factor was 90. All primary keys are ints. Here is the command I used to update usage:

        DBCC UPDATEUSAGE(0);

    Update 3: Per Edosoft's request:

        Image    111975    2407773    19262184

    It appears as though the Image table believes it's the 19GB portion. I don't understand what this means, though. Is it really 19GB, or is it misrepresented?

    Update 4: Talking to a co-worker, I found out that it's because of the pages, as someone else here has also suggested might be the case. The only index on the Image table is a clustered PK. Is this something I can fix, or do I just have to deal with it? The regular script shows the Image table to be 6MB in size.

    Update 5: I think I'm just going to have to deal with it after further research. The images have been resized to roughly 2-5KB each; on a normal file system they wouldn't consume much space, but on SQL Server they seem to consume considerably more. The real answer, in the long run, will likely be separating that table into another partition or something similar.
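
    For anyone chasing the same gap, the allocation can be broken down per table and allocation type straight from the catalog views; LOB pages are where small-but-numerous image rows tend to hide. A sketch for SQL Server 2005+:

        -- Reserved space per table and allocation type (in-row, overflow, LOB).
        SELECT  OBJECT_NAME(p.object_id)        AS table_name,
                au.type_desc,
                SUM(au.total_pages) * 8 / 1024  AS reserved_mb
        FROM    sys.partitions p
        JOIN    sys.allocation_units au ON au.container_id = p.partition_id
        GROUP BY p.object_id, au.type_desc
        ORDER BY reserved_mb DESC;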

    Read the article

  • Flex 4 - Using .pfm/.pfb fonts

    - by Zed-K
    Hi everyone, I just switched to Flash Builder 4 and the Flex 4 SDK, and it seems it's no longer possible to use a .pfm/.pfb font, either by embedding it or by using it as a system font. I keep getting error messages, and Google can't find anybody with the same issue. I tried several methods:

      - copy/pasting the [Embed] statement that worked under the Flex 3 SDK
      - installing the font and then simply calling it by name in a CSS declaration without embedding it; this seems to work for every .ttf and .otf system font, but not for .pfm/.pfb ones
      - using a Flash-generated SWF which embeds the font

    So far none of these seems to work. Does anybody have an idea of how to achieve this? I actually don't care about using a system font without embedding it, as long as it works. I'd be really grateful if somebody could help me on this; I'm totally stuck and cannot use another font instead.

    Read the article

  • django media url is not resolved in 500 internal server error template

    - by Tom Tom
    Hi, I'm using a 500.html template for my app, which is an identical copy of the 404.html with some minor text changes. Interestingly, the {{ media_url }} context variable is not resolved when the 500.html is rendered (e.g. when I force an internal server error), resulting in a page without any CSS. An easy way to work around this would be to hardcode the links to the CSS, but I'm just curious why media_url is not resolved. Presumably it's because the server encountered an internal server error, and that means context variables are no longer available?
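
    For background, Django's built-in server_error view deliberately renders 500.html with an empty Context, so no context processors (and therefore no media_url) run. A common workaround is a custom handler500 (a sketch for Django 1.x; myapp is a placeholder):

        # urls.py
        handler500 = 'myapp.views.server_error'

        # myapp/views.py
        from django import http
        from django.template import RequestContext, loader

        def server_error(request, template_name='500.html'):
            """Like the default 500 view, but uses RequestContext so
            context processors run and media_url resolves."""
            t = loader.get_template(template_name)
            return http.HttpResponseServerError(t.render(RequestContext(request)))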

    Read the article

  • Django foreign keys cascade deleting and "related_name" parameter (bug?)

    - by Wiseman
    In this topic I found a good way to prevent cascade deletion of related objects when it's not necessary:

        class Factures(models.Model):
            idFacture = models.IntegerField(primary_key=True)
            idLettrage = models.ForeignKey('Lettrage', db_column='idLettrage', null=True, blank=True)

        class Paiements(models.Model):
            idPaiement = models.IntegerField(primary_key=True)
            idLettrage = models.ForeignKey('Lettrage', db_column='idLettrage', null=True, blank=True)

        class Lettrage(models.Model):
            idLettrage = models.IntegerField(primary_key=True)

            def delete(self):
                """Detaches factures and paiements from the current lettre before deleting"""
                self.factures_set.clear()
                self.paiements_set.clear()
                super(Lettrage, self).delete()

    But this method seems to fail when the ForeignKey field uses the "related_name" parameter. As far as I can tell, the clear() method works fine and saves the "disassociated" instances. But then, while deleting, Django uses another in-memory copy of this very object, and since that copy is still associated with the object we are trying to delete: whoosh! ...bye-bye, relatives :) The database was architected before me, and in a somewhat odd way, so I can't get rid of these "related_names" in a reasonable amount of time. Has anybody heard of a workaround for this kind of trouble?
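
    One hedged observation: when a ForeignKey declares related_name, the reverse accessor is that name rather than the <model>_set default, so a delete() override has to clear the renamed managers (the names below are illustrative):

        class Lettrage(models.Model):
            idLettrage = models.IntegerField(primary_key=True)

            def delete(self):
                # With related_name='factures' / 'paiements' on the FKs,
                # these accessors replace factures_set / paiements_set.
                self.factures.clear()
                self.paiements.clear()
                super(Lettrage, self).delete()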

    Read the article

  • How can I get this code involving unique_ptr to compile?!

    - by Neil G
        #include <vector>
        #include <memory>
        using namespace std;

        class A {
        public:
            A(): i(new int) {}
            A(A const& a) = delete;
            A(A&& a): i(move(a.i)) {}
            unique_ptr<int> i;
        };

        class AGroup {
        public:
            void AddA(A&& a) { a_.emplace_back(move(a)); }
            vector<A> a_;
        };

        int main() {
            AGroup ag;
            ag.AddA(A());
            return 0;
        }

    This does not compile (it says that unique_ptr's copy constructor is deleted). I tried replacing move with forward; I'm not sure if I did it right, but it didn't work for me.
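
    For what it's worth, under a fully move-aware C++11 toolchain this class design is sound; one hedged workaround for early standard libraries whose emplace_back still insists on a copy is to default the move operations and use push_back with an explicit move:

        #include <memory>
        #include <utility>
        #include <vector>

        class A {
        public:
            A() : i(new int) {}
            A(const A&) = delete;
            A& operator=(const A&) = delete;
            A(A&&) = default;              // compiler-generated member-wise move
            A& operator=(A&&) = default;
            std::unique_ptr<int> i;
        };

        class AGroup {
        public:
            void AddA(A&& a) { a_.push_back(std::move(a)); }
            std::vector<A> a_;
        };

        int main() {
            AGroup ag;
            ag.AddA(A());
            return 0;
        }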

    Read the article

  • SVN best practice - checking out root folder

    - by Stephen Dolier
    Hi all, a quick question about svn checkout best practice. Once the structure of a repository is set up, i.e. trunk, branches, tags, is it normal to have the root checked out on our local machines? Or should you only check out the trunk, if that's what you are working on, or a branch if you choose to create one? The reason I ask is that every time someone creates a branch or tag, we all get a copy when we do an update. By the way, we recently migrated from VSS.
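
    For reference, the usual convention is to check out only the subtree you actually work on, so new branches and tags never arrive with an update (the URL is a placeholder):

        svn checkout http://svn.example.com/repo/trunk myproject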

    Read the article

  • Cloning objects in C#

    - by Alison
    I want to do something like:

        myObject myObj = GetmyObj(); // create and fill a new object
        myObject newObj = myObj.Clone();

    ...and then make changes to the new object that are not reflected in the original object. I don't often need this functionality, so when it's been necessary I've resorted to creating a new object and then copying each property individually, but it always leaves me with the feeling that there is a better/more elegant way of handling the situation. How can I clone/deep-copy an object so that the cloned object can be modified without any changes being reflected in the original object?
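
    One classic approach (a sketch, and only one of several) is a serialization round-trip, which deep-copies the whole object graph as long as every type in it is marked [Serializable]:

        using System;
        using System.IO;
        using System.Runtime.Serialization.Formatters.Binary;

        public static class ObjectExtensions
        {
            // Deep-copies source by serializing it to memory and back.
            public static T DeepClone<T>(this T source)
            {
                var formatter = new BinaryFormatter();
                using (var stream = new MemoryStream())
                {
                    formatter.Serialize(stream, source);
                    stream.Position = 0;
                    return (T)formatter.Deserialize(stream);
                }
            }
        }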

    Read the article

  • Error calling webservice using JSONP + jquery with IE on remote domain

    - by Jay Heavner
    I have a .NET webservice on my app server that returns data formatted as JSONP. I have an HTML test client on that server which works fine in IE, Firefox, and Chrome. If I copy the same HTML to my workstation or deploy it to my webserver, it works with Firefox and Chrome, but in IE I get two JavaScript errors:

        Message: Object doesn't support this property or method
        Line: 1  Char: 1  Code: 0
        URI: http://mydomain/WebServices/LyrisProxy/Services/Lyris/JSONP/Lyris.asmx/AddUser?lyrisInstance="1"&email="[email protected]"&fullName="My Name"&lyrisList="listname"&format=json&callback=jsonp1274109819864&_=1274109829665

        Message: Member not found.
        Line: 59  Char: 209  Code: 0
        URI: http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js

    I'm kind of at a loss as to what to do to fix this.

    Read the article

  • Idiom vs. pattern

    - by Roger Pate
    In the context of programming, how do idioms differ from patterns? I use the terms interchangeably and normally follow the most popular way I've heard something called, or the way it was called most recently in the current conversation, e.g. "the copy-swap idiom" and "the singleton pattern". The best distinction I can come up with is that code meant to be copied almost literally is more often called a pattern, while code meant to be taken less literally is more often called an idiom, but even that isn't always true. This doesn't seem to be more than a stylistic or buzzword difference. Does that match your perception of how the terms are used? Is there a semantic difference?

    Read the article

  • How to add correct cancellation when downloading a file with the example in the new Parallel Programming samples

    - by Mike
    Hello everybody, I have downloaded the latest samples from the Parallel Programming team, and I don't succeed in correctly adding the possibility to cancel the download of a file. Here is the code I ended up with:

        var wreq = (HttpWebRequest)WebRequest.Create(uri);

        // Fire start event
        DownloadStarted(this, new DownloadStartedEventArgs(remoteFilePath));

        long totalBytes = 0;
        wreq.DownloadDataInFileAsync(tmpLocalFile,
            cancellationTokenSource.Token,
            allowResume,
            totalBytesAction => { totalBytes = totalBytesAction; },
            readBytes =>
            {
                Log.Debug("Progression : {0} / {1} => {2}%", readBytes, totalBytes, 100 * (double)readBytes / totalBytes);
                DownloadProgress(this, new DownloadProgressEventArgs(remoteFilePath, readBytes, totalBytes, (int)(100 * readBytes / totalBytes)));
            })
            .ContinueWith(antecedent =>
            {
                if (antecedent.IsFaulted)
                    Log.Debug(antecedent.Exception.Message);
                // Fire end event
                SetEndDownload(antecedent.IsCanceled, antecedent.Exception, tmpLocalFile, 0);
            }, cancellationTokenSource.Token);

    I want to fire an end event after the download is finished, hence the ContinueWith. I slightly changed the code of the samples to add the CancellationToken and the two delegates that report the size of the file to download and the progress of the download:

        return webRequest.GetResponseAsync()
            .ContinueWith(response =>
            {
                if (totalBytesAction != null)
                    totalBytesAction(response.Result.ContentLength);
                response.Result.GetResponseStream()
                    .WriteAllBytesAsync(filePath, ct, resumeDownload, progressAction)
                    .Wait(ct);
            }, ct);

    I had to add the call to the Wait function because, if I don't, the method exits and the end event is fired too early. Here are the modified extension methods (lots of code, apologies :p):

        public static Task WriteAllBytesAsync(this Stream stream, string filePath, CancellationToken ct, bool resumeDownload = false, Action<long> progressAction = null)
        {
            if (stream == null) throw new ArgumentNullException("stream");

            // Copy from the source stream to the memory stream and return the copied data
            return stream.CopyStreamToFileAsync(filePath, ct, resumeDownload, progressAction);
        }

        public static Task CopyStreamToFileAsync(this Stream source, string destinationPath, CancellationToken ct, bool resumeDownload = false, Action<long> progressAction = null)
        {
            if (source == null) throw new ArgumentNullException("source");
            if (destinationPath == null) throw new ArgumentNullException("destinationPath");

            // Open the output file for writing
            var destinationStream = FileAsync.OpenWrite(destinationPath);

            // Copy the source to the destination stream, then close the output file.
            return CopyStreamToStreamAsync(source, destinationStream, ct, progressAction).ContinueWith(t =>
            {
                var e = t.Exception;
                destinationStream.Close();
                if (e != null) throw e;
            }, ct, TaskContinuationOptions.ExecuteSynchronously, TaskScheduler.Current);
        }

        public static Task CopyStreamToStreamAsync(this Stream source, Stream destination, CancellationToken ct, Action<long> progressAction = null)
        {
            if (source == null) throw new ArgumentNullException("source");
            if (destination == null) throw new ArgumentNullException("destination");
            return Task.Factory.Iterate(CopyStreamIterator(source, destination, ct, progressAction));
        }

        private static IEnumerable<Task> CopyStreamIterator(Stream input, Stream output, CancellationToken ct, Action<long> progressAction = null)
        {
            // Create two buffers. One will be used for the current read operation and one for the
            // current write operation. We'll continually swap back and forth between them.
            byte[][] buffers = new byte[2][] { new byte[BUFFER_SIZE], new byte[BUFFER_SIZE] };
            int filledBufferNum = 0;
            Task writeTask = null;
            int readBytes = 0;

            // Until there's no more data to be read, or cancellation
            while (true)
            {
                ct.ThrowIfCancellationRequested();

                // Read from the input asynchronously
                var readTask = input.ReadAsync(buffers[filledBufferNum], 0, buffers[filledBufferNum].Length);

                // If we have no pending write operations, just yield until the read operation has
                // completed. If we have both a pending read and a pending write, yield until both
                // the read and the write have completed.
                yield return writeTask == null
                    ? readTask
                    : Task.Factory.ContinueWhenAll(new[] { readTask, writeTask }, tasks => tasks.PropagateExceptions());

                // If no data was read, there's nothing more to do.
                if (readTask.Result <= 0) break;

                readBytes += readTask.Result;
                if (progressAction != null)
                    progressAction(readBytes);

                // Otherwise, write the read data out to the file
                writeTask = output.WriteAsync(buffers[filledBufferNum], 0, readTask.Result);

                // Swap buffers
                filledBufferNum ^= 1;
            }
        }

    So basically, at the end of the chain of called methods, I let the CancellationToken throw an OperationCanceledException if a cancel has been requested. What I hoped for was to get IsFaulted == true in the calling code and to fire the end event with the cancellation flags and the correct exception. But what I get is an unhandled exception on the line

        response.Result.GetResponseStream().WriteAllBytesAsync(filePath, ct, resumeDownload, progressAction).Wait(ct);

    telling me that I don't catch an AggregateException. I've tried various things, but I can't make the whole thing work properly. Have any of you played with this library enough to help me? Thanks in advance, Mike
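
    A hedged note on the crash itself: Task.Wait surfaces any exception from the awaited task, including the OperationCanceledException raised by ThrowIfCancellationRequested, wrapped in an AggregateException, so the Wait(ct) line has to observe it for the outer task to end up in the expected state. A sketch of the idea (not a verified fix):

        .ContinueWith(response =>
        {
            try
            {
                response.Result.GetResponseStream()
                    .WriteAllBytesAsync(filePath, ct, resumeDownload, progressAction)
                    .Wait(ct);
            }
            catch (AggregateException ae)
            {
                // Mark cancellations as handled, rethrow anything else,
                // then re-raise the cancellation on this task's own token.
                ae.Handle(e => e is OperationCanceledException);
                ct.ThrowIfCancellationRequested();
            }
        }, ct);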

    Read the article

  • How to package .Net framework in Visual Studio project?

    - by raj.tiwari
    I have created a C#/.NET application using Visual Studio. I have also created an installer project that puts out two files:

      - an MSI file
      - a Setup.exe file

    In my installer project properties I have set up .NET 3.5 as a prerequisite. What I would like my installer to do is as follows:

      - put out a single file (MSI/exe/whatever) that also includes the .NET framework prerequisite
      - check whether the .NET framework is installed on the target machine; if not, install it from its own bundled copy

    Right now my installer sends people to the web to get .NET. This is not the user experience I want. Thanks for your help. -Raj

    Read the article

  • bjam with visual studio 2010

    - by ra170
    OK, so I ran into problems with Boost under Visual Studio 2010, so I decided to rebuild it with bjam, like this:

        bjam --toolset=msvc-10.0 --build-type=complete

    After running bjam (successfully?) it created a new directory under boost_1_42_0 called bin.v2. Inside bin.v2 is a directory called libs. Two issues:

      1. There are far fewer libs under the new directory (about 13); the old libs directory has 88. Is it supposed to be like that, or did something fail?
      2. The structure is somewhat different too. What do I do with this exactly? Meaning, do I copy it over to the original libs, delete the old libs, or try rebuilding with different flags?
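
    For what it's worth, bin.v2 holds bjam's intermediate build tree; the finished libraries normally land in stage\lib when building with the stage target (a sketch):

        bjam --toolset=msvc-10.0 --build-type=complete stage
        rem finished .lib/.dll files should appear under boost_1_42_0\stage\lib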

    Read the article

  • Some problem running NUnit

    - by prosseek
    I have NUnit installed in this directory:

        C:\Program Files\NUnit 2.5.5\bin\net-2.0

    When I try to run my unit tests (mut.dll) from some random directory, I get the following error; I have to copy mut.dll under the NUnit directory in order to run it:

        ProcessModel: Default
        DomainUsage: Single
        Execution Runtime: net-2.0
        Could not load file or assembly 'nunit.framework, Version=2.5.5.10112, Culture=neutral, PublicKeyToken=96d09a1eb7f44a77' or one of its dependencies. The system cannot find the file specified.

    What's wrong? Is there anything I have to configure to run NUnit under any directory?

    Read the article

  • Smart pointers and polymorphism

    - by qwerty
    Hello. I implemented reference-counting pointers (called SP in the example) and I'm having problems with polymorphism, which I think I shouldn't have. In the following code:

        SP<BaseClass> foo()
        {
            // Some logic...
            SP<DerivedClass> retPtr = new DerivedClass();
            return retPtr;
        }

    DerivedClass inherits from BaseClass. With normal pointers this would have worked, but with the smart pointers it says "cannot convert from 'SP<T>' to 'const SP<T>&'", and I think it refers to the copy constructor of the smart pointer. How do I allow this kind of polymorphism with a reference-counting pointer? I'd appreciate code samples, because obviously I'm doing something wrong here if I'm having this problem. Thanks! :) (P.S. Please don't tell me to use the standard library's smart pointers; that's impossible at the moment.)
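
    The usual fix (a sketch, assuming SP stores a raw pointer and a shared count; the member names are illustrative) is a templated converting constructor, so SP<Derived> can initialize SP<Base> whenever the raw pointers convert:

        template <typename T>
        class SP {
            template <typename U> friend class SP;  // let SP<T> read SP<U>'s members
        public:
            SP(T* p = 0) : ptr_(p), count_(new int(1)) {}  // implicit, as in the question

            SP(const SP& other) : ptr_(other.ptr_), count_(other.count_) { ++*count_; }

            // Converting constructor: compiles whenever U* converts to T*,
            // mirroring raw-pointer polymorphism (e.g. SP<DerivedClass> -> SP<BaseClass>).
            template <typename U>
            SP(const SP<U>& other) : ptr_(other.ptr_), count_(other.count_) { ++*count_; }

            // Note: BaseClass needs a virtual destructor for deletion through SP<BaseClass>.
            ~SP() { if (--*count_ == 0) { delete ptr_; delete count_; } }

            T* operator->() const { return ptr_; }
            T& operator*() const { return *ptr_; }

        private:
            SP& operator=(const SP&);  // assignment omitted in this sketch
            T* ptr_;
            int* count_;
        };

        // With the converting constructor, the question's foo() compiles as written:
        //     SP<BaseClass> foo() { SP<DerivedClass> p(new DerivedClass()); return p; }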

    Read the article
