Search Results

Search found 11915 results on 477 pages for 'copy'.

  • Malloc corrupting already malloc'd memory in C

    - by Kyte
    I'm currently helping a friend debug a program of his, which includes linked lists. His list structure is pretty simple:

        typedef struct nodo {
            int cantUnos;
            char* numBin;
            struct nodo* sig;
        } Nodo;

    We've got the following code snippet:

        void insNodo(Nodo** lista, char* auxBin, int auxCantUnos) {
            printf("*******Insertando\n");
            int i;
                if (*lista) printf("DecInt*%p->%p\n", *lista, (*lista)->sig);
            Nodo* insert = (Nodo*)malloc(sizeof(Nodo*));
                if (*lista) printf("Malloc*%p->%p\n", *lista, (*lista)->sig);
            insert->cantUnos = auxCantUnos;
            insert->numBin = (char*)malloc(strlen(auxBin)*sizeof(char));
            for (i = 0; i < strlen(auxBin); i++)
                insert->numBin[i] = auxBin[i];
            insert->numBin[i] = '\0';
            insert->sig = NULL;
            Nodo* aux;
            [etc]

    (The lines with extra indentation were my addition, for debug purposes.) This yields the following:

        *******Insertando
        DecInt*00341098->00000000
        Malloc*00341098->2832B6EE

    (*lista)->sig is previously and deliberately set to NULL, which checks out until here, and I'd already fixed a potential buffer overflow (he'd forgotten to copy the NUL terminator into insert->numBin). I can't think of a single reason why this would happen, nor do I have any idea what else I should provide as further info. (I'm compiling on the latest stable MinGW under fully patched Windows 7; my friend uses MinGW under Windows XP. On my machine, at least, it only happens when GDB's not attached.) Any ideas? Suggestions? Possible exorcism techniques? (The current hack is copying the sig pointer to a temp variable and restoring it after malloc. It breaks anyway: it turns out the second malloc corrupts it too. Interestingly enough, it resets sig to the exact same value as the first one.)
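    Worth noting when reading the snippet above (a diagnosis sketch, not a confirmed fix): malloc(sizeof(Nodo*)) allocates room for a pointer (4 or 8 bytes), not for a whole Nodo, so writes through insert land past the end of the block and can trash the allocator's bookkeeping for neighbouring allocations - exactly the "a later malloc clobbers my pointer" symptom described. The string buffer is also one byte short for the terminator. The corrected allocations would look like this:

        Nodo* insert = malloc(sizeof(Nodo));         /* the struct, not a pointer to it */
        if (insert == NULL) { /* handle allocation failure */ }

        insert->numBin = malloc(strlen(auxBin) + 1); /* +1 for the '\0' terminator */
        if (insert->numBin != NULL)
            strcpy(insert->numBin, auxBin);          /* copies the terminator too */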

  • How to get stack trace information for logging in production when using the GAC

    - by Jonathan Parker
    I would like to get stack trace (file name and line number) information for logging exceptions etc. in a production environment. The DLLs are installed in the GAC. Is there any way to do this? This article says the following about putting PDB files in the GAC:

        You can spot these easily because they will say you need to copy the debug symbols (.pdb file) to the GAC. In and of itself, that will not work.

    I know the article refers to debugging with VS, but I thought it might apply to logging the stack trace also. I've followed the instructions in the answer to this question, except for unchecking "Optimize code", which they said was optional. I copied the DLLs and PDBs into the GAC, but I'm still not getting the stack trace information. Here's what I get in the log file for the stack trace:

        OnAuthenticate at offset 161 in file:line:column <filename unknown>:0:0
        ValidateUser at offset 427 in file:line:column <filename unknown>:0:0
        LogException at offset 218 in file:line:column <filename unknown>:0:0

    I'm using NLog. My NLog layout is:

        layout="${date:format=s}|${level}|${callsite}|${identity}|${message}|${stacktrace:format=Raw}"

    ${stacktrace:format=Raw} being the relevant part.

  • rails foreman does not load all my services on start

    - by Rubytastic
    Rails foreman does not load all of the services defined in my Procfile. The Procfile:

        redis: redis-server
        resque: bundle exec rake resque:start &&> log/resque_worker_queue.log
        privpub: bundle exec rackup private_pub.ru -s thin -E production & &> log/private_pub.log
        sunspot: bundle exec rake sunspot:solr:run

    I always have to start all of them manually by copy-pasting the commands into a terminal; foreman start does not work. What am I missing? This is the foreman output:

        12:35:40 privpub.1 | process terminated
        12:35:40 system    | sending SIGTERM to all processes
        12:35:40 system    | sending SIGTERM to pid 4375
        12:35:40 redis.1   | [4375] 02 Jun 12:35:40 # Received SIGTERM, scheduling shutdown...
        12:35:40 redis.1   | [4375] 02 Jun 12:35:40 # User requested shutdown...
        12:35:40 redis.1   | [4375] 02 Jun 12:35:40 * Saving the final RDB snapshot before exiting.
        12:35:40 redis.1   | [4375] 02 Jun 12:35:40 * DB saved on disk
        12:35:40 redis.1   | [4375] 02 Jun 12:35:40 # Redis is now ready to exit, bye bye...
        12:35:40 system    | sending SIGTERM to pid 4376
        12:35:40 resque.1  | rake aborted!
        12:35:40 resque.1  | SIGTERM
        12:35:40 resque.1  |
        12:35:40 resque.1  | (See full trace by running task with --trace)
        12:35:40 system    | sending SIGTERM to pid 4378
        12:35:40 sunspot.1 | rake aborted!
        12:35:40 sunspot.1 | SIGTERM
        12:35:40 sunspot.1 |
        12:35:40 sunspot.1 | (See full trace by running task with --trace)
        12:35:40 sunspot.1 | process terminated
        12:35:40 resque.1  | process terminated
        12:35:40 redis.1   | process terminated
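    A guess worth checking, based on the output above: the "&&> log/..." and "& &> log/..." fragments look like broken shell redirections. The lone "&" backgrounds the command, so foreman sees that process exit immediately (matching the "privpub.1 | process terminated" line) and then tears every other process down, and "&>" is a bash-ism that a plain /bin/sh won't honour. Foreman expects each entry to stay in the foreground and captures stdout itself, so a Procfile along these lines may behave better (a sketch, assuming the rake tasks exist as named):

        redis: redis-server
        resque: bundle exec rake resque:start >> log/resque_worker_queue.log 2>&1
        privpub: bundle exec rackup private_pub.ru -s thin -E production >> log/private_pub.log 2>&1
        sunspot: bundle exec rake sunspot:solr:run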

  • C#, ASP.NET: Uploading files to a file server

    - by Imcl
    Using the link below, I wrote code for my application, but I am not able to get it right. Please refer to the link and help me out with it: http://stackoverflow.com/questions/263518/c-uploading-files-to-file-server

    The following is my code:

        protected void Button1_Click(object sender, EventArgs e)
        {
            filePath = FileUpload1.FileName;
            try
            {
                WebClient client = new WebClient();
                NetworkCredential nc = new NetworkCredential(uName, password);
                Uri addy = new Uri("\\\\192.168.1.3\\upload\\");
                client.Credentials = nc;
                byte[] arrReturn = client.UploadFile(addy, filePath);
                arrReturn = client.UploadFile(addy, filePath);
                Console.WriteLine(arrReturn.ToString());
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex.Message);
            }
        }

    I also used:

        File.Copy(filePath, "\\\\192.168.1.3\\upload\\");

    The following line doesn't execute:

        byte[] arrReturn = client.UploadFile(addy, filePath);

    I tried changing it to:

        byte[] arrReturn = client.UploadFile("\\\\192.168.1.3\\upload\\", filePath);

    It still doesn't work... Any solution? I basically want to transfer a file from the client to the file storage server without actually logging in to the server, so that the client cannot access the storage location on the server directly...
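    Two things stand out in the code above: both WebClient.UploadFile and File.Copy are being handed a bare directory, while each expects the destination to include the target file name, and WebClient.UploadFile is really meant for posting to an HTTP endpoint that accepts uploads rather than for copying onto a UNC share. If the web server itself may write to the share, a simpler sketch (assuming the app-pool identity, or configured impersonation, has write access, and using System.IO for Path):

        // Save the uploaded file straight onto the share, keeping the client
        // away from the storage location itself.
        string destination = Path.Combine(@"\\192.168.1.3\upload",
                                          Path.GetFileName(FileUpload1.FileName));
        FileUpload1.SaveAs(destination);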

  • C++ design question on traversing binary trees

    - by user231536
    I have a binary tree T which I would like to copy to another tree. Suppose I have a visit method that gets evaluated at every node:

        struct visit {
            virtual void operator()(node* n) = 0;
        };

    and I have a visitor algorithm:

        void visitor(node* t, visit& v)
        {
            // do a preorder traversal using stack or recursion
            if (!t) return;
            v(t);
            visitor(t->left, v);
            visitor(t->right, v);
        }

    I have 2 questions:

    1. I settled on the functor-based approach because I see that Boost's graph library does this (vertex visitors), and I tend to repeat the same traversal code while doing different things at each node. Is this a good design for getting rid of the duplicated code? What alternative designs are there?

    2. How do I use this to create a new binary tree from an existing one? I can keep a stack on the visit functor if I want, but then it gets tied to the algorithm in visitor. How would I incorporate postorder traversals here? Another functor class?
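    For the copying question (question 2), it helps to notice that a copy is naturally a postorder operation: a node's copy can only be assembled after its children's copies exist. A minimal sketch of that shape, with a node layout assumed for illustration:

        struct node {
            int value;
            node* left;
            node* right;
        };

        // Clone built bottom-up: copy the children first (postorder),
        // then combine them under a copy of the current node.
        node* clone(const node* t)
        {
            if (!t) return nullptr;
            node* l = clone(t->left);
            node* r = clone(t->right);
            return new node{t->value, l, r};
        }

    A visitor-based version would keep an explicit stack of subtree copies inside the functor and, on the postorder visit of each internal node, pop two children and push the combined copy - which is why a postorder hook (separate pre/post methods on visit, or a second functor class as suggested) is the piece the current preorder-only visitor is missing.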

  • Improve the speed of my own debug visualizer for Delphi 2010

    - by netcodecz
    I wrote a Delphi debug visualizer for TDataSet to display the values of the current row; source + screenshot: http://delphi.netcode.cz/text/tdataset-debug-visualizer.aspx . It works, but it is very slow. I did some optimization (in how the field names are fetched), but it still takes 10 seconds to show only 20 fields, which is very bad. The main problem seems to be the slow IOTAThread90.Evaluate used by the main code shown below; this call costs most of the time, and the line marked ** about 80% of it. FExpression is the name of the TDataSet in the debugged code.

        procedure TDataSetViewerFrame.mFillData;
        var
          iCount: Integer;
          I: Integer;
        //  sw: TStopwatch;
          s: string;
        begin
        //  sw := TStopwatch.StartNew;
          iCount := StrToIntDef(Evaluate(FExpression + '.Fields.Count'), 0);
          for I := 0 to iCount - 1 do
          begin
            s := s + Format('%s.Fields[%d].FieldName+'',''+', [FExpression, I]);
        //    FFields.Add(Evaluate(Format('%s.Fields[%d].FieldName', [FExpression, I])));
            FValues.Add(Evaluate(Format('%s.Fields[%d].Value', [FExpression, I])));  //**
          end;
          if s <> '' then
            Delete(s, Length(s) - 4, 5);
          s := Evaluate(s);
          s := Copy(s, 2, Length(s) - 2);
          FFields.CommaText := s;
        {  sw.Stop;
           s := sw.Elapsed;
           Application.MessageBox(PChar(s), ''); }
        end;

    Now I have no idea how to improve the performance.
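    Since the code already batches all the field names into a single Evaluate call, the same trick should apply to the ** line: build one expression that concatenates every field's value, giving one debugger round-trip instead of one per field. A sketch reusing the names above (the '|' separator is my choice and must not occur in the data; VarToStr assumes the debuggee links the Variants unit):

        s := '';
        for I := 0 to iCount - 1 do
          s := s + Format('VarToStr(%s.Fields[%d].Value)+''|''+', [FExpression, I]);
        if s <> '' then
          Delete(s, Length(s) - 4, 5);       // drop the trailing +'|'+
        s := Evaluate(s);
        s := Copy(s, 2, Length(s) - 2);      // strip the quotes around the result
        // split s on '|' and add the pieces to FValues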

  • Source Control - Xcode - Visual Studio 2005/2008/2010

    - by Mick Walker
    My apologies if this has been asked before. I wasn't quite sure if this question should be asked on a programming forum, as it relates more to the programming environment than to a particular technology, so please accept my (double) apologies if I am posting this in the wrong place; my logic was that if it affects the code I write, then this is the place for it.

    At home, I do a lot of my development on a Mac Pro. I do development for the Mac, iPhone and Windows on this machine (Xcode & Visual Studio - multiple versions installed in Boot Camp, but generally I run it via Parallels). When visiting a client, I have a similar setup on my MacBook Pro. What I want is a source control solution to install on the Mac Pro that will support both Xcode and multiple versions of Visual Studio, so that when I visit a client I can simply grab the latest copy from source control via the MacBook Pro. While visiting the client, he or she may suggest changes, and minor ones I would tend to make on site, so I need the ability to merge any modified code back into the trunk of the project/solution when I return home. At the moment I am using no source control at all, and rely on simply copying folders and overwriting them when I return from a client - that's my 'merge'! I was wondering if anyone had any ideas for a source control provider I could use which would support both Windows and Mac development environments, and is cheap (free would be better).

  • Python sudoku programming

    - by trevor
    I need your help on this. I have this program and I must finish it. It's missing 3 parts. Here is the program I'm working with:

        import copy

        def display(A):
            if A:
                for i in range(9):
                    for j in range(9):
                        if type(A[i][j]) == type([]):
                            print A[i][j][0],
                        else:
                            print A[i][j],
                    print
                print
            else:
                print A

        def has_conflict(A):
            for i in range(9):
                for j in range(9):
                    for (x, y) in get_neighbors(i, j):
                        if len(A[i][j]) == 1 and A[i][j] == A[x][y]:
                            return True
            return False

        # HERE ARE THE PARTS THAT REQUIRE HELP!!!!
        def get_neighbors(x, y):
            return []

        def update(A, i, j, value):
            return []

        def solve(A):
            return []
        # END OF THE PARTS THAT REQUIRE HELP!!!!

        A = []
        infile = open('puzzle1.txt', 'r')
        for i in range(9):
            A += [[]]
            for j in range(9):
                num = int(infile.read(2))
                if num:
                    A[i] += [[num]]
                else:
                    A[i] += [[1, 2, 3, 4, 5, 6, 7, 8, 9]]

        for i in range(9):
            for j in range(9):
                if len(A[i][j]) == 1:
                    A = update(A, i, j, A[i][j][0])
                if A == []:
                    break
            if A == []:
                break

        if A <> []:
            A = solve(A)
        display(A)

    I need to implement the three parts marked in the code, specifically get_neighbors(), update() and solve(). Thank you for your time and help.
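    Of the three missing pieces, get_neighbors() is the most mechanical: a cell's neighbours in sudoku are the other cells in its row, its column and its 3x3 box. A sketch compatible with the Python 2 code above (my implementation, not part of the assignment's starter code):

        def get_neighbors(x, y):
            # Cells sharing a row, column or 3x3 box with (x, y), excluding (x, y).
            neighbors = set()
            for k in range(9):
                neighbors.add((x, k))            # same row
                neighbors.add((k, y))            # same column
            bx, by = 3 * (x // 3), 3 * (y // 3)  # top-left corner of the box
            for i in range(bx, bx + 3):
                for j in range(by, by + 3):
                    neighbors.add((i, j))
            neighbors.discard((x, y))
            return list(neighbors)

    update() would then remove value from every neighbour's candidate list (recursing when a list drops to a single entry), and solve() is the usual backtracking search over the cell with the fewest remaining candidates.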

  • Could not load file or assembly 'GMap.NET.Core' or one of its dependencies. An attempt was made to load a program with an incorrect format.

    - by Sam M
    I have a WCF service application in VS2010. My local machine is a 32-bit OS, whereas the server is 64-bit. There are around 6 services in my solution. I am able to host the application successfully on IIS on my local machine, and it works fine. But when I try to host the service application on the server, I get the error below:

        Could not load file or assembly 'GMap.NET.Core' or one of its dependencies. An attempt was made to load a program with an incorrect format.

    I do have a reference to GMap.NET.Core in my solution. I have tried setting the platform target in my solution to Any CPU. In the application pool I have set Enable 32-Bit Applications to True, and I also set Copy Local to True in my solution before publishing. When I run the source through my solution I don't get any error, and the solution builds successfully. What else can I try to get my services hosted successfully on the server so they can be accessed through my application?

  • How do you handle key down events on Android? I am having issues making it work.

    - by user279112
    For an Android program, I am having trouble handling key-down and key-up events, and the problem I am having with them can almost certainly be generalized to any sort of user input event. I am using Lunar Lander as one of my main learning devices as I make my first meaningful program, and I noticed that it overrides onKeyDown to receive key-down events and has it call one of its own methods, doKeyDown. But when I tried to implement a very small version of my own onKeyDown override and the actual handler it calls, it didn't work. I would copy and paste my implementations of those two methods here, but that doesn't seem to be the problem: I ran the debugger and noticed that they were not getting called - at all. The same goes for my implementations of onKeyUp and the handler it calls. Something is a little weird here, and the Android documentation didn't help when I looked at it. I thought that if you had an override for onKeyDown, then when a key was pressed during execution of the program, onKeyDown would be called as soon as reasonably possible. End of story. But apparently there's something more to it. Apparently you have to do something else somewhere - possibly in the XML when defining the layout or something - to make it work. But I do not know what, and I could not find it in the documentation. What's the secret to this? Thanks!
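    One detail that's easy to miss in samples like Lunar Lander: a View only receives onKeyDown while it is focusable and actually holds focus (the sample's view arranges this itself), and an Activity's own onKeyDown only sees keys that no focused view consumed. A sketch of the usual wiring for a custom view - the class name is hypothetical:

        import android.content.Context;
        import android.util.AttributeSet;
        import android.view.KeyEvent;
        import android.view.View;

        public class GameView extends View {
            public GameView(Context context, AttributeSet attrs) {
                super(context, attrs);
                setFocusable(true);                // allow key focus
                setFocusableInTouchMode(true);     // keep focus in touch mode
                requestFocus();                    // take focus up front
            }

            @Override
            public boolean onKeyDown(int keyCode, KeyEvent event) {
                if (keyCode == KeyEvent.KEYCODE_DPAD_UP) {
                    // react to the key press here
                    return true;                   // event consumed
                }
                return super.onKeyDown(keyCode, event);
            }
        }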

  • Help in saving data in the GP original table

    - by Rahul khanna
    Hi all, I am new to Microsoft Great Plains. My concern here is saving the scroll-window data to the original SQL table. I select values from a lookup window, display them in a scroll window and save them to a temp table; when I query the temp table I can see the data is saved there. My goal is to copy the data from the temp table to the original table when the Save button is clicked. For that I have written:

        get first table is_customer_temp;
        while err() = OKAY do
            copy from table is_customer_temp to table is_customer;
            save table is_customer;
            get next table is_customer_temp;
        end while;

    But when I look at the back end, nothing is stored in the original table. Please can somebody help me?

    My second problem: I want to delete the selected row from the scroll window using a checkbox. If I don't use a checkbox and simply want to delete a row, I can do it the simple way - write the delete code in the scroll window's line-delete event and call that script from the row-delete button's event; it deletes the particular row from the scroll. But how can I do it if I use a checkbox in the scroll window? Please can somebody help me.

    Thanks, Rahul

  • Singleton Roles in Moose

    - by mjn12
    I am attempting to write a singleton role using Perl and Moose. I understand a MooseX::Singleton module is available, but there is always resistance to requiring another CPAN module for our project. After trying this and having a little trouble, I would like to understand WHY my method is not working. The singleton role I have written is as follows:

        package Singleton;
        use Moose::Role;

        my $_singleInstance;

        around 'new' => sub {
            my $orig  = shift;
            my $class = shift;
            if (not defined $_singleInstance) {
                $_singleInstance = $class->$orig(@_);
            }
            return $_singleInstance;
        };

        sub getInstance {
            return __PACKAGE__->new();
        }

        1;

    This appears to work fine when only one class uses the Singleton role. However, when two classes (ClassA and ClassB, for example) both consume the role, it appears that they both refer to a shared $_singleInstance variable. If I call ClassA->getInstance, it returns a reference to a ClassA object. If I call ClassB->getInstance some time later in the same script, it returns a reference to an object of type ClassA (even though I clearly called the getInstance method on ClassB). If I don't use a role and instead copy and paste the code from the Singleton role into ClassA and ClassB, it works fine. What's going on here?
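    The described behaviour is consistent with both consumers closing over the very same lexical: a role's body runs once, so there is exactly one $_singleInstance shared by every class that consumes the role (and getInstance calling __PACKAGE__->new resolves to Singleton, not to the consuming class). A sketch of a per-class variant, keyed on the class name - my adaptation, not MooseX::Singleton:

        package Singleton;
        use Moose::Role;

        my %instances;    # one slot per consuming class, not one shared scalar

        around 'new' => sub {
            my $orig  = shift;
            my $class = shift;
            $instances{$class} = $class->$orig(@_)
                unless defined $instances{$class};
            return $instances{$class};
        };

        sub getInstance {
            my $class = shift;        # call as ClassA->getInstance
            return $class->new();
        }

        1;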

  • Adding Functions to an Implementation of Vector

    - by Meursault
    I have this implementation of vector that I've been working on for a few days using examples from a textbook:

        // Vector.h
        #include <iostream>
        #include <string>
        #include <cassert>
        #include <algorithm>
        #include <cstring>

        using namespace std;

        template <class T>
        class Vector
        {
        public:
            typedef T* iterator;

            Vector();
            Vector(unsigned int size);
            Vector(unsigned int size, const T& initial);
            Vector(const Vector<T>& v);            // copy constructor
            ~Vector();

            unsigned int capacity() const;         // return capacity of vector (in elements)
            unsigned int size() const;             // return the number of elements in the vector
            bool empty() const;
            iterator begin();                      // return an iterator pointing to the first element
            iterator end();                        // return an iterator pointing to one past the last element
            T& front();                            // return a reference to the first element
            T& back();                             // return a reference to the last element
            void push_back(const T& value);        // add a new element
            void pop_back();                       // remove the last element
            void reserve(unsigned int capacity);   // adjust capacity
            void resize(unsigned int size);        // adjust size
            void erase(unsigned int index);        // delete an element from the vector
            T& operator[](unsigned int index);     // return reference to numbered element
            Vector<T>& operator=(const Vector<T>&);

        private:
            unsigned int my_size;
            unsigned int my_capacity;
            T* buffer;
        };

        template <class T>
        Vector<T>::Vector()
        {
            my_capacity = 0;
            my_size = 0;
            buffer = 0;
        }

        template <class T>
        Vector<T>::Vector(const Vector<T>& v)
        {
            my_size = v.my_size;
            my_capacity = v.my_capacity;
            buffer = new T[my_size];
            for (unsigned int i = 0; i < my_size; i++)
                buffer[i] = v.buffer[i];
        }

        template <class T>
        Vector<T>::Vector(unsigned int size)
        {
            my_capacity = size;
            my_size = size;
            buffer = new T[size];
        }

        template <class T>
        Vector<T>::Vector(unsigned int size, const T& initial)
        {
            my_size = size;
            my_capacity = size;
            buffer = new T[size];
            for (unsigned int i = 0; i < size; i++)
                buffer[i] = initial;
        }

        template <class T>
        Vector<T>& Vector<T>::operator=(const Vector<T>& v)
        {
            delete[] buffer;
            my_size = v.my_size;
            my_capacity = v.my_capacity;
            buffer = new T[my_size];
            for (unsigned int i = 0; i < my_size; i++)
                buffer[i] = v.buffer[i];
            return *this;
        }

        template <class T>
        typename Vector<T>::iterator Vector<T>::begin()
        {
            return buffer;
        }

        template <class T>
        typename Vector<T>::iterator Vector<T>::end()
        {
            return buffer + size();
        }

        template <class T>
        T& Vector<T>::front()
        {
            return buffer[0];
        }

        template <class T>
        T& Vector<T>::back()
        {
            return buffer[my_size - 1];
        }

        template <class T>
        void Vector<T>::push_back(const T& v)
        {
            if (my_size >= my_capacity)
                reserve(my_capacity + 5);
            buffer[my_size++] = v;
        }

        template <class T>
        void Vector<T>::pop_back()
        {
            my_size--;
        }

        template <class T>
        void Vector<T>::reserve(unsigned int capacity)
        {
            if (buffer == 0)
            {
                my_size = 0;
                my_capacity = 0;
            }
            if (capacity <= my_capacity)
                return;
            T* new_buffer = new T[capacity];
            assert(new_buffer);
            copy(buffer, buffer + my_size, new_buffer);
            my_capacity = capacity;
            delete[] buffer;
            buffer = new_buffer;
        }

        template <class T>
        unsigned int Vector<T>::size() const
        {
            return my_size;
        }

        template <class T>
        void Vector<T>::resize(unsigned int size)
        {
            reserve(size);
            my_size = size;
        }

        template <class T>
        T& Vector<T>::operator[](unsigned int index)
        {
            return buffer[index];
        }

        template <class T>
        unsigned int Vector<T>::capacity() const
        {
            return my_capacity;
        }

        template <class T>
        Vector<T>::~Vector()
        {
            delete[] buffer;
        }

        template <class T>
        void Vector<T>::erase(unsigned int index)
        {
        }

        int main()
        {
            Vector<int> v;
            v.reserve(2);
            assert(v.capacity() == 2);

            Vector<string> v1(2);
            assert(v1.capacity() == 2);
            assert(v1.size() == 2);
            assert(v1[0] == "");
            assert(v1[1] == "");
            v1[0] = "hi";
            assert(v1[0] == "hi");

            Vector<int> v2(2, 7);
            assert(v2[1] == 7);
            Vector<int> v10(v2);
            assert(v10[1] == 7);

            Vector<string> v3(2, "hello");
            assert(v3.size() == 2);
            assert(v3.capacity() == 2);
            assert(v3[0] == "hello");
            assert(v3[1] == "hello");
            v3.resize(1);
            assert(v3.size() == 1);
            assert(v3[0] == "hello");

            Vector<string> v4 = v3;
            assert(v4.size() == 1);
            assert(v4[0] == v3[0]);
            v3[0] = "test";
            assert(v4[0] != v3[0]);
            assert(v4[0] == "hello");
            v3.pop_back();
            assert(v3.size() == 0);

            Vector<int> v5(7, 9);
            Vector<int>::iterator it = v5.begin();
            while (it != v5.end())
            {
                assert(*it == 9);
                ++it;
            }

            Vector<int> v6;
            v6.push_back(100);
            assert(v6.size() == 1);
            assert(v6[0] == 100);
            v6.push_back(101);
            assert(v6.size() == 2);
            assert(v6[0] == 100);
            v6.push_back(101);

            cout << "SUCCESS\n";
        }

    So far it works pretty well, but I want to add a couple of functions I can't find examples for: a SWAP function that would look at two elements of the vector and switch their values, and an ERASE function that would delete a specific value or a range of values in the vector. How should I begin implementing the two extra functions?
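    Both requested additions can be written against the existing buffer/my_size members: swap is a three-assignment exchange, and erase shifts the tail left and shrinks my_size. A sketch under the class's existing conventions (the single-argument version fills in the empty erase above; swap and the range overload of erase would also need declarations added to the class body):

        template <class T>
        void Vector<T>::swap(unsigned int i, unsigned int j)
        {
            assert(i < my_size && j < my_size);
            T tmp = buffer[i];
            buffer[i] = buffer[j];
            buffer[j] = tmp;
        }

        // Erase the single element at 'index', shifting the tail left by one.
        template <class T>
        void Vector<T>::erase(unsigned int index)
        {
            assert(index < my_size);
            for (unsigned int k = index; k + 1 < my_size; k++)
                buffer[k] = buffer[k + 1];
            my_size--;
        }

        // Erase the half-open range [first, last).
        template <class T>
        void Vector<T>::erase(unsigned int first, unsigned int last)
        {
            assert(first <= last && last <= my_size);
            unsigned int width = last - first;
            for (unsigned int k = first; k + width < my_size; k++)
                buffer[k] = buffer[k + width];
            my_size -= width;
        }

    Erasing "a specific value" then reduces to finding the value's index first and calling the single-argument erase with it.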

  • Changing App.config at Runtime

    - by born to hula
    I'm writing a test WinForms / C# / .NET 3.5 application for the system we're developing, and we ran into the need to switch between .config files at runtime, but this is turning out to be a nightmare. Here's the scene: the WinForms application is aimed at testing a web app divided into 5 subsystems. The test process works with messages being sent between the subsystems, and for this process to be successful each subsystem has to have its own .config file. For my test application I wrote 5 separate configuration files. I wish I were able to switch between these 5 files at runtime, but the problem is: I can programmatically edit the application's .config file numerous times, but the changes will only take effect once. I've been searching a long time for a way to address this problem, but I still haven't been successful. I know the problem definition may be a bit confusing, but I would really appreciate it if someone helped me. Thanks in advance!

    --- UPDATE 01-06-10 ---

    There's something I didn't mention before. Originally, our system is a web application with WCF calls between each subsystem. For performance-testing reasons (we're using ANTS 4), we had to create a local copy of the assemblies and reference them from the test project. It may sound a bit wrong, but we couldn't find a satisfying way to measure the performance of a remote application.

    --- END UPDATE ---

    Here's what I'm doing:

        public void UpdateAppSettings(string key, string value)
        {
            XmlDocument xmlDoc = new XmlDocument();
            xmlDoc.Load(AppDomain.CurrentDomain.SetupInformation.ConfigurationFile);
            foreach (XmlElement item in xmlDoc.DocumentElement)
            {
                foreach (XmlNode node in item.ChildNodes)
                {
                    if (node.Name == key)
                    {
                        node.Attributes[0].Value = value;
                        break;
                    }
                }
            }
            xmlDoc.Save(AppDomain.CurrentDomain.SetupInformation.ConfigurationFile);
            System.Configuration.ConfigurationManager.RefreshSection("section/subSection");
        }
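    One likely reason the edits "only take effect once" is that ConfigurationManager caches configuration aggressively: RefreshSection only invalidates the one named section, and values read before the refresh stay stale. An alternative that sidesteps rewriting the process's own file is to open each subsystem's configuration explicitly; a sketch using the stock ExeConfigurationFileMap API (the file names are mine):

        using System.Configuration;   // reference System.Configuration.dll

        static Configuration LoadConfig(string path)
        {
            // Open an arbitrary .config file without touching the
            // application's own configuration file.
            var map = new ExeConfigurationFileMap { ExeConfigFilename = path };
            return ConfigurationManager.OpenMappedExeConfiguration(
                map, ConfigurationUserLevel.None);
        }

        // Usage: read from the chosen subsystem's file on demand.
        // var cfg = LoadConfig("subsystemA.config");
        // string endpoint = cfg.AppSettings.Settings["Endpoint"].Value;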

  • Bind members of different classes

    - by 7vies
    In a C++ program I have two classes (structs) like:

        struct A {
            int x;
            double y;
            // other members
        };

        struct B {
            int x;
            double y2;
            // other members
        };

    I'd like to somehow "bind" the corresponding members, e.g. A::x to B::x and A::y to B::y2. By "bind" I mean the ability to obtain a reference to the bound variable; for example, given a member of class A, I could assign it the value of the corresponding member of B. Once I have such a bind, I'd like to build a bind table or something similar that I could iterate over. This would allow, for example, copying the corresponding fields from A a; to B b; like CopyBound(a, b, bind_table);, but possibly also doing other things not limited to a Copy interface. The problem with this bind_table is that I want static typing, and the table would have to contain entries of different types: a table of pointers to members would contain &A::x and &A::y, but those are of different types, so I cannot just put them, say, into an array. Any ideas how this can be conveniently implemented, with as much compile-time type checking as possible?
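    Since the entries have heterogeneous types, one compile-time option (my construction, not from the post, and requiring C++17) is a std::tuple of member-pointer pairs walked with std::apply and a fold expression; every link keeps its own static types, so nothing is erased:

        #include <tuple>
        #include <iostream>

        struct A { int x; double y; };
        struct B { int x; double y2; };

        // The bind table: a tuple of (member-of-A, member-of-B) pointer pairs.
        constexpr auto bind_table = std::make_tuple(
            std::make_pair(&A::x, &B::x),
            std::make_pair(&A::y, &B::y2));

        template <class Table>
        void CopyBound(const A& a, B& b, const Table& table)
        {
            std::apply([&](const auto&... link) {
                ((b.*(link.second) = a.*(link.first)), ...);  // fold over every pair
            }, table);
        }

        int main()
        {
            A a{42, 3.14};
            B b{};
            CopyBound(a, b, bind_table);
            std::cout << b.x << " " << b.y2 << "\n";  // prints: 42 3.14
        }

    Other operations besides copying fall out the same way: any generic lambda applied over the tuple sees each pair with full type information.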

  • PowerShell / .NET: Get a reference to an object returned by a method

    - by Dan Menes
    I am teaching myself PowerShell by writing a simple parser. I use the .NET framework class Collections.Stack, and I want to modify the object at the top of the stack in place. I know I can pop() the object off, modify it, and then push() it back on, but that strikes me as inelegant. First, I tried this:

        $stk = new-object Collections.Stack
        $stk.push( (,'My first value') )
        ( $stk.peek() ) += ,'| My second value'

    which threw an error:

        Assignment failed because [System.Collections.Stack] doesn't contain a settable property 'peek()'.
        At C:\Development\StackOverflow\PowerShell-Stacks\test.ps1:3 char:12
        + ( $stk.peek <<<< () ) += ,'| My second value'
            + CategoryInfo          : InvalidOperation: (peek:String) [], RuntimeException
            + FullyQualifiedErrorId : ParameterizedPropertyAssignmentFailed

    Next I tried this:

        $ary = $stk.peek()
        $ary += ,'| My second value'
        write-host "Array is: $ary"
        write-host "Stack top is: $($stk.peek())"

    which prevented the error but still didn't do the right thing:

        Array is: My first value | My second value
        Stack top is: My first value

    Clearly, what is getting assigned to $ary is a copy of the object at the top of the stack, so when I modify the object in $ary, the object at the top of the stack remains unchanged. Finally, I read up on the [ref] type and tried this:

        $ary_ref = [ref]$stk.peek()
        $ary_ref.value += ,'| My second value'
        write-host "Referenced array is: $($ary_ref.value)"
        write-host "Stack top is still: $($stk.peek())"

    But still no dice:

        Referenced array is: My first value | My second value
        Stack top is still: My first value

    I assume the peek() method returns a reference to the actual object, not a clone. If so, then the reference appears to be replaced by a clone by PowerShell's expression-processing logic. Can somebody tell me if there is a way to do what I want? Or do I have to revert to pop() / modify / push()?
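    peek() really does return a reference; the culprit is +=. On a .NET array (which is fixed-size), += builds a brand-new array and assigns it to the left-hand side, so $ary is rebound while the array inside the stack is untouched, and [ref] merely wraps that rebindable variable. Storing a mutable collection on the stack avoids the pop/modify/push dance entirely; a sketch with ArrayList (my substitution for the bare array):

        $stk = New-Object Collections.Stack

        # Push a mutable list instead of a fixed-size array.
        $top = New-Object Collections.ArrayList
        [void]$top.Add('My first value')
        $stk.Push($top)

        # Peek returns a reference to the same ArrayList, so Add mutates in place.
        [void]$stk.Peek().Add('| My second value')

        Write-Host "Stack top is: $($stk.Peek() -join ' ')"
        # Stack top is: My first value | My second value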

  • Eclipse / Aptana File Sync Solutions

    - by Brad
    Our development team uses Eclipse + Aptana to do their web development work. Currently, most of them map their Eclipse projects directly to the web server. I'd rather they create a local project and use that to sync to the web server directory they are working on. The issue is that there aren't any good solutions for this, which is appalling given the popularity of the two tools. The FileSync plugin for Eclipse is only one-way: if another developer changes a file on the server, a developer isn't even notified and could overwrite the change. The File Transfer option in Aptana 2.0 doesn't support any sort of sync, just manually uploading/downloading files. The Sync option in Aptana 1.5.1 doesn't allow you to merge files when they differ; you can only update one or the other. It does let you view a diff (but only if you right-click and select it), and in that diff you can't make any changes. I did find a way to have files uploaded to their sync repositories in Aptana using Eclipse Monkey, but it doesn't work if a user saves multiple files at once ('Save All' - again, it doesn't work). Additionally, there is no notification if a user opens a local file that has an updated copy on the server. I tried to add one using Eclipse Monkey, but I couldn't find any sort of listener in the Eclipse API to do it, and Eclipse Monkey documentation is few and far between. My only solution at this point is to let them continue mapping directly to the server, or to ask them to do a manual download before they do any work (but again, what if someone uploads a change right after they do that?). Anyone have any ideas?

  • Archive log transfer from Oracle 9i to Oracle 10g

    - by Jamie Love
    Hi all, I have a situation where I need to transfer Oracle 9i archive logs to an Oracle 10g database, from where they are to be mined by a log miner and then used by Oracle Streams capture/apply processes. (Oracle 9 archive logs can be read by the Oracle 10 log miner: I can manually copy the archive logs across, manually register them, and have them mined, captured and then applied.) The difficulty is that the way Oracle does archive log transfer changed quite a bit between 9i and 10g, and setting up the 9i database to transfer to the remote machine like so:

        log_archive_dest_state_2 = enable
        log_archive_dest_2 = "service=OTHERMACHINE arch optional"

    no longer works. I get this in the 9i logs:

        *** 2009-05-22 04:03:44.149
        RFS network connection lost at host 'OTHERMACHINE'
        Error 3113 attaching RFS server to standby instance at host 'OTHERMACHINE'
        Error 3113 attaching to destination LOG_ARCHIVE_DEST_2 standby host 'OTHERMACHINE'
        Heartbeat failed to connect to standby 'OTHERMACHINE'. Error is 3113.
        *** 2009-05-22 04:03:44.150
        kcrrfail: dest:2 err:3113 force:0
        ORA-03113: end-of-file on communication channel

    And in the 10g log I get:

        Fri May 22 04:07:42 2009
        WARNING: inbound connection timed out (ORA-3136)

    My question is: does anyone know how I could configure my 9i or 10g server such that the 10g server will accept the 9i connection, so that I can transfer the 9i archive logs to the 10g server? It would be a bonus if the archive logs were automatically registered on the 10g server. Note that I have not set up a full Data Guard configuration here, and the 10g database is not a standby server. Thanks for any suggestions.

    Edit: Note that I can log on to the 10g server from the 9i server via SQL*Plus, so connectivity is not the problem.

    Edit 2: After a large amount of time searching for a solution, I've finally concluded that no such mechanism works, and that a non-Oracle method of transferring the archive logs from 9i to 10g will need to be used (e.g. rsync).

  • Magento install errors

    - by nXqd
    I'm trying to install the new Magento version 1.4.x, but after the configuration step (and after copying local.example.xml to local.xml) I get this error:

        Error in file: "E:\Soft\Programming\xampp\htdocs\magento\app\code\core\Mage\Catalog\sql\catalog_setup\mysql4-install-1.4.0.0.0.php" - SQLSTATE[23000]: Integrity constraint violation: 1062 Duplicate entry '4-page_layout' for key 'entity_type_id'

        Trace:
        #0 E:\Soft\Programming\xampp\htdocs\magento\app\code\core\Mage\Core\Model\Resource\Setup.php(374): Mage::exception('Mage_Core', 'Error in file: ...')
        #1 E:\Soft\Programming\xampp\htdocs\magento\app\code\core\Mage\Core\Model\Resource\Setup.php(260): Mage_Core_Model_Resource_Setup->_modifyResourceDb('install', '', '1.4.0.0.21')
        #2 E:\Soft\Programming\xampp\htdocs\magento\app\code\core\Mage\Core\Model\Resource\Setup.php(224): Mage_Core_Model_Resource_Setup->_installResourceDb('1.4.0.0.21')
        #3 E:\Soft\Programming\xampp\htdocs\magento\app\code\core\Mage\Core\Model\Resource\Setup.php(153): Mage_Core_Model_Resource_Setup->applyUpdates()
        #4 E:\Soft\Programming\xampp\htdocs\magento\app\code\core\Mage\Core\Model\App.php(363): Mage_Core_Model_Resource_Setup::applyAllUpdates()
        #5 E:\Soft\Programming\xampp\htdocs\magento\app\code\core\Mage\Core\Model\App.php(295): Mage_Core_Model_App->_initModules()
        #6 E:\Soft\Programming\xampp\htdocs\magento\app\Mage.php(596): Mage_Core_Model_App->run(Array)
        #7 E:\Soft\Programming\xampp\htdocs\magento\index.php(78): Mage::run('', 'store')
        #8 {main}

    I really need your help, thanks so much :)

  • Rails deployment strategies with Bundler and JRuby

    - by brad
    I have a JRuby Rails app, and I've just started using Bundler for gem dependency management. I'm interested in hearing people's opinions on deployment strategies. The docs say that bundle package will package your gems locally so you don't have to fetch them on the server (and I believe Warbler does this by default), but I personally think this is not the way to go for us, as our deployed artifact (in our case a WAR file) becomes much larger. My preference would be to mimic our Maven setup, which fetches all dependencies directly on the server AFTER the code has been copied there. Here's what I'm thinking; all comments are appreciated:

        Step 1: Build WAR file, copy to server
        Step 2: Unpack WAR on server, fetch Java dependencies with mvn
        Step 3: Use Bundler to fetch gem dependencies (where should these be placed?)
        Step 4: Restart Tomcat

    Step 3 is the step I'm a bit unclear on. Do I run bundle install with a particular target in mind? Again, my reasoning here is that I'd like to keep the dependencies separate from the code at deploy time. I'd also like to place all gem dependencies in the app itself so they are contained, rather than installing them in the app user's home directory (as, again, I believe is the default for Bundler).
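    For the step-3 question, Bundler 1.x has a mode aimed at exactly this: --deployment installs strictly from the checked-in Gemfile.lock into the application itself (vendor/bundle by default), which keeps the gems contained in the app rather than in the app user's home directory. A sketch of the server-side step (the paths are mine):

        # after unpacking the WAR and fetching the Java deps
        cd /opt/tomcat/webapps/myapp
        jruby -S bundle install --deployment --path vendor/bundle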

  • Understanding how rpmbuild works

    - by ereOn
    Hi, for my work I have to create a document on "How to create an RPM package on Red Hat 5". I'm used to Debian and its derivatives (Ubuntu, and so on) and thus to Debian packages (aka .deb files). It seems that the RPM logic is quite different from what I already know, and I am having some issues understanding it. From what I've read, it seems that one needs to be root to create an RPM package. While I understand why root could be required to install a package, I still don't understand why elevated privileges should be needed just to create one. If I try to create an RPM package as a user, changing the buildroot, it fails at the %install step because I don't have permission to write files into /usr/bin. Fair enough, but... why does it want to copy my files into /usr/bin at this step?! I just want to create the package, not install it! I'm sure I'm missing something here. Is there anyone who could give me at least a basic understanding of how rpmbuild works, and why? Thank you very much!
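    A plausible reading of that failure: %install is not supposed to install onto the live system at all - it stages files into a scratch "build root" that rpmbuild then packages - so a spec whose %install section writes to a bare /usr/bin is ignoring the build root. Combined with a per-user _topdir, the whole build can run unprivileged. A sketch (the macro values are conventional choices, not requirements):

        # One-time setup: build entirely inside the user's home directory.
        echo '%_topdir %(echo $HOME)/rpmbuild' > ~/.rpmmacros
        mkdir -p ~/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}

        # In the spec file, %install must target the build root, e.g.:
        #   %install
        #   rm -rf %{buildroot}
        #   make install DESTDIR=%{buildroot}

        rpmbuild -ba ~/rpmbuild/SPECS/mypackage.spec   # no root needed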

  • InstallUtil Publishing WMI Schema to 64 Bit Directory Instead of 32 Bit Directory

    - by Nick
    This is similar to this question, but it doesn't look like a good solution was ever determined, so I'm opening a new one with clarified details. We wrote a .NET service which, among other things, publishes some of its class hierarchy using WMI. On a 64-bit machine, we run the 32-bit version of InstallUtil to install the service. It installs successfully, but when the service runs, we receive the following error when publishing a WMI class using Instrumentation.Publish():

        DirectoryNotFoundException - (Could not find a part of the path 'C:\Windows\system32\WBEM\Framework\root\MyNamespace\MyService'.)

    However, this directory does exist under C:\Windows\syswow64. If we manually copy that directory structure into system32, then everything works. However, we are looking for an automated solution, because we distribute this packaged as an MSI onto many servers. We tried running the 64-bit version of InstallUtil to see if that would work; however - and this is the really weird part - it gives us an error on install that says:

        Installing WMI Schema: Started
        An exception occurred during the Install phase.
        System.IO.DirectoryNotFoundException: Could not find a part of the path 'C:\Windows\system32\WBEM\Framework\root\MyNamespace\MyService.mof'.

    It looks as if somehow the WMI installer flipped around. Has anyone else experienced this, or know of a workaround?

  • Programmatically copying custom content type and columns from one web to another

    - by BeraCim
    Hi all: I'm experiencing a very stubborn problem when copying a custom content type and its columns from one web to another within the same site. Basically, this is the code that I have:

        foreach (SPField field in existingWeb.Fields)
        {
            if (!destinationWeb.Fields.ContainsField(field.Title))
            {
                destinationWeb.Fields.AddFieldAsXml(field.SchemaXml);
                destinationWeb.Update();
            }
        }

        foreach (SPContentType existingWebCt in destinationWeb.ContentType)
        {
            SPContentType newContentType =
                new SPContentType(existingWebCt.Parent, destinationWeb.ContentTypes, existingWebCt.Name);
            foreach (SPFieldLink fieldLink in existingWebCt.FieldLinks)
            {
                SPField sourceField = existingWebCt.Fields[fieldLink.Id];
                if (destinationWeb.Fields.ContainsField(sourceField.Title))
                {
                    SPFieldLink destinationWebFieldLink = new SPFieldLink(destinationWeb.Fields[sourceField.Title]);
                    newContentType.FieldLinks.Add(destinationWebFieldLink);
                }
            }
        }

    existingWeb and destinationWeb are two webs within the same site. The code runs fine, but the problem is that on the Site Content Type screen (under Site Settings), when I click the custom column link in the custom content type, I get an error saying:

        Invalid field name {UID}

    The UID is the same UID as the custom column in the existing web. I checked my web settings after completion: I can see the custom list (which I created with an item for testing purposes), but the custom column is gone from the view, though the actual data is still there - I just have to check the box to get it to display. But that is less important... more of an FYI. I've also gotten a variety of different exceptions when I copy things wrongly. Google has failed to help me out on this one. Does anyone know what I'm missing to get that link to work again? Thanks.

  • Windows Azure WebRole stuck in a deployment loop

    - by Rob G
    I've been struggling with this one for a couple of days now. My current Windows Azure WebRole is stuck in a loop where the status keeps changing between Initializing, Busy, Stopping and Stopped. It never goes live, and as a result I can never see the website. The WebRole is an "out of the box" MVC 2 application with Copy Local set to true on the MVC DLL; I haven't even tried hooking up storage or a WorkerRole yet, and there is nothing happening inside the Start method that I can see would crash. I've really tried going back to basics to ensure nothing complicates the process: the website launches without a problem on the Dev Fabric, and yes, it looks just like the standard "Home"/"About" MVC app - I just can't get it running in the cloud! The funny thing is, a few days ago this exact package worked in the staging area in the cloud, and I could even see it in the browser - but I could never get it swapped over to production, so I deleted everything and started from scratch, and now I can't even get it running on staging... Does anyone have any ideas on what I could do to diagnose this problem myself? Since logging this problem on the forums 2 days ago, there has been no improvement or feedback. Any help appreciated. Regards, Rob G

  • MonoRail: Testing, Route Extensions, Folder Structures

    - by Kezzer
    I've got a few questions related to the use of MonoRail.

    Testing: Does everyone tend to use NUnit for their testing? I haven't worked enough with testing to know whether it's a good framework to use; I'm just looking to test my applications a lot more than before and wanted to know if there are any general guidelines. Are you supposed to copy the controller over to a test area, rename it with "test" in the name, and re-run it? How do you ensure your test project and main project coincide with one another? Is it just a case of copying everything over again, or are there tools available to do it for you?

    Route extensions: MonoRail tends to use <action>.rails; can you omit the .rails part if you configure your routing correctly? Why does this seem to be the standard?

    Folder structures: I haven't found anywhere that really lays out a standard folder structure. Sure, you have Controllers, Models and Views, but your Models folder should also contain your data-access objects. I've seen something like:

        Models -> DaoClasses -> Entities

    But what about custom structures used to get data out of views? And if you're using NHibernate, where's a good place to put the mappings? I know it's entirely dependent on the developer, but I haven't really seen any standard approach. Cheers
