Search Results

Search found 1825 results on 73 pages for '64bit'.

  • Timeout reading verity collection - CF8

    - by Gary
    For a long time now I've been having a problem with the Verity search service bundled with ColdFusion 8. Timeout errors occur when performing any operation on a collection. It's intermittent, and usually occurs after a few operations have been performed successfully. For instance: if I'm adding records to a collection, the first, say, 15 records will go through with no problems, but all subsequent records will time out until the service is rebooted. I'm on a shared server, Windows 2008, 64bit as far as I know. The error I receive is:

        "An error occurred while performing an operation in the Search Engine library. Error reading collection information.: com.verity.api.administration.ConfigurationException: java.io.IOException: Read timed out"

    Having spoken to my hosting company, and after doing some research, it's been suggested that the number of collections on a server may cause this issue. I've reduced the number of collections I use, and there are currently 39 collections on the server. As I'm on a shared server I have no control over how many collections other customers use, but I've read that the limit is 128 collections, so I don't see why 39 should make the service unusable. The collections aren't big; there are maybe 5,000 records between all of them. Any ideas?

  • How to connect from ruby to MS Sql Server

    - by apetrov
    Hi Crowd! I'm trying to connect to a SQL Server 2005 database from a *NIX machine. I have the following configuration:

        Linux 64bit
        ruby -v: ruby 1.8.6 (2007-09-24 patchlevel 111) [x86_64-linux]
        important gems: dbd-odbc (0.2.4), dbi (0.4.1), ActiveRecord SQL Server adapter (as a plugin), ruby-odbc 0.9996 (installed without any options)
        unixODBC is installed
        FreeTDS is installed

    cat /etc/odbcinst.ini:

        [FreeTDS]
        Description = TDS driver (Sybase/MS SQL)
        Driver = /usr/lib/libtdsodbc.so
        Setup = /usr/lib/odbc/libtdsS.so
        CPTimeout =
        CPReuse =
        FileUsage = 1

    DSN (either form):

        DRIVER=FreeTDS;TDS_Version=8.0;SERVER=XXXX;DATABASE=XXX;Port=1433;uid=XXX;pwd=XXXX;
        DRIVER=/usr/lib/libtdsodbc.so;TDS_Version=8.0;SERVER=XXXX;DATABASE=XXX;Port=1433;uid=XXX;pwd=XXXX;

    I receive the following error:

        >> ActiveRecord::Base.sqlserver_connection({"mode"=>"ODBC", "adapter"=>"sqlserver", "dsn"=>my_dns})
        DBI::DatabaseError: IM002 (0) [unixODBC][Driver Manager]Data source name not found, and no default driver specified
            from /usr/lib/ruby/1.8/DBD/ODBC/ODBC.rb:95:in `connect'
            from /usr/lib/ruby/1.8/dbi.rb:424:in `connect'
            from /usr/lib/ruby/1.8/dbi.rb:215:in `connect'
            from /opt/ublip/rails/current/vendor/plugins/activerecord-sqlserver-adapter/lib/active_record/connection_adapters/sqlserver_adapter.rb:47:in `sqlserver_connection'

    It looks like ODBC is unable to find the appropriate driver, but I have no idea why. I had a problem with /usr/lib/libtdsodbc.so being empty in the default Debian freetds dev package, but I solved that by removing the broken package and installing from source. I will appreciate any thoughts! Thanks & Regards. Note: I'm able to connect using the same steps on Mac OS X 10.5.
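
    Note: IM002 from unixODBC means the driver manager matched neither a DSN nor the DRIVER= keyword against its config. One thing worth checking is whether a named DSN in /etc/odbc.ini behaves better; the sketch below is an assumption (placeholder host and database values, and the Driver line must match the [FreeTDS] section name in odbcinst.ini exactly), not a verified fix:

        [mssql2005]
        Description = SQL Server 2005 via FreeTDS
        Driver      = FreeTDS
        Server      = XXXX
        Port        = 1433
        Database    = XXX
        TDS_Version = 8.0

    With that in place the connection string can shrink to "DSN=mssql2005;UID=XXX;PWD=XXXX;".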

  • py2app, pyObjc & macports compilation errors

    - by Neewok
    Hi, I'm currently writing a small Python app that embeds CherryPy and Django, packaged with py2app. It worked well until I tried to include PyObjC in the project, since my app needs a small GUI (a small icon in the top menu bar plus a drop-down menu). I can run my Python script without any problem (I'm using Python 2.6 with MacPorts), but I can't launch the application bundle generated by py2app. A dialog box appears with the following message:

        ImportError: dlopen(/Users/denis/tlon/standalone/mac/dist/django_cherry.app/Contents/Resources/lib/python2.6/lib-dynload/CoreFoundation/_inlines.so, 2): no suitable image found. Did find:
        /Users/denis/tlon/standalone/mac/dist/django_cherry.app/Contents/Resources/lib/python2.6/lib-dynload/CoreFoundation/_inlines.so: mach-o, but wrong architecture

    I did a quick:

        sudo port -u install py26-pyobjc +universal

    but for some reason MacPorts tries to build openssl, whose compilation fails each time. The problem seems to be related to zlib; this is what appears in the logs:

        :info:build ld: warning: in /opt/local/lib/libz.dylib, file is not of required architecture

    ...and here is the output of file /opt/local/lib/libz.dylib:

        /opt/local/lib/libz.dylib: Mach-O universal binary with 2 architectures
        /opt/local/lib/libz.dylib (for architecture x86_64): Mach-O 64-bit dynamically linked shared library x86_64
        /opt/local/lib/libz.dylib (for architecture i386): Mach-O dynamically linked shared library i386

    Nothing looks wrong to me, so I'm a bit stuck here. I don't even understand what openssl has to do with PyObjC, but it looks like I can't get anywhere if I don't manage to compile it. MacPorts really sucks sometimes :/

    EDIT: I managed to fix the MacPorts issue, but not the py2app one. If I understand it correctly, py2app tries to create a 32-bit app, while the CoreFoundation binaries on Snow Leopard are 64-bit. Damn. Either I build this on Leopard, or I find a way to create a 64bit app with py2app, which would then be Snow Leopard only.

  • An annoying printing issue with Crystal Reports 2008

    - by Xience
    A little background: I have an extremely annoying printing issue with Crystal Reports. My environment is Crystal Reports 2008 SP2 on Windows 7 (64bit), Visual Studio 2008 and .NET Framework 3.5, with all the latest updates for everything. The report is designed to render a small shelf label (40mm wide and 20mm high). When I set the page size in Crystal to those values, set the orientation to portrait and take a preview, everything is displayed as I expect, and issuing a print command prints absolutely correctly.

    The problem: The problem comes when I print this report from my program (in VB.NET), dynamically setting data on some text fields. The result is that Crystal somehow changes the print orientation - NOT the paper orientation as in portrait or landscape. Instead of printing from top left towards bottom right, it rotates the whole output 90 degrees to the left and reduces everything so small that it is barely visible, although it prints everything out. I have tested it on Intermec PF8t and Zebra GK420d label printers and a whole bunch of laser printers, but with the above page settings the output is always the same. Another strange thing I noticed while experimenting with page sizes: if I switch to landscape mode, the printout is correct in its font sizes and positions, but the text gets truncated because it overflows the page.

    Can anyone help me with this? Does Crystal have anything like its own print drivers? I have tried to ensure, to the best of my abilities, that it is not a printer driver problem.

  • Debugging Maya plug-ins: can't get output or debugger to engage

    - by brainjam
    I'm not a total beginner at Maya, but this is my first time trying to write a plug-in for it. I've downloaded the 30-day eval version of Maya 2011 (32bit version on 64bit Windows 7) and have tried building a couple of plug-ins with VC 2008 Express. The first one is helloWorldCmd from the sample directory ..\devkit\plug-ins, and it basically doesn't work until you convert the line

        cout << "Hello World" << endl;

    to

        cerr << "Hello World" << endl;

    but I can live with that. The second one is the SelectRingContext2 example from David Gould's book. The plug-in works as advertised, but I cannot get any debugging output from it. I've tried putting cout, cerr, printf, and MGlobal::displayInfo into the doIt() method, and can't get a peep. I also haven't figured out how to run the plug-in in a debugger. I'm afraid I'm missing something easy but slightly obscure, like a flag somewhere. Anybody out there have any hints?

    Edit: Turns out the action is happening in the redoIt() method, so I can get MGlobal::displayInfo and cerr producing output. I still don't know why cout and printf don't work, and I'm not sure how to run in the debugger. When I run maya -d (which some of the online advice says I should do) it just shows the output window but never loads the rest.

  • [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified - works

    - by Matt
    Hello, I am developing a Java app (with the ODBC bridge; forgive me, the only Paradox driver I have been able to obtain is the Microsoft ODBC driver) which works fine in Eclipse (and NetBeans), connecting to and obtaining data from an ancient Paradox 5.x database. As long as it is run from inside my IDE, it compiles and runs flawlessly. When I export it to a runnable jar, suddenly:

        [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified

    The jar is being run on the same box as my development IDE, so I am confused about the cause. It is run via console from a user account, as with the IDE. My connection string is:

        jdbc:odbc:Driver={Microsoft Paradox Driver (*.db )};DriverID=538;Fil=Paradox 5.X;DefaultDir=C:\paradox\database\location\

    obtained from connectionstrings.com and, as mentioned before, working fine when run from the IDE. The above seems to 'magically' create its own connection, avoiding the setup of a DSN; I am unsure quite how, but it works. The only other thing I can think might be pertinent is that my PC runs a 64bit OS (Windows Server 2008). Please help; any suggestions or comments will be greatly appreciated. Thanks, Matt
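
    A quick way to test this outside any IDE is a bare JDBC probe; if it fails from the console with the same IM002, the likely cause is that the console launches a 64-bit JVM, and the JDBC-ODBC bridge in a 64-bit process cannot see the 32-bit Paradox driver. This is a sketch (class name and diagnostics invented; the connection string is the one from the post):

        import java.sql.Connection;
        import java.sql.DriverManager;

        public class ParadoxProbe {
            public static void main(String[] args) throws Exception {
                // If this prints "64", the jar runs in a 64-bit JVM, which cannot
                // load the 32-bit Microsoft Paradox ODBC driver.
                System.out.println("JVM data model: " + System.getProperty("sun.arch.data.model"));

                Class.forName("sun.jdbc.odbc.JdbcOdbcDriver"); // the ODBC bridge driver
                String url = "jdbc:odbc:Driver={Microsoft Paradox Driver (*.db )};"
                           + "DriverID=538;Fil=Paradox 5.X;DefaultDir=C:\\paradox\\database\\location\\";
                Connection c = DriverManager.getConnection(url);
                System.out.println("Connected to: " + c.getMetaData().getDatabaseProductName());
                c.close();
            }
        }

    If the 64-bit JVM turns out to be the culprit, launching the jar with a 32-bit java.exe should behave exactly like the IDE did.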

  • Can't place a breakpoint in asp.net master page file

    - by Tony_Henrich
    I have an MVC web application. I get an "Object reference not set to an instance of an object" error at line 16 below. It's a master page file. When I try to place a breakpoint on that line, or anywhere in the file, I get a "this is not a valid location for a breakpoint" error. I have clicked on every line and I can't place a single breakpoint, even though I do have lines that contain only code. How do I place a breakpoint in this file?

    Note: I can place breakpoints in code files. In some other aspx files, I can place breakpoints on some code lines but not others. Does the inline code have to be in a special format for a breakpoint to be allowed? Using VS 2010 on Windows 7 64bit. Code:

        Line 14: <div id="<%= Model.PageWidth %>" class="<%= Model.PageTemplate %>">
        Line 15:     <div id="hd">
        Line 16:         <h1><a href="/"><%= Model.Workspace.Title %></a></h1>
        Line 17:         <h2><%= Model.Workspace.Subtitle %></h2>
        Line 18:     </div>

  • Actual long double precision does not agree with std::numeric_limits

    - by dmb
    Working on Mac OS X 10.6.2, Intel, with i686-apple-darwin10-g++-4.2.1, and compiling with the -arch x86_64 flag, I just noticed that while

        std::numeric_limits<long double>::max_exponent10 = 4932

    as expected, when a long double is actually set to a value with an exponent greater than 308, it becomes inf; i.e. in reality it only has 64bit precision instead of 80bit. Also, sizeof() shows long doubles to be 16 bytes, which they should be. Finally, using gives the same results as . Does anyone know where the discrepancy might be?

        long double x = 1e308, y = 1e309;
        cout << std::numeric_limits<long double>::max_exponent10 << endl;
        cout << x << '\t' << y << endl;
        cout << sizeof(x) << endl;

    gives

        4932
        1e+308  inf
        16

  • Alternate cause of BadImageFormatException in .NET Assembly?

    - by Phillip Knauss
    I'm working on a .NET 3.5 console application in C# which uses an unmanaged VC++ DLL. It ran without a problem when I worked on it a few weeks ago, but I'm coming back to it today and am now getting a BadImageFormatException ("An attempt was made to load a program with an incorrect format. (Exception from HRESULT: 0x8007000B)"). My development workstation is running 64bit Windows 7, and I do a fair amount of work with unmanaged code, so I immediately checked that the .NET assembly and the VC++ library both had x86 targets. They did. Just to be sure, I cleaned and rebuilt the VC++ library and the .NET assembly, to no avail. Neither piece is doing anything particularly unusual: the VC++ library loads a binary data file and does some mathematical processing on its contents, and the .NET assembly has the DllImports for the library and some code to wire it up. This all worked a few weeks ago. So now I'm left wondering if there's some other, less common, cause of BadImageFormatException than an x86/x64 conflict that I might be running into. Thanks.

  • JVM terminates when launching eclipse with J2SE 6.0 on mac os x (need J2SE 6.0 for Oracle enterprise

    - by rooban bajwa
    I know my issue has partly been addressed at http://stackoverflow.com/questions/245803/jvm-terminates-when-launching-eclipse-mat-on-mac-os-with-j2se-60, but that was over a year ago, and the link provided there, http://landonf.bikemonkey.org/static/soylatte/, no longer seems to be alive (the download section no longer provides the 32-bit port of J2SE 6.0 for Mac OS X 10.5).

    I am trying to run Eclipse 3.5 on Mac OS X 10.5. It works fine with J2SE 5.0, but when I installed the Oracle Enterprise Pack for Eclipse, it required Eclipse to be started with a J2SE 6.0 JVM, otherwise it gets disabled. Here's the exact message I get:

        "You are running Eclipse on Java VM version: 1.5.0_22. Oracle Enterprise Pack for Eclipse requires Java version 6 or higher. Click next to configure a compatible Java VM."

    It asks me to point to a J2SE 6.0 JVM. When I do (i.e. point it to /System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Home), it asks to restart Eclipse, and when I do, Eclipse just bombs with a "JVM terminated" error. So I need to start Eclipse with a J2SE 6.0 JVM, but Eclipse needs Carbon, which is only available in 32-bit, and hence I can't start it with Apple's J2SE 6.0 JVM, which is 64bit only. And the site providing a 32-bit port of the J2SE 6.0 JVM no longer seems to be active. Can someone help me with this issue? Thanks in advance.
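
    One avenue worth trying, offered as an assumption rather than a verified fix: Eclipse 3.5 also ships a 64-bit Cocoa build for Mac OS X, which does not depend on Carbon and therefore can run on Apple's 64-bit Java 6. With that build, the VM can be pinned in eclipse.ini (the -vm flag goes before -vmargs, with the path on its own line):

        -vm
        /System/Library/Frameworks/JavaVM.framework/Versions/1.6.0/Home/bin/java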

  • Possible Data Execution Prevention (DEP) problem in Windows 7

    - by Joel in Gö
    I have a serious problem with my .NET program. It calls a native dll, and then crashes instantly because it can't find a native method. This is behaviour we have seen before, whereby the C# compiler, in its infinite wisdom, sets the flag that says the program is DEP compatible, even if it calls a native dll which patently is not. We have the standard workaround for this, where the flag is set to Not DEP Compatible in a post-build step, and this works fine. Everywhere except on my machine. I have Windows 7 32bit, and the program works fine on the Win 7 64bit machines that we have, as well as on Vista and XP; we have not yet been able to check on another Win7 32bit. However, on my machine DataExecutionPrevention_SupportPolicy is 0, i.e. we have successfully switched DEP off. Does anyone know whether there is some situation in which it can still act? Or any other mechanism which could have the same effect? The dll in question also works fine when called from a native program. We are running out of ideas... any help would be much appreciated!
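
    Note: the post-build step described above is typically done with editbin from the VC++ toolchain, and the policy value quoted can be read back with wmic. Both lines below are a sketch for reference (the executable name is a placeholder):

        editbin.exe /NXCOMPAT:NO MyProgram.exe
        wmic OS Get DataExecutionPrevention_SupportPolicy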

  • Dropping all user tables/sequences in Oracle

    - by Ambience
    As part of our build process and evolving database, I'm trying to create a script which will remove all of the tables and sequences for a user. I don't want to recreate the user, as this would require more permissions than allowed. My script creates a procedure to drop the tables/sequences, executes the procedure, and then drops the procedure. I'm executing the file from sqlplus.

    drop.sql:

        create or replace procedure drop_all_cdi_tables is
          cur integer;
        begin
          cur := dbms_sql.OPEN_CURSOR();
          for t in (select table_name from user_tables) loop
            execute immediate 'drop table ' || t.table_name || ' cascade constraints';
          end loop;
          dbms_sql.close_cursor(cur);

          cur := dbms_sql.OPEN_CURSOR();
          for t in (select sequence_name from user_sequences) loop
            execute immediate 'drop sequence ' || t.sequence_name;
          end loop;
          dbms_sql.close_cursor(cur);
        end;
        /
        execute drop_all_cdi_tables;
        /
        drop procedure drop_all_cdi_tables;
        /

    Unfortunately, dropping the procedure causes a problem. There seems to be a race condition, and the procedure is dropped before it executes. E.g.:

        SQL*Plus: Release 11.1.0.7.0 - Production on Tue Mar 30 18:45:42 2010
        Copyright (c) 1982, 2008, Oracle. All rights reserved.

        Connected to:
        Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
        With the Partitioning, OLAP, Data Mining and Real Application Testing options

        Procedure created.
        PL/SQL procedure successfully completed.
        Procedure created.
        Procedure dropped.
        drop procedure drop_all_user_tables
        *
        ERROR at line 1:
        ORA-04043: object DROP_ALL_USER_TABLES does not exist

        Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit
        With the Partitioning, OLAP, Data Mining and Real Application Testing options

    Any ideas on how to get this working?
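
    Note: this may not be a race at all. In SQL*Plus, a / on a line by itself re-executes whatever is in the statement buffer, so the / after the execute line replays the CREATE OR REPLACE PROCEDURE still sitting in the buffer (hence the second "Procedure created." in the transcript), and the / after the drop replays the drop itself, producing ORA-04043. A sketch of the script's tail with the stray slashes removed; only the PL/SQL block needs one:

        -- ... the CREATE OR REPLACE PROCEDURE block from above, unchanged ...
        end;
        /
        execute drop_all_cdi_tables;
        drop procedure drop_all_cdi_tables;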

  • Problem with SQLite related nUnit-tests after upgrade to VS2010 and Re#5

    - by stiank81
    After converting to Visual Studio 2010 with ReSharper 5, some of my unit tests started failing. More specifically, this applies to all unit tests that use NHibernate with SQLite; the problem seems to be related to SQLite somehow. The unit tests that do not involve NHibernate and SQLite are still running fine. The exception is as follows:

        NHibernate.HibernateException : Could not create the driver from NHibernate.Driver.SQLite20Driver, NHibernate, Version=2.1.2.4000, Culture=neutral, PublicKeyToken=aa95f207798dfdb4.
        ----> System.Reflection.TargetInvocationException : Exception has been thrown by the target of an invocation.
        ----> NHibernate.HibernateException : The IDbCommand and IDbConnection implementation in the assembly System.Data.SQLite could not be found. Ensure that the assembly System.Data.SQLite is located in the application directory or in the Global Assembly Cache. If the assembly is in the GAC, use <qualifyAssembly/> element in the application configuration file to specify the full name of the assembly.
        TearDown : System.NullReferenceException : Object reference not set to an instance of an object.

    The NullReferenceException on TearDown comes from cleaning up NHibernate objects that were never successfully created. I run my unit tests through ReSharper, but I get the same exception when running them directly through the NUnit.exe application. However, running them through the x86 variant (NUnit-x86.exe), all tests run fine. Can it be related to some mixing of 64bit and 32bit dlls? It still runs fine under VS2008 + ReSharper 4.5. Note that the target framework of my projects is still .NET 3.5. Has anyone seen this problem before?

  • Bootstrapper (setup.exe) says ".NET 3.5 not found" but launching .msi directly installs application

    - by Marek
    Our installer generates a bootstrapper (setup.exe) and an MSI file - a pretty common scenario. One of the production machines reports a strange problem during install: if the user launches the bootstrapper (setup.exe), it reports that .NET 3.5 is not installed. This happens with an account in the administrators group, and the behavior is the same whether or not "run as administrator" is used. However, the application installs fine when application.msi or OurInstallLauncher.exe (see below for explanation) is started directly, again regardless of "run as administrator". We have checked that .NET is installed on the machine (both the 64bit and 32bit "versions": under both C:\Windows\Microsoft.NET\Framework64 and C:\Windows\Microsoft.NET\Framework there is a folder named v3.5).

    This happens on a 64 bit Windows 7. I cannot reproduce it on my development 64 bit Windows 7. On Windows XP and Vista it has worked without any problem for a long time so far. The part of our build script that declares the GenerateBootstrapper task (nothing special):

        <ItemGroup>
          <BootstrapperFile Include="Microsoft.Windows.Installer.3.1">
            <ProductName>Microsoft Windows Installer 3.1</ProductName>
          </BootstrapperFile>
          <BootstrapperFile Include="Microsoft.Net.Framework.3.5">
            <ProductName>Microsoft .NET Framework 3.5</ProductName>
          </BootstrapperFile>
        </ItemGroup>

        <GenerateBootstrapper ApplicationFile=".\Files\OurInstallLauncher.exe"
                              ApplicationName="App name"
                              Culture="en"
                              ComponentsLocation="HomeSite"
                              CopyComponents="True"
                              Validate="True"
                              BootstrapperItems="@(BootstrapperFile)"
                              OutputPath="$(OutSubDir)"
                              Path="$(SdkBootstrapperPath)" />

    Note: OurInstallLauncher.exe is a language selector that applies a transform to the msi based on user selection. This is not relevant to the question at all, because the installer never gets as far as launching this exe; it displays that .NET 3.5 is missing right after starting setup.exe. Has anyone seen this behavior before?

  • Exception: "Given final block not properly padded" in Linux, but it works in Windows

    - by user1685364
    My application works on Windows, but fails on Linux with a "Given final block not properly padded" exception.

    Configuration:

        JDK version: 1.6
        Windows: version 7
        Linux: CentOS 5.8 64bit

    My code is below:

        import java.io.IOException;
        import java.io.UnsupportedEncodingException;
        import java.security.InvalidKeyException;
        import java.security.Key;
        import java.security.NoSuchAlgorithmException;
        import java.security.SecureRandom;
        import javax.crypto.BadPaddingException;
        import javax.crypto.Cipher;
        import javax.crypto.IllegalBlockSizeException;
        import javax.crypto.KeyGenerator;
        import javax.crypto.NoSuchPaddingException;
        import sun.misc.BASE64Decoder;
        import sun.misc.BASE64Encoder;

        public class SecurityKey {
            private static Key key = null;
            private static String encode = "UTF-8";
            private static String cipherKey = "DES/ECB/PKCS5Padding";

            static {
                try {
                    KeyGenerator generator = KeyGenerator.getInstance("DES");
                    String seedStr = "test";
                    generator.init(new SecureRandom(seedStr.getBytes()));
                    key = generator.generateKey();
                } catch (Exception e) {
                }
            }

            // SecurityKey.decodeKey("password")
            public static String decodeKey(String str) throws Exception {
                if (str == null)
                    return str;

                Cipher cipher = Cipher.getInstance(cipherKey);
                cipher.init(Cipher.DECRYPT_MODE, key);

                BASE64Decoder decoder = new BASE64Decoder();
                byte[] raw = decoder.decodeBuffer(str);
                byte[] stringBytes = cipher.doFinal(raw); // Exception!!!!
                return new String(stringBytes, encode);
            }
        }

    At the line cipher.doFinal(raw), the following exception is thrown:

        javax.crypto.BadPaddingException: Given final block not properly padded

    How can I fix this issue?
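
    A likely explanation, given how the JDK's PRNGs differ across platforms: seeding SecureRandom is not a portable way to derive key material. On Windows, SHA1PRNG uses the supplied seed verbatim, while on Linux the default implementation typically mixes the seed with entropy from /dev/urandom, so the two machines generate different DES keys and decryption fails with exactly this padding error. Below is a sketch of a deterministic alternative; hashing the pass phrase with MD5 is an illustration, not something from the original post, and data already encrypted under the old scheme would need to be re-encrypted:

        import java.security.MessageDigest;
        import javax.crypto.SecretKey;
        import javax.crypto.SecretKeyFactory;
        import javax.crypto.spec.DESKeySpec;

        public class DeterministicDesKey {
            static SecretKey deriveKey(String passPhrase) throws Exception {
                // Hash the pass phrase down to key bytes instead of seeding a PRNG;
                // MessageDigest output is identical on every JVM and OS.
                // DESKeySpec uses the first 8 of MD5's 16 output bytes.
                byte[] digest = MessageDigest.getInstance("MD5")
                                             .digest(passPhrase.getBytes("UTF-8"));
                return SecretKeyFactory.getInstance("DES")
                                       .generateSecret(new DESKeySpec(digest));
            }
        }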

  • Deployment Setup (.Net) - Search target machine -> Registry search (64 bit)

    - by Joonas Kirsebom
    I have a Windows Installer project which installs some software (WinForms app, service, MCE add-in). During the installation I need to search the machine for a registry key. This is done with "Launch Condition" - "Add Registry Search" in the deployment project. I have filled out all the properties correctly and checked against the registry that the value can actually be found. The problem is that the Registry Search looks in the x86 part of the registry (HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\...), even though my system is x64 and the deployment setup is also set to x64.

    Does anyone know how to force the Registry Search to search the x64 registry, or know of a workaround? The weird thing is that the Registry setting in the same deployment setup writes to the right (x64) registry. My guess is that the Registry Search component only targets the x86 architecture and therefore can't read the x64 registry. I found this article from Microsoft, so it seems they know about the problem: https://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=110105&wa=wsignin1.0#details

    My system is: Windows 7 64bit, Visual Studio 2008.

  • SendInput scan code on Windows 7 x64

    - by Stanomatic
    I am working with a WPF application that sends keys to a game. I opened Spy++ to observe 's' as a key press on the keyboard, then pressed my button in the application, and noticed a different scan code in the Spy++ messages. Could this be something to do with Windows 7 64bit? Partial listing:

        var down = new INPUT();
        down.Type = (UInt32)InputType.KEYBOARD;
        down.Data.Keyboard = new KEYBDINPUT();
        down.Data.Keyboard.Vk = (UInt16)keyCode;
        down.Data.Keyboard.Scan = 0;
        down.Data.Keyboard.Flags = 0;
        down.Data.Keyboard.Time = 0;
        down.Data.Keyboard.ExtraInfo = IntPtr.Zero;
        //down.Data.Keyboard.ExtraInfo = GetMessageExtraInfo();

        var up = new INPUT();
        up.Type = (UInt32)InputType.KEYBOARD;
        up.Data.Keyboard = new KEYBDINPUT();
        up.Data.Keyboard.Vk = (UInt16)keyCode;
        up.Data.Keyboard.Scan = 0;
        up.Data.Keyboard.Flags = (UInt32)KeyboardFlag.KEYUP;
        up.Data.Keyboard.Time = 0;
        up.Data.Keyboard.ExtraInfo = IntPtr.Zero;
        //up.Data.Keyboard.ExtraInfo = GetMessageExtraInfo();

        INPUT[] inputList = new INPUT[2];
        inputList[0] = down;
        inputList[1] = up;
        var numberOfSuccessfulSimulatedInputs = SendInput(2, inputList, Marshal.SizeOf(typeof(INPUT)));

    The image shows that when I use my code to send a key, I receive ScanCode:00 fExtended in the Spy++ message output, but when I actually press the same key I receive ScanCode:1F fExtended. Everything else is identical.

  • Git : Failed at pushing to remote server, ' REPOSITORY_PATH ' is not a git command

    - by Judarkness
    I'm using Git with TortoiseGit on Windows XP, and I have a remote bare repository on Windows Vista 64bit. When I try to push my local files to the remote bare repository, I get the following error message:

        git.exe push "origin" master:master
        git: 'C:/Git_Repository/.git' is not a git command. See 'git --help'.
        fatal: The remote end hung up unexpectedly

    The URL is: username@serverip:C:/Git_Repository/.git

    The same URL works just fine for clone/fetch/pull, and access to this bare repository from a local directory on the remote machine is no problem either, so I believe there is something wrong with my path. I can push/pull to GitHub correctly, but there I am using the URL provided by GitHub. Does anyone know what's wrong with my configuration?

    Here is my remote .git/config:

        [core]
            repositoryformatversion = 0
            filemode = false
            bare = true
            logallrefupdates = true
            ignorecase = true
            hideDotFiles = dotGitOnly

    Here is my local .git/config:

        [core]
            repositoryformatversion = 0
            filemode = false
            bare = false
            logallrefupdates = true
            symlinks = false
            ignorecase = true
            hideDotFiles = dotGitOnly
        [remote "origin"]
            fetch = +refs/heads
            url = username@serverip:C:/Git_Repository/.git
        [branch "master"]
            remote = origin
            merge = refs/heads/master

    Thanks.
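
    Note: the Windows drive letter in an scp-style URL (user@host:C:/path) gives Git two colons to parse, which fits the remote end trying to run 'C:/Git_Repository/.git' as a command. A commonly suggested workaround, offered here as an untested assumption for this setup, is an explicit ssh:// URL so the path is unambiguous:

        git remote set-url origin ssh://username@serverip/C:/Git_Repository/.git
        git push origin master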

  • Cross-platform and language (de)serialization

    - by fwgx
    I'm looking for a way to serialize a bunch of C++ structs in the most convenient way, so that the serialization is portable across C++ and Java (at a minimum) and across 32bit/64bit, big/little endian platforms. The structures to be serialized just contain data, i.e. they're pure data objects with no state or behavior. The idea is that we serialize the structs into an octet blob that we can store in a database "generically" and read out later. This avoids changing the database whenever a struct changes, and avoids mapping each data member to a field; i.e. we only want one table that holds everything "generically" as a binary blob. This should mean less work for developers and fewer changes when structures change. I've looked at Boost.Serialization, but I don't think there's a way to make it compatible with Java, and likewise for implementing Serializable in Java. If there is a way to do it starting from an IDL file, that would be best, as we already have IDL files that describe the structures. Cheers in advance!
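
    Since IDL files already exist, an IDL-driven serializer such as Protocol Buffers or Thrift is the conventional answer here; but even hand-rolled, the trick is simply to fix field widths and byte order on both sides. A sketch of the Java half (the two-field record is invented for illustration), using DataOutputStream, which always writes big-endian:

        import java.io.ByteArrayOutputStream;
        import java.io.DataOutputStream;
        import java.io.IOException;

        // Hypothetical pure-data struct: { int32 id; double weight; }
        // Serialized as fixed-width big-endian fields so the C++ side only has
        // to agree on the layout, not on the platform.
        public class RecordCodec {
            public static byte[] encode(int id, double weight) throws IOException {
                ByteArrayOutputStream buf = new ByteArrayOutputStream();
                DataOutputStream out = new DataOutputStream(buf);
                out.writeInt(id);        // 4 bytes, big-endian on every platform
                out.writeDouble(weight); // 8 bytes, IEEE 754 bit pattern
                out.flush();
                return buf.toByteArray(); // the octet blob to store in the database
            }
        }

    A C++ reader then consumes the blob with explicit byte swapping (ntohl and friends, or byte-by-byte reads), independent of word size and endianness.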

  • Qt/C++, Problems with large QImage

    - by David Günzel
    I'm pretty new to C++/Qt and I'm trying to create an application with Visual C++ and Qt (4.8.3). The application displays images using a QGraphicsView, and I need to change the images at pixel level. The basic code is (simplified):

        QImage* img = new QImage(img_width, img_height, QImage::Format_RGB32);
        while (do_some_stuff) {
            img->setPixel(x, y, color);
        }
        QGraphicsPixmapItem* pm = new QGraphicsPixmapItem(QPixmap::fromImage(*img));
        QGraphicsScene* sc = new QGraphicsScene;
        sc->setSceneRect(0, 0, img->width(), img->height());
        sc->addItem(pm);
        ui.graphicsView->setScene(sc);

    This works well for images up to around 12000x6000 pixels. The weird thing happens beyond this size. When I set img_width = 16000 and img_height = 8000, for example, the line img = new QImage(...) returns a null image. The image data should be around 512,000,000 bytes, which shouldn't be too large, even on a 32 bit system. Also, my machine (Win 7 64bit, 8 GB RAM) should be capable of holding the data. I've also tried this version:

        uchar* imgbuf = (uchar*) malloc(img_width * img_height * 4);
        QImage* img = new QImage(imgbuf, img_width, img_height, QImage::Format_RGB32);

    At first this works: the img pointer is valid, and calling img->width(), for example, returns the correct image width (instead of 0, as it would if the image were null). But as soon as I call img->setPixel(), the image becomes null and img->width() returns 0. So what am I doing wrong? Or is there a better way of modifying large images at pixel level? Regards, David

  • Compiler Errors...it ran yesterday!?

    - by howdytest
    This is a pre-existing Java project being run in Eclipse 3.5.2 32 bit.

    Day 1: Install the Java SE 6 Update 20 JDK. Experience a crash in Eclipse. Install Java 5. Same problem (uninstall Java 5). Re-install Java 6. Install Eclipse 3.3.1. Install Eclipse 3.5.2 32-bit. No problems. Run Eclipse 3.5.2 64-bit. No problems. Set up the project, configure, and run. No problems.

    Day 2: Load Eclipse to start a new project. The previous project now has 940 errors, all of type "Java Problem". The project ran 100% without a problem on Day 1, and the only thing that happened between Day 1 and Day 2 was restarting my computer. I just tried to recreate the project, step by step, and am still getting the same errors. I know it's not the code: it was working, and besides, it's an open-source project, so such a problem would be documented.

    I'm thinking something is wrong with my Java install, or perhaps it's a 32-bit/64-bit problem. I'm running Win7 64bit. So before formatting my Windows partition, I thought I'd throw the problem your way to see if anyone knows what's going on. Thanks.

  • (Not So) Silly Objective-C inheritance problem when using property - GCC Bug?

    - by Ben Packard
    Update 2 - Many people are insisting I need to declare an ivar for the property. Some are saying not so, as I am using the modern runtime (64 bit). I can confirm that I have been successfully using @property without ivars for months now. Therefore, I think the 'correct' answer is an explanation of why, on 64bit, I suddenly have to explicitly declare the ivar when (and only when) I'm going to access it from a child class. The only one I've seen so far is a possible GCC bug (thanks Yuji). Not so simple after all...

    Update - I messed up one line of the original copy and paste - corrected. The @property call was missing (nonatomic, retain), but that is a red herring - STILL NEED AN ANSWER!

    I've been scratching my head over this for a couple of hours - I haven't used inheritance much. Here I have set up a simple TestB class that inherits from TestA, where an ivar is declared. But I get the compilation error that the variable is undeclared. This only happens when I add the property and synthesize declarations - it works fine without them.

    TestA header:

        #import <Cocoa/Cocoa.h>

        @interface TestA : NSObject {
            NSString *testString;
        }
        @end

    TestA implementation is empty:

        #import "TestA.h"

        @implementation TestA
        @end

    TestB header:

        #import <Cocoa/Cocoa.h>
        #import "TestA.h"

        @interface TestB : TestA {
        }
        @property (nonatomic, retain) NSString *testProp;
        @end

    TestB implementation (error: 'testString' is undeclared):

        #import "TestB.h"

        @implementation TestB
        @synthesize testProp;

        - (void)testing {
            NSLog(@"test ivar is %@", testString);
        }
        @end

  • Excel 2010 64 bit can't create .net object

    - by aboes81
    I have a simple class library that I use in Excel. Here is a simplification of my class:

        using System;
        using System.Runtime.InteropServices;

        namespace SimpleLibrary
        {
            [ComVisible(true)]
            public interface ISixGenerator
            {
                int Six();
            }

            public class SixGenerator : ISixGenerator
            {
                public int Six()
                {
                    return 6;
                }
            }
        }

    In Excel 2007 I would create a macro-enabled workbook and add a module with the following code:

        Public Function GetSix()
            Dim lib As SimpleLibrary.SixGenerator
            Set lib = New SimpleLibrary.SixGenerator
            GetSix = lib.Six
        End Function

    Then in Excel I could call the function GetSix() and it would return six. This no longer works in Excel 2010 64bit; I get run-time error 429: ActiveX component can't create object. I tried changing the platform target to x64 instead of Any CPU, but then my code wouldn't compile unless I unchecked the "Register for COM interop" option, and doing so means my macro-enabled workbook can no longer see SimpleLibrary.dll, as it is no longer registered. Any ideas how I can use my library with Excel 2010 64 bit?
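
    A guess worth checking, since run-time error 429 is exactly what a missing COM registration produces: 64-bit Excel reads the 64-bit registry hive, while the "Register for COM interop" checkbox registers the class with the bitness of the build environment. Registering the Any CPU assembly once with the 64-bit regasm (framework path assumed for .NET 2.0-3.5) may be all that's needed:

        %windir%\Microsoft.NET\Framework64\v2.0.50727\regasm.exe /codebase SimpleLibrary.dll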

  • Where did Pylons beautiful error handling go? Using Nginx + Paster + Flup#fcgi_thread

    - by Tony
    I need to run my development setup through nginx because of some complicated subdomain routing rules in my Pylons app that wouldn't be handled otherwise. I had been using lighttpd + paster + Flup#scgi_thread, and the nice error reporting by Pylons had been working fine in that environment. Yesterday I recompiled Python and MySQL for 64bit and also switched to nginx + paster + Flup#fcgi_thread for my development environment. Everything is working great, but I miss the fancy error reports. This is what I get now, and it is a mess compared to what I was used to: http://drp.ly/Iygeg . Here are the Pylons/nginx configs.

    Pylons:

        [server:main]
        use = egg:Flup#fcgi_thread
        host = 0.0.0.0
        port = 6500

    Nginx:

        location / {
            #include /usr/local/nginx/conf/fastcgi.conf;
            fastcgi_param PATH_INFO $fastcgi_script_name;
            fastcgi_param REQUEST_METHOD $request_method;
            fastcgi_param QUERY_STRING $query_string;
            fastcgi_param CONTENT_TYPE $content_type;
            fastcgi_param CONTENT_LENGTH $content_length;
            fastcgi_param SERVER_ADDR $server_addr;
            fastcgi_param SERVER_PORT $server_port;
            fastcgi_param SERVER_NAME $server_name;
            fastcgi_param SERVER_PROTOCOL $server_protocol;
            fastcgi_param REMOTE_ADDR $remote_addr;
            fastcgi_pass_header Authorization;
            fastcgi_intercept_errors off;
            fastcgi_pass 127.0.0.1:6500;
        }
