Search Results

Search found 353 results on 15 pages for 'recompile'.

Page 11/15

  • How do I uncompress vmlinuz to vmlinux?

    - by Lord Loh.
    I have already tried uncompress, gzip, and every other solution that comes up in Google results, and none of them have worked for me. To get just the image, search for the gzip signature - 1f 8b 08 00:
    > od -A d -t x1 vmlinuz | grep '1f 8b 08 00'
    0024576 24 26 27 00 ae 21 16 00 1f 8b 08 00 7f 2f 6b 45
    so the image begins at 24576+8 => 24584. Then just copy from that point on and decompress it:
    > dd if=vmlinuz bs=1 skip=24584 | zcat > vmlinux
    1450414+0 records in
    1450414+0 records out
    1450414 bytes (1.5 MB) copied, 6.78127 s, 214 kB/s
    I got these instructions verbatim from a forum online: http://www.codeguru.com/forum/showthread.php?t=415186 This process does not work for me and ends up giving errors stating "file not found" for 0024576 and every subsequent number. How do I proceed with extracting vmlinux from vmlinuz? Thank you. EDITED: This is a reverse-engineering question. I have no access to the distro to install any RPM or recompile. I start with nothing but vmlinuz.
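    If the shell pipeline keeps failing, the same find-the-signature-and-inflate approach can be scripted. Below is a rough C# sketch of that idea (C# is used only to keep all added examples in this listing in one language; it also runs under Mono on Linux). The file names are placeholders, and it assumes the gzip FLG byte is 00, exactly as the od search above does:

    ```csharp
    // Hypothetical sketch: scan vmlinuz for the gzip magic (1f 8b 08 00) and
    // inflate everything from that offset into vmlinux.
    using System;
    using System.IO;
    using System.IO.Compression;

    class ExtractVmlinux
    {
        static void Main()
        {
            byte[] data = File.ReadAllBytes("vmlinuz");
            byte[] magic = { 0x1F, 0x8B, 0x08, 0x00 };   // gzip header, FLG = 0 assumed

            int offset = -1;
            for (int i = 0; i <= data.Length - magic.Length; i++)
            {
                if (data[i] == magic[0] && data[i + 1] == magic[1] &&
                    data[i + 2] == magic[2] && data[i + 3] == magic[3])
                {
                    offset = i;
                    break;
                }
            }
            if (offset < 0) { Console.WriteLine("gzip signature not found"); return; }

            using (var compressed = new MemoryStream(data, offset, data.Length - offset))
            using (var gz = new GZipStream(compressed, CompressionMode.Decompress))
            using (var output = File.Create("vmlinux"))
            {
                gz.CopyTo(output);   // stops at the end of the embedded gzip member
            }
            Console.WriteLine("kernel image found at offset " + offset);
        }
    }
    ```

    Run it in the directory containing vmlinuz; it writes the inflated kernel to vmlinux without any manual offset arithmetic.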

    Read the article

  • yum fails installing php53-devel.x86_64

    - by coding_hero
    I need to recompile php on a Fedora server because I need to use the --enable-zip flag. When trying to install the devel package, I get the following message. This is after a 'yum clean all':
    yum install php53-devel.x86_64
    Loaded plugins: rhnplugin, security
    rhel-x86_64-server-5 | 1.4 kB 00:00
    rhel-x86_64-server-5/primary | 4.9 MB 00:00
    rhel-x86_64-server-5 14161/14161
    Setting up Install Process
    Resolving Dependencies
    --> Running transaction check
    ---> Package php53-devel.x86_64 0:5.3.3-13.el5_8 set to be updated
    --> Processing Dependency: php53 = 5.3.3-13.el5_8 for package: php53-devel
    --> Finished Dependency Resolution
    php53-devel-5.3.3-13.el5_8.x86_64 from rhel-x86_64-server-5 has depsolving problems
    --> Missing Dependency: php53 = 5.3.3-13.el5_8 is needed by package php53-devel-5.3.3-13.el5_8.x86_64 (rhel-x86_64-server-5)
    Error: Missing Dependency: php53 = 5.3.3-13.el5_8 is needed by package php53-devel-5.3.3-13.el5_8.x86_64 (rhel-x86_64-server-5)
    You could try using --skip-broken to work around the problem
    You could try running: package-cleanup --problems
    package-cleanup --dupes
    rpm -Va --nofiles --nodigest
    Output of 'yum repolist':
    # yum repolist
    Loaded plugins: rhnplugin, security
    repo id repo name status
    rhel-x86_64-server-5 Red Hat Enterprise Linux (v. 5 for 64-bit x86_64) enabled: 14,161
    repolist: 14,161

    Read the article

  • Conflicts with file from package mysql-5.0.77

    - by Whiteyq
    I'm trying to install APC (Alternative PHP Cache) on a CentOS dedicated server. I have everything done apart from configuring phpize. Running yum -y install php-devel gives me the following error:
    file /usr/share/mysql/charsets/Index.xml from install of mysql-libs-5.1.57-1.el5.art.x86_64 conflicts with file from package mysql-5.0.77-4.el5_5.3.i386
    (etc. etc. for the other languages). So I think the MySQL version I have is too old and I more than likely need to upgrade MySQL to version 5.1. I'm reluctant to do this because a) it's a live server (although only 3/4 domains) and b) I've read I'll need to recompile PHP if I upgrade. To add to this, I have Plesk installed for managing domains, which might need reinstalling/reconfiguring as well. Sorry for the long intro, but it's my first post and it's best to give as much info as possible. So my question is basically: is there any way I can run yum -y install php-devel to get phpize working and complete the installation of APC with the version of MySQL I currently have installed, i.e. 5.0.77?

    Read the article

  • Renaming VLAN Interfaces in Linux

    - by rhololkeolke
    I need to know how to rename VLAN interfaces. I'm currently running Ubuntu 11.04. I'm running a networking application that takes frames in on one interface, applies things like delays and errors, and then forwards the frames out another interface. The default naming convention, which names things <interface>.<vlan>, e.g. eth0.2, will not work for my purposes because the program which parses the configuration script for the networking application doesn't like the dot in the interface name. I ran vconfig set_name_type VLAN_PLUS_VID, which solves the dot-in-the-interface-name problem; however, I can then no longer assign the same VLAN id to multiple interfaces because they end up with the same name. I know how to change physical interface names using udev rules, but because the VLANs will have the same MAC address and they aren't physical interfaces, I can't use those rules to rename them. Is there a way to rename any interface in Linux, including virtual ones? Is there a way to specify your own naming convention for the vconfig set_name_type option without having to recompile the source of vconfig?

    Read the article

  • Speed up SQL Server queries with PREFETCH

    - by Akshay Deep Lamba
    Problem
    The SAN data volume has a throughput capacity of 400MB/sec; however my query is still running slow and it is waiting on I/O (PAGEIOLATCH_SH). Windows Performance Monitor shows a data volume speed of 4MB/sec. Where is the problem and how can I find it?
    Solution
    This is another summary of a great article published by R. Meyyappan at www.sqlworkshops.com. In my opinion, this is the first article that highlights and explains with working examples how PREFETCH determines the performance of a Nested Loop join. First of all, I just want to recall that Prefetch is a mechanism with which SQL Server can fire up many I/O requests in parallel for a Nested Loop join. When SQL Server executes a Nested Loop join, it may or may not enable Prefetch according to the number of rows in the outer table. If the number of rows in the outer table is greater than 25 then SQL Server will enable and use Prefetch to speed up query performance, but it will not if it is less than 25 rows. In this section we are going to see different scenarios where Prefetch is automatically enabled or disabled. These examples only use two tables, RegionalOrders and Orders. If you want to create the sample tables and sample data, please visit www.sqlworkshops.com. The breakdown of the data in the RegionalOrders table is shown below and the Orders table contains about 6 million rows.
    In this first example, I create a stored procedure against the two tables and then execute it. Before running the stored procedure, I am going to include the actual execution plan.
    --Example provided by www.sqlworkshops.com
    --Create procedure that pulls orders based on City
    --Do not forget to include the actual execution plan
    CREATE PROC RegionalOrdersProc @City CHAR(20)
    AS
    BEGIN
    DECLARE @OrderID INT, @OrderDetails CHAR(200)
    SELECT @OrderID = o.OrderID, @OrderDetails = o.OrderDetails
          FROM RegionalOrders ao INNER JOIN Orders o ON (o.OrderID = ao.OrderID)
          WHERE City = @City
    END
    GO
    SET STATISTICS time ON
    GO
    --Example provided by www.sqlworkshops.com
    --Execute the procedure with parameter SmallCity1
    EXEC RegionalOrdersProc 'SmallCity1'
    GO
    After running the stored procedure, if we right click on the Clustered Index Scan and click Properties we can see the Estimated Number of Rows is 24. If we right click on Nested Loops and click Properties we do not see Prefetch, because it is disabled. This behavior was expected, because the number of rows containing the value 'SmallCity1' in the outer table is less than 25. Now, if I run the same procedure with parameter 'BigCity', will Prefetch be enabled?
    --Example provided by www.sqlworkshops.com
    --Execute the procedure with parameter BigCity
    --We are using cached plan
    EXEC RegionalOrdersProc 'BigCity'
    GO
    As we can see from the screenshot, Prefetch is not enabled and the query takes around 7 seconds to execute. This is because the query used the cached plan from 'SmallCity1' that had Prefetch disabled. Please note that even though we have 999 rows for 'BigCity', the Estimated Number of Rows is still 24. Finally, let's clear the procedure cache to trigger a new optimization and execute the procedure again.
    DBCC freeproccache
    GO
    EXEC RegionalOrdersProc 'BigCity'
    GO
    This time our procedure runs under a second, Prefetch is enabled and the Estimated Number of Rows is 999. The RegionalOrdersProc can be optimized by using the example below, where we use an optimizer hint. I have also shown some other hints that could be used as well.
    --Example provided by www.sqlworkshops.com
    --You can fix the issue by using any of the following hints
    --Create procedure that pulls orders based on City
    DROP PROC RegionalOrdersProc
    GO
    CREATE PROC RegionalOrdersProc @City CHAR(20)
    AS
    BEGIN
    DECLARE @OrderID INT, @OrderDetails CHAR(200)
    SELECT @OrderID = o.OrderID, @OrderDetails = o.OrderDetails
          FROM RegionalOrders ao INNER JOIN Orders o ON (o.OrderID = ao.OrderID)
          WHERE City = @City
          --Hinting optimizer to use SmallCity2 for estimation
          OPTION (optimize FOR (@City = 'SmallCity2'))
          --Hinting optimizer to estimate for the current parameters
          --option (recompile)
          --Hinting optimizer not to use the histogram but rather
          --density for estimation (average of all 3 cities)
          --option (optimize for (@City UNKNOWN))
          --option (optimize for UNKNOWN)
    END
    GO
    In conclusion, this tip was mainly aimed at illustrating how Prefetch can speed up query execution and how a different number of rows can trigger this.

    Read the article

  • Getting Started with NASM

    - by MarkPearl
    Today I got to play with NASM. This is an assembler and disassembler that can be used to write 16-bit, 32-bit & 64-bit programs. Let me say upfront that the last time I looked at assembly code in any depth was when I was studying Computer Science in Pietermaritzburg – ten years ago – and we never got to touch any real assembly code, so a lot of what I am looking at today is very new to me. The first thing I did was download the NASM compiler. This turned out to be a bit more complicated than I thought. Originally I went to http://www.nasm.us/ and downloaded the nasm-2.09.04.zip file, which I thought had all I needed. No luck! It seemed to just have the uncompiled code, and from what I could tell I would need to recompile and build it – possibly in C++? Well, I wasn't going to waste my time with that, so a bit more searching and I found the Win32 folder (http://www.nasm.us/pub/nasm/releasebuilds/2.09.04/win32/) with nasm.exe, which I downloaded.
    Choosing an IDE
    So, I have the NASM compiler, but to compile anything you need to pass a string of special characters on the command prompt. That's fine if I were going to do just one program once every couple of years, but since I am aiming to do quite a bit more exploration of NASM I began searching for an IDE. There were a few options; apparently even Visual Studio with a bit of tweaking could do the job, but from past experience I wanted to avoid the VS route as it can sometimes get confusing. I eventually settled on TextPad, which I had used a few years ago for a similar project and which had been simple enough yet powerful enough to do the job. A bit of searching and I found a syntax file for NASM and everything seemed hunky dory.
    Configuring TextPad to run the NASM Compiler
    Next was to get TextPad to run the NASM compiler. TextPad has an external tools option that allows one to configure special commands. To simplify the process I first created a bat file in the NASM directory that allowed me to simply compile asm files. The bat file was called as.bat and had just one line of code…
    nasm -f bin %1.asm -o %1.com -l %1.lst
    Once I had created as.bat I just needed to go into TextPad and create a tool. I have made a quick video of that just showing you where the various settings are, which is viewable below.
    The 64Bit Problem
    So I now have an 'IDE' linked to my NASM compiler, so everything should be fine, right? No! Whenever I try to compile an asm program it compiles fine, but when I try to run it I get an error – "This version of the file is not compatible with the version of Windows you're running. Check your computer's system information to see whether you need an x86 (32-bit) or x64 (64-bit) version of the program, and then contact the software publisher." Well… it turns out there are a few complications with having a 64 bit OS! So after searching Google and not coming to any real solution, other than perhaps attempting to build the code for nasm, I eventually resorted to running a VM with Windows XP on it and putting NASM there…
    My first hello world program
    So I attempted my first hello world program as per an example I found… the code was quite simple and is shown below…
    bits 16
    org 0x100
    jmp main
    message: db 'Hello World',0ah,0dh,'$'
    main:
    mov dx,message
    mov ah,09
    int 21h
    int 20h
    Running the build tool from TextPad, everything compiles fine and I now have a console app with hello world shown.
    Conclusion
    It's very early days with NASM. I have been spoilt with Visual Studio and higher-level languages, so I assume it will be a painful ride getting into the basics of assembly programming, but I am hoping that at the end of it I will at least have a bit more exposure to a language closer to the metal.

    Read the article

  • Introducing the Oracle Linux Playground yum repo

    - by wcoekaer
    We just introduced a new yum repository/channel on http://public-yum.oracle.com called the playground channel. What we started doing is the following: when a new stable mainline kernel is released by Linus or GregKH, we internally build RPMs to test it and do some QA work around it to keep track of what's going on with the latest development kernels. It helps us understand how performance moves up or down and, if there are issues, we try to help look into them and of course send that stuff back upstream. Many Linux users out there are interested in trying out the latest features, but there are some potential barriers to doing this. (1) In general, you are looking at an upstream development distribution, which means that everything changes, both in userspace (random applications) and the kernel. Projects like Fedora are very useful, and for someone who just wants to see how the entire distribution evolves with all the changes, this is a great way to stay current. A drawback here, though, is that if you have applications that are not part of the distribution, there's a lot of manual work involved, or they might just not work because the changes are too drastic. The introduction of systemd is a good example. (2) When you look at many of our customers that are interested in our database products or applications, the starting point of having a supported/certified userspace/distribution, like Oracle Linux, is a much easier way to get your feet wet in seeing what new/future Linux kernel enhancements could do. This is where the playground channel comes into play. When you install Oracle Linux 6 (which anyone can download and use from http://edelivery.oracle.com/linux), grab the latest public yum repository file http://public-yum.oracle.com/public-yum-ol6.repo, put it in /etc/yum.repos.d and enable the playground repo:
    [ol6_playground_latest]
    name=Latest mainline stable kernel for Oracle Linux 6 ($basearch) - Unsupported
    baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/playground/latest/$basearch/
    gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
    gpgcheck=1
    enabled=1
    Now all you need to do is type yum update and you will be downloading the latest stable kernel, which will install cleanly on Oracle Linux 6. Thus you end up with a stable Linux distribution where you can install all your software, and then download the latest stable kernel (at the time of writing this is 3.6.7) without having to recompile a kernel and without having to jump through hoops. There is of course a big, very important disclaimer: this is NOT for PRODUCTION use. We want to help make it easy for people that are interested, from a user perspective, in where the Linux kernel is going, and make it easy to install and use it and play around with new features, without having to learn how to compile a kernel and without necessarily having to install a completely new distribution with all the changes top to bottom. So we don't and won't introduce any new userspace changes; this project really is about making it easy to try out the latest upstream Linux kernels on an environment that's stable and that you can keep current, since all the latest errata for Oracle Linux 6 are published on the public yum repo as well. So: one repository location for all your current changes and the upstream kernels. We hope that this will get more users to try out the latest kernel and report their findings. We are always interested in understanding stability and performance characteristics.
    As new features go into the mainline kernel that could potentially be interesting or useful for various products, we will try to point them out on our blogs and give an example of how something can be used so you can try it out for yourselves. Anyway, I hope people will find this useful and that it will help increase interest in upstream development, beyond reading lkml, among some of the more non-kernel-developer types.

    Read the article

  • LinkBuilder.BuildUrlFromExpression not working anymore in .Net 4 / VS 2010 ?

    - by Mose
    Hi, I recently migrated my ASP.NET MVC 1 application from VS.NET 2008 / C# 3.5 to VS.NET 2010 / C# 4.0. I made heavy use of a builder to get URL strings from strongly typed calls. It looks like this:
    // sample call:
    string toSamplePage = Url.To<SampleController>(c => c.Page(parameter1, parameter2));
    The code is added as an extension to the default UrlHelper:
    public static string To<Tcontroller>(this UrlHelper helper, Expression<Action<Tcontroller>> action) where Tcontroller : Controller
    {
        // based on Microsoft.Web.Mvc.dll LinkBuilder
        return LinkBuilder.BuildUrlFromExpression<Tcontroller>(helper.RequestContext, helper.RouteCollection, action);
    }
    The only problem with this is the reference to the Microsoft.Web.Mvc dll, but the gain in readability was worth it. Problem: it does not work anymore; it returns null whatever the parameters are. Questions: Is there a better way now to build links from an expression? (Yes, I tried to Google it, without success.) Is there a trick to make the former LinkBuilder.BuildUrlFromExpression work? I tried to recompile it under C# 4.0, but the problem is that it implies working on my own compiled version of System.Web.Mvc, which is not an option. I'm currently trying to migrate to MVC 2 but I still have issues... Waiting for your suggestions...

    Read the article

  • How to resolve fatal error LNK1000: Internal error during IncrBuildImage?

    - by Roman Kagan
    I am trying to recompile the solution file for the memcached project on Windows 7 64-bit with Visual Studio 2008 and I get the following error:
    1>LINK : fatal error LNK1000: Internal error during IncrBuildImage
    1>  Version 9.00.21022.08
    1>  ExceptionCode            = C0000005
    1>  ExceptionFlags           = 00000000
    1>  ExceptionAddress         = 001FFCF7 (00180000) "C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\bin\link.exe"
    1>  NumberParameters         = 00000002
    1>  ExceptionInformation[ 0] = 00000000
    1>  ExceptionInformation[ 1] = 011BD670
    1>CONTEXT:
    1>  Eax = 400DA210  Esp    = 0014EADC
    1>  Ebx = 4000815C  Ebp    = 0014EB04
    1>  Ecx = 011BD670  Esi    = 400DA098
    1>  Edx = 0014EAF4  Edi    = 0018D6C0
    1>  Eip = 001FFCF7  EFlags = 00010246
    1>  SegCs = 00000023  SegDs = 0000002B
    1>  SegSs = 0000002B  SegEs = 0000002B
    1>  SegFs = 00000053  SegGs = 0000002B
    1>  Dr0 = 00000000  Dr3 = 00000000
    1>  Dr1 = 00000000  Dr6 = 00000000
    1>  Dr2 = 00000000  Dr7 = 00000000

    Read the article

  • How do you clean Core Data generated models and code from a project?

    - by Hazmit
    I'm having an extremely annoying problem with Core Data in the iPhone SDK. I would say that, in general, Core Data for the most part appears easy to use and nice to implement. I have a sqlite database that is being used as a read-only reference to pull data elements out for an iPhone app. It would seem there are really mysterious issues relating to what seems to be migration of the database to the most recent version of my schema. Why can't you just clean out your stored objects and models and let a project redo all of it when you compile next? You would think that if you set up a stored object model there would be a way to just reset it and recompile. I've tried what feels like a thousand 'tips' that have been the result of hours of Google searches and documentation prowling to figure out how to do this. My most recent error is below.
    2010-04-07 18:23:51.891 PE[1962:207] *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: 'Can't merge models with two different entities named 'PElement''
    All of this code has been working in the simulator and is only causing me trouble now because I made a change to the schema. I also have the database options for automatic migration set as below.
    NSMutableDictionary *optionsDictionary = [NSMutableDictionary dictionary];
    [optionsDictionary setObject:[NSNumber numberWithBool:YES] forKey:NSMigratePersistentStoresAutomaticallyOption];
    [optionsDictionary setObject:[NSNumber numberWithBool:YES] forKey:NSInferMappingModelAutomaticallyOption];

    Read the article

  • A layout for maven project with a patched dependency

    - by zamza
    Suppose I have an open-source project that depends on some library that must be patched in order to fix some issues. How do I do that? My ideas are:
    1. Have the library sources set up as a module and keep them in my VCS. Pros: simple. Cons: third-party sources in my repo, might slow down the build process, hard to find a patched place (though that can be fixed in a README).
    2. Have a module, like in 1, but keep only the patched source files, compile them with the original library jar on the classpath, and somehow replace the *.class files in the library jar at build time. Pros: builds faster, easy to find the patched places. Cons: hard to configure, and the jar hackery is non-obvious (the library jar in the repository and in my project assembly would be different).
    3. Keep the patched *.class files in main/resources and replace them on packaging, like in 2. Pros: almost none. Cons: binaries in the VCS, and it is hard to recompile a patched class since patch compilation is not automated.
    One nice solution is to create a distinct project with the patched library sources and deploy it to a local/enterprise repository with a -patched qualifier. But that would not fit an open-source project that is meant to be easily buildable by anyone who checks out its sources. Or should I just say "and also, before you build my project, please check out that stuff and run mvn install"?

    Read the article

  • Unable to change the value of the variable

    - by Legend
    I'm using a discrete event simulator called ns-2 that was built using Tcl and C++. I was trying to write some code in Tcl:
    set ns [new Simulator]
    set state 0
    $ns at 0.0 "puts \"At 0.0 value of state is: $state\""
    $ns at 1.0 "changeVal"
    $ns at 2.0 "puts \"At 2.0 values of state is: $state\""
    proc changeVal {} {
      global state
      global ns
      $ns at-now "set state [expr $state+1]"
      puts "Changed value of state to $state"
    }
    $ns run
    Here's the output:
    At 0.0 value of state is: 0
    Changed value of state to 0
    At 2.0 values of state is: 0
    The value of state does not seem to change. I am not sure if I am doing something wrong in using Tcl. Does anyone have an idea as to what might be going wrong here? EDIT: Thanks for the help. Actually, ns-2 is something over which I do not have much control (unless I recompile the simulator itself). I tried out the suggestions and here's the output. For the code:
    set ns [new Simulator]
    set state 0
    $ns at 0.0 "puts \"At 0.0 value of state is: $state\""
    $ns at 1.0 "changeVal"
    $ns at 9.0 "puts \"At 2.0 values of state is: $state\""
    proc changeVal {} {
      global ns
      set ::state [expr {$::state+1}]
      $ns at-now "puts \"At [$ns now] changed value of state to $::state\""
    }
    $ns run
    the output is:
    At 0.0 value of state is: 0
    At 1 changed value of state to 1
    At 2.0 values of state is: 0
    And for the code:
    set ns [new Simulator]
    set state 0
    $ns at 0.0 "puts \"At 0.0 value of state is: $state\""
    $ns at 1.0 "changeVal"
    $ns at 9.0 "puts \"At 2.0 values of state is: $state\""
    proc changeVal {} {
      global ns
      set ::state [expr {$::state+1}]
      $ns at 1.0 {puts "At 1.0 values of state is: $::state"}
    }
    $ns run
    the output is:
    At 0.0 value of state is: 0
    At 1.0 values of state is: 1
    At 2.0 values of state is: 0
    Doesn't seem to work... Not sure if it's a problem with ns-2 or my code...

    Read the article

  • Is it possible to use Indy 10.5.8.0 with Delphi XE?

    - by jachguate
    The case: I'm trying to update Indy to the latest version for my Delphi XE (Update 1), so I downloaded the latest Indy 10 file (Indy_4545.zip) from indy.fulgan.com/ZIP. The packages compile successfully and I can now even see the new version 10.5.8.0 in the about box dialog, but after an IDE restart I got a message saying: "No se encuentra el punto de entrada del procedimiento @Idhttp@TIdCustomHTTP@GetRequestHeaders$qqrv en la biblioteca de vínculos dinámicos IndyProtocols150.bpl." My free translation to English: the entry point for procedure @Idhttp@TIdCustomHTTP@GetRequestHeaders$qqrv was not found in the dynamic link library IndyProtocols150.bpl. After a quick comparison of the old and new IdHTTP.pas I found a lot of changes to the TIdCustomHTTP class, including the renaming of some methods: GetResponseHeaders to GetResponse, GetRequestHeaders to GetRequest, SetRequestHeaders to SetRequest, along with changed public/published method signatures in this and other classes and interfaces. After the update, I got a lot of packages failing to load, including dclcxPivotGridOLAPD15.bpl, which in turn depends on dclDataSnapServer150.bpl, which encounters the missing method in the bpl. AFAIK I can't recompile dclDataSnapServer150.bpl (and maybe other failing packages; I just stopped there). DataSnap and DevExpress support in the IDE is a must for my day-to-day work, so the questions: Is there a safe, pre-established path to update to the newest Indy for Delphi XE? If not, am I on the safe side by just patching the source code, creating the old public methods and calling the new ones in the implementation part? Am I missing something else, or am I really stuck with Indy 10.5.7 until the next Delphi minor/major release?

    Read the article

  • Oracle JDBC intermittent Connection Issue

    - by Lipska
    I am experiencing a very strange problem. This is a very simple use of JDBC connecting to an Oracle database.
    OS: Ubuntu
    Java Version: 1.5.0_16-b02 1.6.0_17-b04
    Database: Oracle 11g Release 11.1.0.6.0
    When I make use of the jar file ojdbc14.jar it connects to the database every time. When I make use of the jar file ojdbc5.jar it connects some times and other times it throws an error (shown below). If I recompile with Java 6 and use ojdbc6.jar I get the same results as with ojdbc5.jar. I need specific features in ojdbc5.jar that are not available in ojdbc14.jar. Any ideas?
    Error
    Connecting to oracle
    java.sql.SQLException: Io exception: Connection reset
        at oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:74)
        at oracle.jdbc.driver.DatabaseError.newSQLException(DatabaseError.java:110)
        at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:171)
        at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:227)
        at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:494)
        at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:411)
        at oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:490)
        at oracle.jdbc.driver.T4CConnection.<init>(T4CConnection.java:202)
        at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:33)
        at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:474)
        at java.sql.DriverManager.getConnection(DriverManager.java:525)
        at java.sql.DriverManager.getConnection(DriverManager.java:171)
        at TestConnect.main(TestConnect.java:13)
    Code
    Below is the code I am using:
    import java.io.*;
    import java.sql.*;
    public class TestConnect {
        public static void main(String[] args) {
            try {
                System.out.println("Connecting to oracle");
                Connection con=null;
                Class.forName("oracle.jdbc.driver.OracleDriver");
                con=DriverManager.getConnection("jdbc:oracle:thin:@172.16.48.100:1535:sample", "JOHN", "90009000");
                System.out.println("Connected to oracle");
                con.close();
                System.out.println("Goodbye");
            } catch(Exception e){e.printStackTrace();}
        }
    }

    Read the article

  • Strange VS2005 compile errors: unable to locate resource file (because the compiler keeps deleting it)

    - by Velika
    I am getting the following error in a very simple class library:
    Error 1 Unable to copy file "obj\Debug\SMIT.SysAdmin.BusinessLayer.Resources.resources" to "obj\Debug\SMIT.SysAdmin.BusinessLayer.SMIT.SysAdmin.BusinessLayer.Resources.resources". Could not find file 'obj\Debug\SMIT.SysAdmin.BusinessLayer.Resources.resources'. SMIT.SysAdmin.BusinessLayer
    Going to the Project Properties > Resources tab, I see that I defined no resources. Still, I tried to delete the resource file and recreate it by going to the Resources tab. When I recompile, I still get the same error. Why is it even looking for a resource file? I define no resources on the project properties tab and added no new resource file items. Any suggestions of things to try? Update: I found the missing file in an old backup. I copied it to the location where the compiler expected it, and then successfully recompiled the project that previously had compile-time errors. However, when I rebuild the entire solution, it deletes the file that I previously restored and I'm back to where I started. My solution contains several projects (maybe 10 or so). Could VS 2005 be having a problem determining dependencies and the proper order in which to compile these projects?

    Read the article

  • Weird error running com-exposed assembly

    - by Bernabé Panarello
    I am facing the following issue when deploying a COM-exposed assembly to my client's machines. The COM component should be consumed by a VB6 application. Here's how it's done:
    1) I have one C# project which has a class with a couple of methods exposed to COM.
    2) The project has references to multiple assemblies.
    3) I compile the project, generating a folder (named dllcom) that contains the assembly plus all the referenced dlls.
    4) I include in the folder a .bat which does the following:
    regasm /u c:\dllcom\LibInsertador.dll
    del LibInsertador.tlb
    regasm c:\dllcom\LibInsertador.dll /tlb:c:\dllcom\LibInsertador.tlb /codebase c:\dllcom\
    pause
    5) After running the bat locally on many workstations in my laboratory, I'm able to consume the generated tlb from my VB6 application without any problems. I'm even able to update the dll just by running this bat, without having to recompile the VB6 application. I mean that I'm not having issues with VB6 finding and invoking the exposed COM object.
    The problem:
    6) I send the SAME FOLDER to my client.
    7) They execute the .bat locally, without any errors.
    8) They execute the VB6 application; VB6 finds the main assembly, and the .NET code seems to run correctly (it's even able to generate a log file) until it has to instantiate its first referenced assembly. Then they get the following exception: "Could not load type 'GYF.Common.TypeBuilder' from assembly 'GYF_Common, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null'." Here "GYF.Common" is an assembly referenced by LibInsertador, and TypeBuilder is a class contained in GYF.Common. GYF.Common is not a signed assembly and it's not in the GAC, just in the same folder as LibInsertador. According to .NET Reflector, the version is correct. Any ideas about what could be happening?
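    (For context on step 1, the sketch below shows roughly what such a COM-exposed C# class can look like; the class name, method names and GUID are placeholders for illustration, not the poster's real ones.)

    ```csharp
    using System;
    using System.Runtime.InteropServices;

    // Hypothetical example of a class exposed to COM for a VB6 caller.
    [ComVisible(true)]
    [Guid("11111111-2222-3333-4444-555555555555")]
    [ClassInterface(ClassInterfaceType.AutoDual)]   // simplest option for VB6 early binding
    public class Insertador
    {
        // Callable from VB6 once the assembly has been registered with regasm /tlb.
        public string Saludo()
        {
            return "hola";
        }

        public int Sumar(int a, int b)
        {
            return a + b;
        }
    }
    ```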

    Read the article

  • How to eliminate Unhandled Exception dialog produced by 3rd party application

    - by Tappen
    I'm working with a 3rd party executable that I can't recompile (the vendor is no longer available). It was originally written under .NET 1.1 but seems to work fine under later versions as well. I launch it using Process.Start from my own application (I've tried P/Invoking CreateProcess as well with the same results, so that's not relevant). Unfortunately this 3rd party app now throws an unhandled exception as it exits. The Microsoft dialog box has a title like "Exception thrown from v2.0 ... Broadcast Window", with the version number relating to the version of .NET it's running under (I can use a .exe.config file to target different .NET versions; it doesn't help). The unhandled exception dialog box on exit doesn't cause any real problems, but it is troubling to my users, who have to click OK to dismiss it every time. Is there any way (a config file option perhaps) to stop this dialog from showing for an app I don't have the source code to? I've considered loading it in a new AppDomain, which would give me access to the UnhandledException event, but there's no indication I could change the appearance of the dialog. Maybe someone knows what causes the exception and I can fix this some other way?

    Read the article

  • Reasons of getting a java.lang.VerifyError

    - by JeroenWyseur
    I'm investigating the following java.lang.VerifyError:
    java.lang.VerifyError: (class: be/post/ehr/wfm/application/serviceorganization/report/DisplayReportServlet, method: getMonthData signature: (IILjava/util/Collection;Ljava/util/Collection;Ljava/util/HashMap;Ljava/util/Collection;Ljava/util/Locale;Lorg/apache/struts/util/MessageRe˜̴MtÌ´MÚw€mçw€mp:”MŒŒ
        at java.lang.Class.getDeclaredConstructors0(Native Method)
        at java.lang.Class.privateGetDeclaredConstructors(Class.java:2357)
        at java.lang.Class.getConstructor0(Class.java:2671)
    It occurs when the JBoss server in which the servlet is deployed is started. It is compiled with jdk-1.5.0_11 and I tried to recompile it with jdk-1.5.0_15 without success. That is, the compilation runs fine, but when deployed the java.lang.VerifyError occurs. When I changed the method name, I got the following error:
    java.lang.VerifyError: (class: be/post/ehr/wfm/application/serviceorganization/report/DisplayReportServlet, method: getMD signature: (IILjava/util/Collection;Ljava/util/Collection;Ljava/util/HashMap;Ljava/util/Collection;Ljava/util/Locale;Lorg/apache/struts/util/MessageResources-á+ÿ+àN|+ÿ+àN+Üw-Çm+ºw-ÇmX#+ûM|X+öM
        at java.lang.Class.getDeclaredConstructors0(Native Method)
        at java.lang.Class.privateGetDeclaredConstructors(Class.java:2357)
        at java.lang.Class.getConstructor0(Class.java:2671)
        at java.lang.Class.newInstance0(Class.java:321)
        at java.lang.Class.newInstance(Class.java:303)
    You can see that more of the method signature is shown. The actual method signature is
    private PgasePdfTable getMonthData(int month, int year, Collection dayTypes, Collection calendarDays, HashMap bcSpecialDays, Collection activityPeriods, Locale locale, MessageResources resources) throws Exception {
    I already tried looking at it with javap, and that gives the method signature as it should be. When my other colleagues check out the code, compile it and deploy it, they have the same problem. When the build server picks up the code and deploys it to the development or testing environments (HP-UX), the same error occurs. Also, an automated testing machine running Ubuntu shows the same error during server startup. The rest of the application runs OK; only that one servlet is out of order. Any ideas where to look would be helpful.

    Read the article

  • Repackaging Jasper-Reports into an application specific OSGi bundle, legal or not?

    - by Chris
    Hi, I wanted to ask a (probably silly) question regarding the packaging of existing open-source components as OSGi bundles (more specifically, Jasper Reports). I have an application that I am converting from a monolithic jar-hell type architecture to something more modular, and OSGi is my weapon of choice. There are various modules I have in mind, but one of them is a reporting module. My own reporting module will be a jar file containing my code, which should reference a Jasper Reports bundle. Trouble is, Jasper Reports depends on far, far too many libraries and is quite monolithic in its own right. I therefore wish to build my own Jasper Reports bundle, but this is where I start getting confused about the legality of repackaging. I don't plan to re-compile, but I do plan to re-bundle, removing known items that I do not require. Can anyone offer advice on whether I am permitted to repackage (not recompile or extend) open-source libraries into OSGi bundles without falling foul of the 'derivative works' clause of the LGPL? I noticed that Groovy seems to offer some monolithic jars that include all dependencies, and it actually goes so far as to re-arrange the packages of its dependencies so that there are no namespace conflicts. This seems to me to be a violation of the license, but if anyone can reassure me that this is legal then I would feel safer about my less intrusive custom bundling of Jasper Reports. Thanks for your time, Chris

    Read the article

  • Guaranteed way to find the ildasm.exe and ilasm.exe files regardless of .NET version/environment?

    - by m-y
    Is there a way to programmatically get the FileInfo/path of the ildasm.exe and ilasm.exe executables? I'm attempting to decompile and recompile a dll/exe file after making some alterations to it (I'm guessing PostSharp does something similar to alter the IL after compilation). I found a blog post that pointed to:
    var pfDir = Environment.GetFolderPath(Environment.SpecialFolder.ProgramFiles);
    var sdkDir = Path.Combine(pfDir, @"Microsoft SDKs\Windows\v6.0A\bin");
    ...
    However, when I ran this code the directory did not exist (mainly because my SDK version is 7.1), so on my local machine the correct path is @"Microsoft SDKs\Windows\v7.1\bin". How do I ensure I can actually find ildasm.exe? Similarly, I found another blog post on how to get access to ilasm.exe:
    string windows = Environment.GetFolderPath(Environment.SpecialFolder.System);
    string fwork = Path.Combine(windows, @"..\Microsoft.NET\Framework\v2.0.50727");
    ...
    While this works, I noticed that I have Framework and Framework64, and within Framework itself I have all of the versions up to v4.0.30319 (same with Framework64). So how do I know which one to use? Should it be based on the .NET Framework version I'm targeting? Summary: How do I reliably find the correct path to ildasm.exe? How do I select the correct ilasm.exe to compile with?
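    One hedged sketch of an answer (assumptions: the Windows SDK lives under "Microsoft SDKs\Windows" beneath Program Files, and the ilasm.exe you want is the one shipped with the CLR the code is running on):

    ```csharp
    using System;
    using System.IO;
    using System.Linq;
    using System.Runtime.InteropServices;

    static class ToolLocator
    {
        // ilasm.exe ships with the .NET Framework itself, so the runtime directory
        // of the current CLR is a reliable place to take it from.
        public static string FindIlasm()
        {
            return Path.Combine(RuntimeEnvironment.GetRuntimeDirectory(), "ilasm.exe");
        }

        // ildasm.exe ships with the Windows SDK; probe every installed SDK version
        // instead of hard-coding v6.0A or v7.1. Returns null if nothing is found.
        public static string FindIldasm()
        {
            // Note: on 64-bit Windows you may also need SpecialFolder.ProgramFilesX86,
            // since the SDKs install under Program Files (x86).
            string programFiles = Environment.GetFolderPath(Environment.SpecialFolder.ProgramFiles);
            string sdkRoot = Path.Combine(programFiles, @"Microsoft SDKs\Windows");
            if (!Directory.Exists(sdkRoot)) return null;

            return Directory.GetDirectories(sdkRoot)
                            .OrderByDescending(d => d)   // prefer the newest-looking SDK folder
                            .SelectMany(d => Directory.GetFiles(d, "ildasm.exe", SearchOption.AllDirectories))
                            .FirstOrDefault();
        }
    }
    ```

    Usage would be something like string ildasm = ToolLocator.FindIldasm();, falling back to an explicit path if it returns null. Using the runtime directory for ilasm.exe also answers the Framework vs Framework64 question: you get the copy matching the CLR and bitness you are actually running (or, if you re-target, the framework version you pass to your build).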

    Read the article

  • Is there an easy way to get the Scala REPL to reload a class or package?

    - by Rex Kerr
    I almost always have a Scala REPL session or two open, which makes it very easy to give Java or Scala classes a quick test. But if I change a class and recompile it, the REPL continues with the old one loaded. Is there a way to get it to reload the class, rather than having to restart the REPL? Just to give a concrete example, suppose we have the file Test.scala:
    object Test { def hello = "Hello World" }
    We compile it and start the REPL:
    ~/pkg/scala-2.8.0.Beta1-prerelease$ bin/scala
    Welcome to Scala version 2.8.0.Beta1-prerelease (Java HotSpot(TM) Server VM, Java 1.6.0_16).
    Type in expressions to have them evaluated.
    Type :help for more information.
    scala> Test.hello
    res0: java.lang.String = Hello World
    Then we change the source file to
    object Test {
      def hello = "Hello World"
      def goodbye = "Goodbye, Cruel World"
    }
    but we can't use it:
    scala> Test.goodbye
    <console>:5: error: value goodbye is not a member of object Test
           Test.goodbye
                ^
    scala> import Test;
    <console>:1: error: '.' expected but ';' found.
           import Test;

    Read the article

  • How to compile a C DLL for 64 bit with Visual Studio 2010?

    - by Daren Thomas
    I have a DLL written in C in source code. This is the code for the General Polygon Clipper (in case you are interested). I'm using it in a C# project via the C# wrapper provided on the homepage. This comes with a precompiled DLL. Since switching to a 64bit Development machine with Visual Studio 2010 and Windows 7 64 bit, the application won't run anymore. This is the error I get: An attempt was made to load a program with an incorrect format. This is because of DLLImporting the 32bit gpc.dll, as I have gathered from stuff found on the web. I assume this will all go away if I recompile the DLL to 64bit, but can't for the love of me figure out how to do so. My C skills are basic, in that I can write a C program with the GNU tools, but have no experience with various compilers / processors / IDEs etc. I believe I could port this to C#. By that I mean I trust myself to actually pull it off. But I'd prefer not to, since it is a lot of work that I'd prefer a compiler to do for me ;)
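    If recompiling GPC for x64 proves painful, the usual quick fix is to set the C# project's Platform Target to x86 so the process stays 32-bit and can keep loading the existing gpc.dll. If you do eventually end up with both 32-bit and 64-bit builds of the native DLL, a sketch like the one below (the x86/x64 folder layout and the SetDllDirectory call are my assumption, not part of the GPC wrapper) lets a single AnyCPU assembly pick the matching copy at runtime:

    ```csharp
    using System;
    using System.IO;
    using System.Runtime.InteropServices;

    static class GpcLoader
    {
        [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
        [return: MarshalAs(UnmanagedType.Bool)]
        static extern bool SetDllDirectory(string lpPathName);

        // Call once at startup, before the first gpc.dll P/Invoke, so Windows
        // resolves "gpc.dll" from the subfolder matching the process bitness.
        public static void UseMatchingNativeDll()
        {
            string baseDir = AppDomain.CurrentDomain.BaseDirectory;
            string arch = Environment.Is64BitProcess ? "x64" : "x86";   // requires .NET 4
            SetDllDirectory(Path.Combine(baseDir, arch));
        }
    }
    ```

    The assumption here is that you ship bin\x86\gpc.dll and bin\x64\gpc.dll next to the executable; the existing DllImport("gpc.dll") declarations in the wrapper stay unchanged.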

    Read the article

  • WPF not applying default styles defined in MergedDictionaries?

    - by Burgberger
    In a WPF application I defined default control styles in separate resource dictionaries (e.g. "ButtonStyle.xaml") and added them as merged dictionaries to a resource dictionary named "ResDictionary.xaml". If I reference this "ResDictionary.xaml" as a merged dictionary in my App.xaml, the default styles are not applied. However, if I reference "ButtonStyle.xaml" directly, it works correctly. If I recompile the same code against .NET 3.5 or 3.0, it recognizes and applies the default styles referenced in "App.xaml" through "ResDictionary.xaml", but not in .NET 4.0. At runtime, if I check the Application.Current.Resources dictionary, the default styles are there, but they are not applied unless I specify the Style property explicitly on the Button control. Are there any solutions for referencing a resource dictionary (containing default styles) this way in .NET 4.0?
    App.xaml:
    <Application.Resources>
        <ResourceDictionary>
            <ResourceDictionary.MergedDictionaries>
                <ResourceDictionary Source="Styles/ResDictionary.xaml"/>
            </ResourceDictionary.MergedDictionaries>
        </ResourceDictionary>
    </Application.Resources>
    ResDictionary.xaml:
    <ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
        <ResourceDictionary.MergedDictionaries>
            <ResourceDictionary Source="Default/ButtonStyle.xaml"/>
        </ResourceDictionary.MergedDictionaries>
    </ResourceDictionary>
    ButtonStyle.xaml:
    <ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
        <Style TargetType="Button">
            <Setter Property="Background" Value="Yellow"/>
        </Style>
    </ResourceDictionary>
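    Building on the observation above that referencing "ButtonStyle.xaml" directly does work, one workaround to experiment with is merging the leaf dictionaries into the application resources from code-behind at startup. This is only a sketch of that idea, not a documented fix for the .NET 4.0 behaviour:

    ```csharp
    // App.xaml.cs - sketch: merge the style dictionaries directly,
    // bypassing the intermediate ResDictionary.xaml that .NET 4.0 appears to skip.
    using System;
    using System.Windows;

    public partial class App : Application
    {
        protected override void OnStartup(StartupEventArgs e)
        {
            base.OnStartup(e);

            var buttonStyles = new ResourceDictionary
            {
                // Relative pack URI to the dictionary inside this assembly.
                Source = new Uri("/Styles/Default/ButtonStyle.xaml", UriKind.Relative)
            };
            Resources.MergedDictionaries.Add(buttonStyles);
        }
    }
    ```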

    Read the article

  • Eclipse does not refresh project files in package explorer view

    - by EugeneP
    Today I see a strange behaviour of Eclipse 3.5.2 for the first time in 3 months. First, when I run a main function, it runs a previously compiled version. Let's say I press Ctrl+F11 in the window with an open java class and existing main function. Usually it rebuilds the class and runs a new version. Today even if there was a compile mistake, it would run fine. So I guess it does not recompile the class. Next, more strangely, if I intentionally make a mistake in the code and Eclipse underlines those lines in red, still the project Explorer does not mark them as containing errors. They remain of grey color if there were not any errors. First I did not know how to solve this problem. I tried to reopen the project, restart Eclipse and finally reboot the OS. After the tenth attempt, after rebooting, Eclipse said that all project's files are "OUT OF SYNC with the file system". When I pressed "Refresh" - F5 on a project's header name in Project Explorer it finally marked all the files with errors as containing errors and running the main function gave the desired result. An hour of my work passed and this happened again , with the other project. All the same. No marking of files as red, running no matter what old version of class with no compile errors. And since Eclipse does not tell that files are out of sync, simply pressing F5 on a project cannot help. What can you suggest?

    Read the article

  • Migration of .NET COM object to 64 bit.

    - by Victor Ronin
    Hi, we have a C++ application which uses several COM objects. The COM objects are .NET-based (using COM interop). I need to migrate the application to 64-bit; I specifically need the C++ application to be 64-bit. I don't want to recompile all of the .NET COM objects to 64-bit and deliver two sets of DLLs (32-bit and 64-bit). I investigated and found that I can load the 32-bit COM DLLs in a 32-bit surrogate process (using DllSurrogate in the registry). I know how to do that, but it means that all the COM objects become out-of-process. In the C++ code I had:
    CoCreateInstance(CLSID_SomeClass, NULL, CLSCTX_INPROC_SERVER, IID_SomeInterface, (void**)&pobj);
    It worked fine, but as soon as I switch to CLSCTX_LOCAL_SERVER (and add the registry keys for DllSurrogate), it can't find the interfaces (error 0x80004002, E_NOINTERFACE). I checked the registry and found that when the .NET COM DLL is registered, it adds the CLSID registry keys but doesn't add Interface and TypeLib registry keys. The question is: how do I create these registry keys for the .NET COM objects? Regards, Victor
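    One thing worth checking (a suggestion based on general COM interop behaviour, not something confirmed for this specific project): plain regasm writes the CLSID keys, while the Interface and TypeLib keys are normally created when the assembly is registered with regasm /tlb, and out-of-process (DllSurrogate) activation needs them so the interface can be marshalled. A hypothetical shape for the exposed type, with an explicit dual interface, looks like this (names and GUIDs are placeholders):

    ```csharp
    using System;
    using System.Runtime.InteropServices;

    [ComVisible(true)]
    [Guid("AAAAAAAA-BBBB-CCCC-DDDD-EEEEEEEEEEEE")]
    [InterfaceType(ComInterfaceType.InterfaceIsDual)]   // dual => type-library marshalling can be used
    public interface ISomeInterface
    {
        int DoWork(string input);
    }

    [ComVisible(true)]
    [Guid("FFFFFFFF-1111-2222-3333-444444444444")]
    [ClassInterface(ClassInterfaceType.None)]
    public class SomeClass : ISomeInterface
    {
        public int DoWork(string input)
        {
            return input == null ? 0 : input.Length;
        }
    }
    ```

    Registering with something like regasm YourAssembly.dll /tlb:YourAssembly.tlb should then produce HKCR\Interface and HKCR\TypeLib entries alongside the CLSID ones; whether that is sufficient for your surrogate setup is still something to verify.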

    Read the article
