Search Results

Search found 5765 results on 231 pages for 'pre compilation'.

Page 10 of 231

  • Status of VB6 / Best Desktop Application Language with Native Compilation

    - by Sandeep Jindal
    I was looking for a desktop application programming language with one major constraint: it must produce a native executable. I explored several options. Java is not a great fit for desktop programming, though you can still use it; the real problem is compiling Java to an EXE, which only GCJ and Excelsior JET provide. The .NET platform does not support native compilation, and only a few expensive tools can do the job. Python is not an option for native compilation, right? VB6 is the option I am left with. From the above list, if I am correct, VB6 is the only and probably the best option I have. But VB6 has issues of its own: it is no longer under development, and there are questions about support for the VB6 IDE on Vista. So my questions are: would you add any other languages to the list above? And if VB6 is the best option, given its development status, would you still suggest using it in this era?

    Read the article

  • Pixel shader weird compilation error

    - by ytrewq
    Hi, I'm experimenting with shaders a bit and I keep getting a weird compilation error that's driving me crazy! The following pixel shader snippet:

        DirectionVector = normalize(f3LightPosition[i] - PixelPos);
        LightVec = PixelNormal - DirectionVector;
        // Get the light strength factor
        LightStrFactor = float(abs((LightVec.x + LightVec.y + LightVec.z) / 3.0f));
        // TEST!!!
        LightStrFactor = 1.0f;
        // Add this light to the total light on this pixel
        LightVal += f4Light[i] * LightStrFactor;

    works perfectly, but as soon as I remove the "LightStrFactor = 1.0f;" line, i.e. let LightStrFactor keep the result of the calculation above it, the shader fails to compile. LightStrFactor is a float, LightVal and f4Light[i] are float4, and all the rest are float3. My question, besides why it doesn't compile, is: why does the DX compiler care about the value of a float? Even if my values are incorrect, shouldn't that be a run-time matter? The shader compilation code is this:

        /* Compile the shader */
        if (FAILED(D3DXCompileShaderFromFile(fileName, NULL, NULL, "PS_MAIN", "ps_2_0", 0,
                                             &this->m_pCode, NULL, &this->m_constantTable)))
            GraphicException("Failed to compile pixel shader!"); // <-- gets here :(

        if (FAILED(g_D3dDevice->CreatePixelShader((DWORD*)this->m_pCode->GetBufferPointer(),
                                                  &this->m_hPixelShader)))
            GraphicException("Failed to create pixel shader!");

        this->m_fLoaded = true;

    Any help is appreciated. Thanks!!! :]

    Read the article

  • Project management: Implementing custom errors in VS compilation process

    - by David Lively
    Like many architects, I've developed coding standards through years of experience, and I expect my developers to adhere to them. This is especially a problem with the crowd that believes three or four years of experience makes you a senior-level developer. Approaching this as a training and code review issue has had limited success. So I was thinking it would be great to add custom compile-time errors to the build process to enforce these and other guidelines more strictly. For instance, we use stored procedures for ALL database access, which provides procedure-level security, database encapsulation (the table structure is hidden from the app), and other benefits. (Note: I am not interested in starting a debate about this.) Some developers prefer inline SQL or parameterized queries, and that's fine - on their own time and their own projects. I'd like a way to add a compilation check that finds, say, anything that looks like

        string sql = "insert into some_table (col1,col2) values (@col1, @col2);";

    and generates an error or, in certain circumstances, a warning, with a message like "Inline SQL and parameterized queries are not permitted." Or, if they use the var keyword,

        var x = new MyClass();

    "Variable definitions must be explicitly typed." Do Visual Studio and MSBuild provide a way to add this functionality? I'm thinking I could use a regular expression to find unacceptable code and generate the appropriate error, but I'm not sure what the best way is, from a performance standpoint, to integrate this into the build process. We could add a pre- or post-build step that runs a custom EXE, but how can I return line- and file-specific errors? Also, I'd like this to run after compilation of each file, rather than post-link. Is a regex the best way to do this kind of pattern matching, or should I go all the way and run the code through a C# parser, which would allow node-level validation via the parse tree? I'd appreciate suggestions and tales of prior experience. (A sketch of one possible regex-based check follows this entry.)

    Read the article
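    One way to implement the check described above is to have a pre-build step print diagnostics in MSBuild's canonical "file(line): error CODE: message" format, which Visual Studio and MSBuild pick up as real, file- and line-specific errors in the build output. Below is a minimal sketch in Python; the rule regexes, the ARCH error codes, and the assumption that the sources are C# files under the project directory are all illustrative, not a fixed standard.

        # check_style.py - scan C# sources for banned patterns and emit
        # MSBuild-style "file(line): error CODE: message" diagnostics.
        import re
        import sys
        from pathlib import Path

        # Illustrative rules: (pattern, arbitrary error code, message)
        RULES = [
            (re.compile(r'"\s*(insert|update|delete|select)\s+', re.IGNORECASE),
             "ARCH001", "Inline SQL and parameterized queries are not permitted."),
            (re.compile(r'\bvar\s+\w+\s*='),
             "ARCH002", "Variable definitions must be explicitly typed."),
        ]

        def main(root):
            failed = False
            for path in Path(root).rglob("*.cs"):
                for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
                    for pattern, code, message in RULES:
                        if pattern.search(line):
                            # This exact output format is what the IDE recognizes as an error.
                            print("%s(%d): error %s: %s" % (path, lineno, code, message))
                            failed = True
            return 1 if failed else 0

        if __name__ == "__main__":
            sys.exit(main(sys.argv[1] if len(sys.argv) > 1 else "."))

    Wired in as a BeforeBuild target or pre-build event, for example <Exec Command="python check_style.py $(ProjectDir)" />, a non-zero exit code fails the build and the printed lines show up in the Error List. A regex pass like this is cheap; moving to a real C# parser only becomes worth it once the rules need to understand context rather than text.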

  • Customizing the TFS 2008 build sequence to avoid compilation and deploy SSRS

    - by Andrew
    I'm trying to create a CI process for SQL Server Reporting Services. I am fairly new to TFS but quite experienced with MSBuild. In the past I've used a combination of MSBuild and TeamCity, so the whole build process was more or less custom. Here lies the start of my problems: the solution I am deploying contains only Report Server projects (rds), so no compilation is required. I thought I would override the first default task that TFS runs (EndToEndIteration) to replace the default TFS build sequence and inject my own. The first snag I have come across is that the build always fails; how can I set the status of the build to success? Currently the EndToEndIteration task is very light and only prints a message. Is this the best way to create a custom build process in TFS when compilation is not required? Or should I use the default sequence and override one of the hook tasks mentioned in http://msdn.microsoft.com/en-us/library/aa337604%28VS.80%29.aspx (e.g. AfterCompile)? The core steps I'd like to achieve are: bundle the RDL and data source files; connect to the host server to register/deploy the reports; re-apply any subscriptions that previously existed; and run tests to verify that the deployment succeeded and is returning the expected results. I have found another article on Reporting Services deployment, http://stackoverflow.com/questions/88710/reporting-services-deployment, but it doesn't mention the best practice for customizing the standard build process. Any help would be appreciated.

    Read the article

  • Template compilation error in Sun Studio 12

    - by Jagannath
    We are migrating to Sun Studio 12.1, whose compiler identifies itself as CC: Sun C++ 5.10 SunOS_sparc 2009/06/03. I am getting a compilation error on code that compiled fine with the earlier compiler (CC: Sun WorkShop 6 update 2 C++ 5.3 2001/05/15). This is the error:

        "Sample.cc": Error: Could not find a match for LoopThrough(int[2]) needed in main().
        1 Error(s) detected.
        *** Error code 1.

    The code:

        #include <iostream>

        #define PRINT_TRACE(STR) \
            std::cout << __FILE__ << ":" << __LINE__ << ":" << STR << "\n";

        template<size_t SZ>
        void LoopThrough(const int (&Item)[SZ])
        {
            PRINT_TRACE("Specialized version");
            for (size_t index = 0; index < SZ; ++index)
            {
                std::cout << Item[index] << "\n";
            }
        }

        /*
        template<typename Type, size_t SZ>
        void LoopThrough(const Type (&Item)[SZ])
        {
            PRINT_TRACE("Generic version");
        }
        */

        int main()
        {
            {
                int arr[] = { 1, 2 };
                LoopThrough(arr);
            }
        }

    If I uncomment the generic version, the code compiles fine and the generic version is called. I don't see this problem with MSVC 2010 (with extensions disabled), and the same case on ideone calls the specialized version of the function. So the question is: is this a bug in the Sun compiler? If yes, how can we file a bug report?

    Read the article

  • When is a webapp called beta, alpha, pre-alpha, or none of these?

    - by dmontain
    I've come across many apps on the web that call themselves beta, and others that carry an alpha designation. I've even come across some that call themselves pre-alpha, whatever that means (if you know, please clarify). Then there are some really bad webapps that should never have left the developer's computer, yet carry no designation at all. I've also seen some well-built apps that call themselves beta, including Stack Exchange (the mother site of SO), which I think is far too full-featured to be called a beta. I'm a little confused; it seems people are doing this on a whim. Is there an established rule or checklist that helps decide what stage an app is in (beta, alpha, pre-alpha, or none)? P.S. Please feel free to retag as appropriate.

    Read the article

  • Rails, REST Architecture and HTML 5: Cross domain requests with pre-flight requests

    - by Orion
    While working on a project to make our site HTML5-friendly, we were eager to embrace the new method for cross-domain requests (no more posting through hidden iframes!). Using the Access Control specification, we began setting up tests to verify the behaviour of various browsers. The current Rails RESTful architecture relies on the four HTTP verbs GET, POST, PUT and DELETE. However, the Access Control spec dictates that non-simple methods (PUT, DELETE) require a pre-flight request using the HTTP verb OPTIONS. In addition, during testing we discovered that Firefox 3.5.8 pre-flights POST requests as well. My question is this: is anyone aware of any project for the Rails framework working to address this issue? If not, what would be the best strategy for supporting the OPTIONS method, given that it has to cover the routes for all the POST, PUT and DELETE methods? (The shape of the pre-flight exchange is sketched after this entry.)

    Read the article
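    For reference, the pre-flight exchange itself is framework-agnostic: the browser sends an OPTIONS request carrying Origin and Access-Control-Request-Method headers, and only sends the real PUT/DELETE once the server has answered with the Access-Control-Allow-* headers. The sketch below shows that handshake in Python/Flask purely to illustrate the headers involved, not as a Rails solution; the route, allowed origin, and allowed headers are placeholder assumptions.

        from flask import Flask, request, jsonify

        app = Flask(__name__)

        # Placeholder policy values - adjust to your own origins and headers.
        ALLOWED_ORIGIN = "https://app.example.com"
        ALLOWED_METHODS = "GET, POST, PUT, DELETE, OPTIONS"
        ALLOWED_HEADERS = "Content-Type, X-Requested-With"

        @app.route("/widgets/<int:widget_id>", methods=["PUT", "DELETE", "OPTIONS"])
        def widget(widget_id):
            if request.method == "OPTIONS":
                # Pre-flight: return the CORS policy, no body needed.
                resp = app.make_response(("", 204))
                resp.headers["Access-Control-Allow-Origin"] = ALLOWED_ORIGIN
                resp.headers["Access-Control-Allow-Methods"] = ALLOWED_METHODS
                resp.headers["Access-Control-Allow-Headers"] = ALLOWED_HEADERS
                resp.headers["Access-Control-Max-Age"] = "600"  # let the browser cache the pre-flight
                return resp
            # The actual PUT/DELETE response also needs the Allow-Origin header.
            resp = jsonify(status="ok", id=widget_id, method=request.method)
            resp.headers["Access-Control-Allow-Origin"] = ALLOWED_ORIGIN
            return resp

        if __name__ == "__main__":
            app.run()

    In Rails the equivalent is a route constrained to the OPTIONS verb pointing at a controller action that sets the same headers; the headers, not the framework, are what the browser cares about.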

  • Pre-generating GUIDs for use in Python?

    - by rjuiaa1
    I have a Python program that needs to generate several GUIDs and hand them back, along with some other data, to a client over the network. It may be hit with a lot of requests in a short period, and I would like the latency to be as low as reasonably possible. Ideally, rather than generating new GUIDs on the fly while the client waits for a response, I would rather bulk-generate a list of GUIDs in the background that is continually replenished, so that I always have pre-generated ones ready to hand out. I am using the uuid module in Python on Linux, and I understand that this can use the uuidd daemon to get UUIDs. Does uuidd already take care of pre-generating UUIDs so that it always has some ready? From the documentation it appears that it does not. Is there some setting in Python or uuidd to make it do this automatically? Or is there a more elegant approach than manually creating a background thread in my program that maintains a list of UUIDs? (A minimal background-thread sketch follows this entry.)

    Read the article
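    On the last point: the standard library is enough to keep a continually replenished pool of pre-generated UUIDs without touching uuidd directly. A minimal sketch, assuming random (uuid4) IDs are acceptable and with an arbitrarily chosen pool size:

        import queue
        import threading
        import uuid

        class UuidPool:
            """Background-replenished pool of pre-generated UUIDs."""

            def __init__(self, size=1000):  # pool size is an arbitrary choice
                self._pool = queue.Queue(maxsize=size)
                threading.Thread(target=self._fill, daemon=True).start()

            def _fill(self):
                while True:
                    # put() blocks while the queue is full and wakes up as soon
                    # as a consumer takes an ID, keeping the pool topped up.
                    self._pool.put(uuid.uuid4())

            def get(self):
                try:
                    return self._pool.get_nowait()   # fast path: pre-generated ID
                except queue.Empty:
                    return uuid.uuid4()              # pool drained: generate inline

        pool = UuidPool()
        print(pool.get())

    Whether this actually buys latency is worth measuring first: uuid.uuid4() is little more than a read from os.urandom, so a pool mainly helps if profiling shows UUID generation (for example a uuid1 path that talks to uuidd) on the hot path.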

  • How best to pre-install OR pre-load OR cache JavaScript library to optimize performance?

    - by Kabeer
    Hello. I am working on an intranet application, so I have some control over the client machines. The JavaScript library I am using is fairly large, and I would like to pre-install, pre-load or cache it on each machine (and in each browser) so that it does not travel over the wire for each request. I know that browsers cache a JavaScript library for subsequent requests, but I would like the library to be cached once for all subsequent requests, sessions and users. What is the best mechanism to achieve this? (One cache-header approach is sketched after this entry.)

    Read the article
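    Browsers will keep the library off the wire on their own if the server sends long-lived cache headers and the file's URL only changes when its contents change. The sketch below illustrates the idea in Python/Flask simply because a small example needs some server; the directory, max-age, and version query string are assumptions, and the same headers can be set in IIS, Apache or whatever actually serves the file.

        from flask import Flask, send_from_directory

        app = Flask(__name__)

        LIB_DIR = "static/js"        # assumed location of the library file
        ONE_YEAR = 365 * 24 * 3600   # far-future lifetime, in seconds

        @app.route("/js/<path:filename>")
        def serve_library(filename):
            resp = send_from_directory(LIB_DIR, filename)
            # Ask every browser (and any intranet proxy) to keep this copy
            # and reuse it without asking again until it expires.
            resp.headers["Cache-Control"] = "public, max-age=%d" % ONE_YEAR
            return resp

        # Pages then reference the file with a version token, so a new release
        # gets a new URL and is fetched exactly once more:
        #   <script src="/js/biglib.js?v=3.2.1"></script>

        if __name__ == "__main__":
            app.run()

    Beyond that, true pre-loading on an intranet usually means warming the cache from a common page (or pushing the file to machines by policy); the headers above are what stop it travelling again afterwards.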

  • Core Data Relationships in pre-populated SQLite database

    - by Cardinal
    Hi all, I'm new to Core Data. Currently I have the following tables on hand:

        tbl_teacher    tbl_student    tbl_course    tbl_student_course_map
        -----------    -----------    ----------    ----------------------
        teacher_id     student_id     course_id     student_id
        name           name           name          course_id
                                      teacher_id

    I'm going to build the xcdatamodel along these lines:

        Course:  name;  relationships: teacher (to Teacher), students (to-many Student)
        Teacher: name;  relationship:  courses (to-many Course)
        Student: name;  relationship:  courses (to-many Course)

    My questions are as follows: As I'd like to create a table view for the Course entity, is it a must to create the inverse relationships from Teacher to Course and from Student to Course? What is the benefit of having the inverse relationships? Also, I have some pre-defined data on hand, and I'd like to create a SQLite store pre-populated with it. How can I set up the relationships (in both directions) in SQLite? Thank you for your help! Regards, Cardinal

    Read the article

  • Subversion pre-commit hook to clean XML from WebDAV autocommit client

    - by rjmunro
    I know that it isn't normally safe to modify a commit from a pre-commit hook in Subversion, because SVN clients will not see the version that was actually committed and will cache the wrong thing, but I'd like to clean up the content coming from a versioning-naïve WebDAV client that doesn't keep a local cached copy. The idea is that when I look at the repository with an SVN client, the diffs are clean. The client, by the way, is MS Word using 2003 XML format files. We're already using this format in a WebDAV system, but we'd like to add a versioning capability for expert users. Everywhere I look for documentation on how to modify the content in a pre-commit hook, I get the answer "Don't do this" rather than "Here's how to do this, but it's recommended you don't", so I can't even easily try it to see whether it's going to cause me problems. (A validate-instead-of-rewrite sketch follows this entry.)

    Read the article
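    For what it's worth, the commonly recommended alternative to rewriting the transaction is to validate it instead: reject (or merely flag) commits whose XML is not already in the canonical form you want to diff, and do any rewriting in a separate follow-up commit, for example from a post-commit job. Below is a minimal pre-commit sketch in Python using svnlook; the .xml filter and the is_canonical check are placeholders for whatever normalization rule applies to the Word 2003 XML.

        #!/usr/bin/env python3
        # pre-commit REPOS TXN - reject .xml files that are not in canonical form.
        import subprocess
        import sys

        def svnlook(subcmd, repos, txn, *args):
            result = subprocess.run(["svnlook", subcmd, "-t", txn, repos, *args],
                                    capture_output=True, check=True)
            return result.stdout

        def is_canonical(xml_bytes):
            # Placeholder rule: substitute the real normalization check here
            # (encoding, pretty-printing, stripped volatile metadata, ...).
            return not xml_bytes.startswith(b"\xef\xbb\xbf")

        def main(repos, txn):
            for line in svnlook("changed", repos, txn).decode().splitlines():
                action, path = line[0], line[4:]
                if action in ("A", "U") and path.endswith(".xml"):
                    if not is_canonical(svnlook("cat", repos, txn, path)):
                        sys.stderr.write("%s is not in canonical form; run the "
                                         "cleanup tool before committing.\n" % path)
                        return 1
            return 0

        if __name__ == "__main__":
            sys.exit(main(sys.argv[1], sys.argv[2]))

    Note that rejecting an autocommit means Word's save fails, which may be worse than a noisy diff, so for the WebDAV users the post-commit cleanup route is usually the gentler option.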

  • Calculating optimal site-to-site routing using pre-computed times between sites

    - by Idistic
    Assume that I have a number of sites (locations) and that the time it takes to travel from each site to each other site is pre-computed. Example data - pre-calculated site-to-site times in minutes:

        From the start site:  to A = 20, to B = 15, to C = 15
        From site A:          to B = 10, to C = 15
        From site B:          to A = 10, to C = 20
        From site C:          to A = 15, to B = 20

    Four sites is fairly simple, but what if the site set were, say, 1000 sites? Given a large site set, what would be the best approach to quickly find the optimal route from the start site that visits every other site exactly once? For the example above, the possible routes from the start site are:

        1.  A(20)  B(10)  C(20)  =  50
        2.  A(20)  C(15)  B(20)  =  55
        3.  B(15)  A(10)  C(15)  =  40
        4.  B(15)  C(20)  A(15)  =  50
        5.  C(15)  A(15)  B(10)  =  40
        6.  C(15)  B(20)  A(10)  =  45

    (An exhaustive-search and nearest-neighbour sketch follows this entry.)

    Read the article
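    Visiting every site exactly once while minimizing total travel time is the path form of the travelling salesman problem, so for a thousand sites an exact answer is out of reach and heuristics are the usual approach. Below is a minimal sketch of both ends of that trade-off, using a dictionary-of-dictionaries time matrix that mirrors the example above (improvement passes such as 2-opt are deliberately left out):

        from itertools import permutations

        # Pre-computed travel times from the example, in minutes.
        TIMES = {
            "Start": {"A": 20, "B": 15, "C": 15},
            "A": {"B": 10, "C": 15},
            "B": {"A": 10, "C": 20},
            "C": {"A": 15, "B": 20},
        }

        def route_time(start, route):
            legs = zip([start] + list(route), route)
            return sum(TIMES[a][b] for a, b in legs)

        def exhaustive(start, sites):
            """Exact answer - only feasible for a handful of sites."""
            return min(permutations(sites), key=lambda r: route_time(start, r))

        def nearest_neighbour(start, sites):
            """Greedy heuristic - scales to large site sets, not guaranteed optimal."""
            remaining, route, current = set(sites), [], start
            while remaining:
                nxt = min(remaining, key=lambda s: TIMES[current][s])
                route.append(nxt)
                remaining.remove(nxt)
                current = nxt
            return route

        best = exhaustive("Start", ["A", "B", "C"])
        print(best, route_time("Start", best))        # ('B', 'A', 'C') 40, matching route 3 above
        greedy = nearest_neighbour("Start", ["A", "B", "C"])
        print(greedy, route_time("Start", greedy))    # also 40 on this small example

    Exhaustive search is factorial in the number of sites, so beyond roughly a dozen sites the greedy route, optionally refined with local improvements, is the practical starting point.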

  • Ant task to pre-compile JSPs on weblogic server

    - by user24560
    I am trying to create an Ant task to compile JSPs. Here are the relevant excerpts from the build.xml:

        ....
        <fileset dir="${java.home}/lib">
            <include name="tools.jar"/>
        </fileset>

        <java classname="weblogic.jspc" fork="yes">
            <classpath refid="weblogic.jsp.classpath" />
            <sysproperty key="weblogic.jsp.windows.caseSensitive" value="false"/>
            <arg line="-forceGeneration -keepgenerated -compileAll -webapp ${jsp.src.dir} -d ${jsp.generated.src.dir}"/>
        </java>

    When I run the wl.jsp.generate task, I get:

        wl.jsp.generate:
            [java] [jspc] warning: expected file /WEB-INF/web.xml not found, tag libraries cannot be resolved.
            [java] [jspc] Overriding default descriptor option 'keepgenerated' with value specified on command-line 'true'
            [java] Exception encountered while compiling C:\workspace\smcmw\smcmw_browser\jsp\smcesearchprogress.jsp
            [java] java.lang.NoSuchMethodError: javax.servlet.jsp.tagext.TagAttributeInfo.<init>(Ljava/lang/String;ZLjava/lang/String;ZZLjava/lang/String;ZZLjava/lang/String;Ljava/lang/String;)V
            [java]     at weblogic.jsp.internal.jsp.tag.TagAttrInfoEx.<init>(TagAttrInfoEx.java:64)
            [java]     at weblogic.jsp.internal.jsp.tag.TagAttrInfoEx.<init>(TagAttrInfoEx.java:57)
            [java]     at weblogic.jsp.internal.jsp.tag.TagAttrInfoEx.<init>(TagAttrInfoEx.java:41)
            [java]     at weblogic.jsp.internal.jsp.tag.TagAttrInfoEx.read(TagAttrInfoEx.java:86)

    It looks like it fails because it can't find the WEB-INF/web.xml file and the tag libraries. How can I fix this?

    Read the article

  • SASL (Postfix) authentication with MySQL and Blowfish pre-encrypted passwords

    - by webo
    I have a Rails app that uses the Devise authentication gem for user registration and login. I want the table that Devise populates when a user registers to double as the table that Postfix uses to authenticate users. The table has all the fields Postfix might want for SASL authentication, except that Devise encrypts the password using Blowfish before placing it in the database. How could I go about getting Postfix/SASL to decrypt those passwords so that users can be authenticated properly? Devise also salts the password, so I'm not sure if that helps. Any suggestions? I'd likely want to do something similar with Dovecot or Courier; I'm not attached to either quite yet. (A password-verification sketch follows this entry.)

    Read the article
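    One thing worth noting before wiring this up: Devise's "Blowfish" passwords are bcrypt hashes, and bcrypt cannot be decrypted at all; the mail side has to hash the candidate password the same way and compare. Below is a minimal sketch of that check in Python, of the kind an external auth/checkpassword hook could perform; it assumes a plain bcrypt value in the encrypted_password column and an empty Devise pepper, both of which need confirming against the actual Devise configuration.

        import bcrypt  # pip install bcrypt

        def password_matches(candidate, stored_hash, pepper=""):
            """Check a login attempt against a Devise-style bcrypt hash.

            The salt lives inside the bcrypt string itself; if Devise is
            configured with a pepper it is appended to the password before
            hashing, so the same value must be passed here.
            """
            secret = (candidate + pepper).encode("utf-8")
            return bcrypt.checkpw(secret, stored_hash.encode("utf-8"))

        # Demo with a hash generated on the spot; a real check would read the
        # encrypted_password column from the Rails users table instead.
        stored = bcrypt.hashpw(b"s3cret", bcrypt.gensalt()).decode("utf-8")
        print(password_matches("s3cret", stored))   # True
        print(password_matches("wrong", stored))    # False

    If no pepper is involved, Dovecot can usually verify these hashes directly against an SQL passdb using its BLF-CRYPT password scheme, which avoids custom code entirely; Postfix can then hand SASL off to Dovecot.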

  • Pre-startup segmentation fault with ptrace

    - by sfink
    I have somehow managed to mangle my computer so that any time I attempt to use something that relies on ptrace to trace another process (e.g. strace, gdb), I get an immediate segmentation fault. For example:

        # strace /bin/true
        execve("/bin/true", ["/bin/true"], [/* 27 vars */]) = 0
        --- SIGSEGV (Segmentation fault) @ 0 (0) ---
        +++ killed by SIGSEGV +++

    or with gdb:

        # gdb /bin/true
        GNU gdb Fedora (6.8-27.el5)
        Copyright (C) 2008 Free Software Foundation, Inc.
        License GPLv3+: GNU GPL version 3 or later
        This is free software: you are free to change and redistribute it.
        There is NO WARRANTY, to the extent permitted by law.  Type "show copying" and "show warranty" for details.
        This GDB was configured as "x86_64-redhat-linux-gnu"...
        (no debugging symbols found)
        (gdb) run
        Starting program: /bin/true
        Program terminated with signal SIGSEGV, Segmentation fault.
        The program no longer exists.
        You can't do that without a process to debug.

    rpm -V comes up clean on strace, gdb, and glibc. I do not have any LD_* variables set, and PATH has nothing special in it.

    Read the article

  • Troubleshooting a high SQL Server Compilation/Batch-Ratio

    - by Sleepless
    I have a SQL Server (quad-core x86, 4 GB RAM) that constantly shows almost the same values for "SQLServer:SQL Statistics: SQL compilations/sec" and "SQLServer:SQL Statistics: SQL batches/sec". This could be interpreted as a server running 100% ad hoc queries, each of which has to be recompiled, but that is not the case here: the sys.dm_exec_query_stats DMV lists hundreds of query plans with an execution_count much larger than 1. Does anybody have any idea how to interpret or troubleshoot this phenomenon? BTW, the server's general performance counters (CPU, I/O, RAM) all show very modest utilization. (A query_hash check is sketched after this entry.)

    Read the article
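    One way to dig further is to group the cached plans by query_hash (available from SQL Server 2008 onwards): many single-use plans sharing the same hash usually points at unparameterized statements that only look distinct to the plan cache. Below is a sketch of that check from Python via pyodbc; the connection string is a placeholder, and the SELECT can just as well be run directly in Management Studio.

        import pyodbc

        # Placeholder connection string - point it at the server in question.
        CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
                    "SERVER=myserver;DATABASE=master;Trusted_Connection=yes;")

        QUERY = """
        SELECT TOP 20
               query_hash,
               COUNT(*)             AS cached_plans,
               SUM(execution_count) AS total_execs
        FROM sys.dm_exec_query_stats
        GROUP BY query_hash
        ORDER BY cached_plans DESC;
        """

        with pyodbc.connect(CONN_STR) as conn:
            for query_hash, cached_plans, total_execs in conn.execute(QUERY):
                # Many plans per hash with few executions each suggests ad hoc
                # or unparameterized SQL driving the compilation counter.
                print(query_hash, cached_plans, total_execs)

    If the plans do turn out to be mostly single-use, the usual remedies are parameterizing on the application side, enabling forced parameterization on the database, or turning on the 'optimize for ad hoc workloads' server option.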

  • Seeing a pre-logon app's GUI after logon (or ever)

    - by JimB
    I'm looking for either a method to achieve this or a clear reason why it's not possible. I use Scheduled Tasks to start an app with a GUI at system startup, and I want to see that GUI's screen after logon without restarting the app. I'm willing to type a password, re-logon, or use whatever app or tool would help, including changing the way I run the GUI app; it just can't wait for a user logon to start. How do I do it? Or, if it's absolutely impossible, why? I've read about "shatter attacks", but that doesn't seem to cover this. I'm most interested in XP and Windows 7. If multiple solutions exist, I'd of course prefer the most convenient, flexible and/or open source one.

    Read the article

  • Squeeze/Lenny compilation: library link error

    - by PEZ
    I've got a problem here. I have a C++ library ("DataTsBroad") and a C++ test app ("DataTsBroadTest") to exercise it. Currently both the library and the test app are compiled on Debian Lenny. Now I want to keep compiling the test app on Lenny (a customer constraint) but build the library on Squeeze or Wheezy so I can work on the more recent Debian releases. I compiled the library successfully on Squeeze, but when I then try to compile the test app against it on Lenny, it fails with a link error:

        Linking CXX executable DataTsBroadTest
        /home/nis/pezierg/test/ProductMak/Export/DataTsBroad/L64/Release/libDataTsBroad64.so: undefined reference to `std::ctype<char>::_M_widen_init() const@GLIBCXX_3.4.11'
        collect2: ld returned 1 exit status
        make[2]: *** [DataTsBroadTest] Error 1
        make[1]: *** [CMakeFiles/DataTsBroadTest.dir/all] Error 2
        make: *** [all] Error 2

    The problem is certainly related to the C++ ostream library: if I comment out all its uses in my library, the link succeeds. But how can I really fix the problem?

    Read the article

  • Using a pre-existing function for a new row

    - by Jonathan Kushner
    I have an Excel document that contains X columns and N rows. The very last column of each row performs a SUM of the first X-1 columns. The problem is that the user of this document progressively adds rows, and the formula does not yet exist in the last column of those new rows. I need a way to have the formula appear in new rows automatically (the user is not Excel-savvy and can't be expected to drag the formula down a row).

    Read the article

  • How can I pre-sign puppet certificates?

    - by Ranguard
    Puppet requires certificates between the client being managed (puppet) and the server (puppetmaster). You can run the agent manually on the client and then sign the certificate on the server, but how do you automate this process for clusters / cloud machines?

    Read the article

  • Time Machine (archive) of pre-Leopard systems?

    - by benc
    I want to move off an older Mac OS X system permanently. It is an iBook G3, which has two important characteristics: it is PowerPC, not Intel, based, and it runs only Tiger, not Leopard. As far as I can tell, that means it cannot run Time Machine directly. Here's the approach I have been contemplating: mount the drive in FireWire target disk mode, back it up as an external drive to the Time Machine volume, then disconnect the drive (permanently). However, I'm concerned that this backup will eventually age out when the Time Machine volume fills up, and by then the old-system-as-external-drive will be gone. Would it be better to make a single backup with another utility to a shared disk?

    Read the article

  • How to set the IP address in a customized OpenWRT compilation

    - by Berdus
    I have been struggling today with customizing OpenWRT. I check out the stable branch using SVN, run "make menuconfig" to customize the image, "make" it, and run it on a router. Almost all my modifications work, except for the seemingly trivial task of changing the default 192.168.1.1 address. I have tried numerous files (scripts as well as config files) but I can't seem to change it; I can change it for a brief moment after boot using the "preinit" file, but after a few seconds it reverts to the default. I suspect I should be setting it in the /etc/network file, but modifications there seem to be overwritten during boot. Maybe it has something to do with the br-lan interface? Does anybody have some thoughts on the subject? Thanks!

    Read the article

  • Building Debian package - pre-installation check

    - by debianuser
    I am building a custom Debian package, and during installation I would like the package to check the available disk space and only proceed if there is enough. I would like this to happen on re-installs as well. I tried adding bash code to do the check in the preinst and prerm files (the latter for re-installations), exiting 1 if the check fails, but the installer went ahead and copied the files anyway and I ended up with a broken installation. What am I doing wrong? (A preinst sketch follows this entry.)

    Read the article
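    For reference, dpkg does abort the unpack when preinst exits non-zero, so if the install still proceeded it is worth double-checking that the script really is installed as the package's preinst, is executable, and actually returns a non-zero status (a trailing command after the check can mask the exit code). The disk-space check itself is shown below as a Python sketch purely for illustration; a shell preinst is more conventional, a Python one would need a Pre-Depends on the interpreter, and the path and 500 MB threshold are placeholders.

        #!/usr/bin/env python3
        # preinst sketch: abort installation if the target filesystem is short on space.
        import os
        import sys

        REQUIRED_MB = 500    # arbitrary example threshold
        TARGET = "/opt"      # where this package will unpack its payload

        def free_mb(path):
            st = os.statvfs(path)
            return st.f_bavail * st.f_frsize // (1024 * 1024)

        def main():
            # dpkg invokes preinst with "install" on a fresh install and
            # "upgrade" on upgrades and re-installs, so check both.
            action = sys.argv[1] if len(sys.argv) > 1 else ""
            if action in ("install", "upgrade"):
                available = free_mb(TARGET)
                if available < REQUIRED_MB:
                    sys.stderr.write("Not enough space on %s: %d MB free, %d MB required.\n"
                                     % (TARGET, available, REQUIRED_MB))
                    return 1   # non-zero exit makes dpkg abort before copying files
            return 0

        if __name__ == "__main__":
            sys.exit(main())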
