Search Results

Search found 17285 results on 692 pages for 'incremental build'.

Page 496/692

  • snprintf and Visual Studio 2010

    - by Andrew
    I'm unfortunate enough to be stuck using VS 2010 for a project, and noticed that the following code still doesn't build using the non-standards-compliant compiler:

        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            char buffer[512];
            snprintf(buffer, sizeof(buffer), "SomeString");
            return 0;
        }

    It fails compilation with the error: C3861: 'snprintf': identifier not found. I remember this being the case way back with VS 2005 and am shocked to see it still hasn't been fixed. Does anyone know if Microsoft has any plans to move their standard C libraries into the year 2010?
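
    A hedged workaround sketch, not Microsoft's fix: MSVC does ship _vsnprintf, so a small shim can stand in for snprintf, keeping in mind that _vsnprintf's truncation semantics differ from C99 (no guaranteed null termination, negative return on truncation):

        /* Minimal portable shim, assuming only MSVC's _vsnprintf is available. */
        #include <stdarg.h>
        #include <stdio.h>

        #ifdef _MSC_VER
        static int c99_snprintf(char *buf, size_t size, const char *fmt, ...)
        {
            va_list ap;
            int n;
            va_start(ap, fmt);
            n = _vsnprintf(buf, size, fmt, ap);
            va_end(ap);
            if (size > 0)
                buf[size - 1] = '\0';  /* force termination on truncation */
            return n;
        }
        #define snprintf c99_snprintf
        #endif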

  • Visual Studio 2008 Macro for Building does not block thread

    - by Climber104
    Hello, I am trying to write a macro for building my application, kicking off an external tool, and attaching the debugger to that external tool. Everything is working except for the build: it builds, but it does not block the thread, so the external tool gets kicked off before the build can finish. Is there a way I can run ExecuteCommand and wait for it to finish? Code is below:

        DTE.ExecuteCommand("ClassViewContextMenus.ClassViewProject.Build")
        DTE.ExecuteCommand("Tools.ExternalCommand11")
        Try
            Dim dbg2 As EnvDTE80.Debugger2 = DTE.Debugger
            Dim trans As EnvDTE80.Transport = dbg2.Transports.Item("Default")
            Dim dbgeng(1) As EnvDTE80.Engine
            dbgeng(0) = trans.Engines.Item("Managed")
            Dim proc2 As EnvDTE80.Process2 = dbg2.GetProcesses(trans, "MINIPC").Item("_nStep.exe")
            proc2.Attach2(dbgeng)
        Catch ex As System.Exception
            MsgBox(ex.Message)
        End Try
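
    A hedged workaround sketch: the EnvDTE automation model exposes a synchronous build through SolutionBuild, whose Build method takes a flag that makes it wait for the build to finish (the external-command name is carried over from the question):

        ' Build synchronously: Build(True) blocks until the build completes.
        DTE.Solution.SolutionBuild.Build(True)
        ' LastBuildInfo is the number of projects that failed to build.
        If DTE.Solution.SolutionBuild.LastBuildInfo = 0 Then
            DTE.ExecuteCommand("Tools.ExternalCommand11")
        End If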

  • Distribution profile and App Store Submission

    - by shreya
    Hi all, I developed an iPhone application for my client. I have my own developer account, so I created the Ad Hoc and App Store distribution profiles using my account. Now the thing is, my client wants to submit the app using his own account. I want to know: should I give him the distribution build made with my distribution profile, or do I need a distribution profile created from the client's account? Thanks in advance.

  • Robocopy for Windows 2003 doesn't support /DST option

    - by Jon
    Does anyone know if it is possible to download the latest robocopy for Windows 2003? The latest version provides the /DST option, which ignores timestamp differences caused by BST (British Summer Time). Every time we do a build and sync our servers after a +1/-1 hour change, it takes hours instead of minutes because robocopy sees everything as changed. I noticed the new version is included automatically with Vista/Win7, but the Resource Kit that I downloaded doesn't include a new version of robocopy for Windows Server 2003. Is there a place to download it from, and will it also work on Windows Server 2003? Thanks.

  • How do you create multiple versions of an ActiveX control?

    - by Peter Ruderman
    Hopefully this is a straightforward question, but googling has proved fruitless (and frustrating, to say the least). Links to good documentation would be greatly appreciated. Here's the problem. We have a web application with an associated ActiveX control. (The control wraps a crufty old MFC application if it matters.) Moving forward, we expect to release multiple versions of this application, and each will have a corresponding version of the control. If someone accesses two versions of the web application, then that user should end up with two different versions of the control on his system. (The controls should play nice and not clobber each other.) In addition, I want to automate this process. Our system has a global version number that applies to all components. If we change the version number, the next build should produce a new version of the control. What's the best way to do this?

  • java jdbc connection to mysql problem

    - by fatnjazzy
    Hi, I am trying to connect to MySQL from a Java web application in Eclipse:

        Connection con = null;
        try {
            // DriverManager.registerDriver(new com.mysql.jdbc.Driver());
            Class.forName("com.mysql.jdbc.Driver");
            con = DriverManager.getConnection("jdbc:mysql://localhost/db_name", "root", "");
            if (!con.isClosed())
                System.out.println("Successfully connected to MySQL server using TCP/IP...");
        } catch (Exception e) {
            System.err.println("Exception: " + e.getMessage());
        } finally {
            try {
                if (con != null)
                    con.close();
            } catch (SQLException e) {
                System.out.println(e.toString());
            }
        }

    I always get: Exception: com.mysql.jdbc.Driver. I downloaded this jar (http://forums.mysql.com/read.php?39,218287,220327) and imported it into the project's Java build path libraries. Environment: MySQL 5.1.3 (the database is up and running queries from PHP), Windows XP, Java EE. Thanks

  • Accessing an API

    - by Paritosh Praharaj
    I'm trying to build a Rails app where I have to access the Zomato API. Could anybody guide me on how to create the controller and view? Suppose I want to get the name of a restaurant from this URL: Restaurant. How do I write the controller and views for it? Currently I'm using the HTTParty gem. This is my controller code:

        def restaurants
          @response = HTTParty.get("https://api.zomato.com/v1/restaurant.json/773",
            :headers => { "X-Zomato-API-Key" => "MYAPIKEY" })
          puts @response.body, @response.code, @response.message, @response.headers.inspect
          @categories = Category.where(:category_name => 'Restaurants')
        end

    And I'm trying to iterate over the response object in the view:

        <% @response.each do |item| %>
          <td><%= item.id %></td>
          <td><%= item.name %></td>
          <td><%= item.url %></td>
        <% end %>

    This shows the following error: undefined method 'id' for ["id", 773]:Array
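
    The error itself points at the cause: HTTParty parses the JSON into a Hash, and iterating a Hash yields [key, value] pairs such as ["id", 773]. A hedged sketch, assuming the endpoint returns a single restaurant object (the actual Zomato v1 response shape may differ):

        # Controller: treat the parsed response as one record, not a collection.
        def restaurants
          response = HTTParty.get("https://api.zomato.com/v1/restaurant.json/773",
            :headers => { "X-Zomato-API-Key" => "MYAPIKEY" })
          @restaurant = response.parsed_response  # a Hash of attributes
        end

        # View: index into the Hash instead of calling methods on it.
        <td><%= @restaurant["id"] %></td>
        <td><%= @restaurant["name"] %></td>
        <td><%= @restaurant["url"] %></td>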

  • book with good examples for each implementation?

    - by ajsie
    I've read about design patterns, and it seems there are a lot of different design patterns to use. I wonder if there are books that act like a reference: "you want to build a framework, then consider this, this and this pattern", with some examples; then jump to another implementation, e.g. a search engine, and give some patterns and concrete examples to use. That way you learn about the weaknesses and strengths of each pattern and where it fits, instead of just reading about every design pattern decoupled from the others. Are there good "reference sheets" or other tutorials suitable for a beginner? Thanks

  • Database schema publishing with SQL Server 2005/2008

    - by Marconline
    Hi everybody, I have a question for you. We have built a piece of software that has a single database for each customer. These databases are managed by SQL Server 2008. The problem is that when we build our software we sometimes need to change the schema (adding tables, modifying existing ones, etc.) and migrate these updates to all the customers' databases. This task is currently done by hand: we generate update scripts and then, using T-SQL, we update each database. This is OK for a small set of customers, but we are growing bigger and bigger and we really don't know how to handle it. We found Wizardby, and it seems interesting, but quite difficult for us to learn at this exact moment. Do you have any other tricks? Thanks a lot, Marco
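
    One common trick is to make each hand-written update script idempotent, so the same script can be run safely against every customer database regardless of its current state. A minimal sketch with hypothetical object names:

        -- Add a column only if it is not already there.
        IF COL_LENGTH('dbo.Customer', 'LoyaltyPoints') IS NULL
            ALTER TABLE dbo.Customer ADD LoyaltyPoints int NULL;

        -- Create a table only if it is missing.
        IF OBJECT_ID('dbo.AuditLog', 'U') IS NULL
            CREATE TABLE dbo.AuditLog (
                Id int IDENTITY(1,1) PRIMARY KEY,
                Message nvarchar(400) NOT NULL
            );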

  • rgl.snapshot() No Longer Works

    - by bill_080
    I just upgraded R and rgl to the following versions, and now rgl.snapshot() no longer works. It worked in previous versions. Is there a way around this?

        R version 2.12.1 (2010-12-16)
        rgl version 0.92.798

        > library(rgl)
        > x <- rnorm(100)
        > y <- rnorm(100)
        > z <- rnorm(100)
        > r <- 0.2
        > p <- plot3d(x, y, z, axes=FALSE, box=FALSE, radius=r, type='s',
        +             xlab="", ylab="", zlab="", col=rainbow(100))
        > rgl.snapshot("C:\\Temp\\pic.png", fmt="png", top=TRUE)
        Error in rgl.snapshot("C:\\Temp\\pic.png") :
          pixmap save format not supported in this build
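
    If the installed rgl binary was built without pixmap (PNG) support, a hedged fallback is rgl.postscript(), which writes vector formats and does not depend on the pixmap save path:

        # Sketch: export the current rgl scene as a vector file instead.
        rgl.postscript("C:\\Temp\\pic.pdf", fmt = "pdf")   # also "ps", "eps", "svg"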

  • Publish failed in Web Application Project (MVC)

    - by Stuck
    I usually use the "Publish" feature in VS 2008 to produce the correct set of files to deploy to my website. But from time to time Publish fails without any useful error message at all. I know there can be a couple of different reasons, but I am starting to grow really tired of this. Can anyone teach me how to build the website from the command line? As I said, it is an MVC project. Can I use aspnet_compiler for this? What parameters should I use to simulate the same behavior as "Publish"?
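
    For reference, a hedged sketch of a precompilation command (paths are placeholders): -v sets the virtual path, -p points at the project source, and -f overwrites the target directory. Note that Publish also performs copying and packaging steps that aspnet_compiler alone may not reproduce exactly:

        aspnet_compiler -v / -p "C:\src\MyMvcApp" -f "C:\out\MyMvcApp"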

  • Connect 4 with neural network: evaluation of draft + further steps

    - by user89818
    I would like to build a Connect 4 engine which works using an artificial neural network, just because I'm fascinated by ANNs. I've created the following draft of the ANN structure. Would it work? And are these connections right (even the cross ones)? Could you help me draft up a UML class diagram for this ANN? I want to give the board representation to the ANN as its input, and the output should be the move to choose. The learning should later be done using backpropagation, and the sigmoid function should be applied. The engine will play against human players, and depending on the result of the game, the weights should be adjusted.
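
    For reference, the sigmoid activation mentioned above, and the derivative that the backpropagation weight updates rely on:

        sigma(x)  = 1 / (1 + e^(-x))
        sigma'(x) = sigma(x) * (1 - sigma(x))

    Because the derivative is expressible in terms of the neuron's own output, the backward pass can reuse values already computed in the forward pass.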

  • Stack recommendations for small/medium-sized web application in Python

    - by reto
    I'm looking for some recommendations for a Python web application. We have some memory restrictions and we try to keep it small and lean. We thought about using WSGI (and a Python web server) and building the rest ourselves. We already have a template engine we'd like to use, but we are open to suggestions regarding the whole request handling (the controller). The application has to run in a single process, and the requests have to be processed with multiple threads. We've looked at Django, but we are not sure if it fits into our memory budget. Your feedback is very welcome! Cheers, Reto
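
    A minimal sketch of the bare-WSGI route described above (note the stdlib reference server shown here is single-threaded; a threaded WSGI server would be needed to meet the multi-threading requirement):

        from wsgiref.simple_server import make_server

        def app(environ, start_response):
            # Dispatch on environ["PATH_INFO"] here and render with your
            # template engine; this stub just proves the plumbing works.
            start_response("200 OK", [("Content-Type", "text/plain")])
            return [b"Hello from a bare WSGI app"]

        if __name__ == "__main__":
            make_server("", 8000, app).serve_forever()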

  • Copyright question: developing a desktop application based on a web application

    - by Nano HE
    Hi, I googled and found some online calculators (such as BodyFatCalculator and CaloricCalculator). I plan to develop desktop calculators in C# (WPF, .NET 3.5). I would test the online functionality and build my own application modules (I think some body properties don't suit Asian people, so I may still need to do more research), which means I must study these web apps before designing my desktop app. Can I develop my app without the web application owners' permission? Thank you.

  • Rails: form input type and getting the filename

    - by Shyam
    Hi, as I am using Ruby on Rails to build an application which only runs locally, I am lost in the woods (a nuby without a compass). I have a simple MVC application, and my view is missing one thing I could really use: I want to select a local file just to retrieve its filename. I know it's relatively easy to use the form tag helpers for uploading:

        <%= file_field 'upload', 'datafile' %>

    I wonder how I could get the filename from the selected file without uploading the file.
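
    Since the file never needs to leave the machine, one hedged sketch is to read the field's value client-side when it changes; browsers commonly mask the real path (e.g. C:\fakepath\...), so strip everything up to the last separator. The element id below assumes Rails' default object_method naming for the helper above:

        <script>
          document.getElementById('upload_datafile').onchange = function () {
            var name = this.value.split(/[\\\/]/).pop();  // drop any path prefix
            alert(name);  // or copy it into a visible text field
          };
        </script>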

  • Are TCP and UDP using different OS buffers?

    - by Jack
    Hi all. Here is the scenario: I have port 8888 for my program to use, and I build a TCP and a UDP listener on that port. (This works; C# allows it because they are two different protocols.) My question is: if the network traffic is very busy, TCP sockets may refuse data or signal the other end to stop sending; that is called congestion control, right? So if TCP is applying congestion control and the other end may not send more data, then during this "TCP quiet period" the UDP channel should not have much traffic either, right? I want to figure out whether the TCP traffic will affect the UDP traffic or not.
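
    For reference, a minimal sketch of the described setup: one TCP and one UDP listener sharing port 8888 (legal because they are distinct protocols). The relevant point is that TCP's congestion and flow control throttle only the TCP stream; UDP has no such feedback and competes only for raw link bandwidth:

        using System.Net;
        using System.Net.Sockets;

        class DualListener
        {
            static void Main()
            {
                var tcp = new TcpListener(IPAddress.Any, 8888);
                tcp.Start();                       // TCP endpoint on 8888
                var udp = new UdpClient(8888);     // UDP endpoint, same port

                var remote = new IPEndPoint(IPAddress.Any, 0);
                byte[] datagram = udp.Receive(ref remote);    // blocking UDP read
                using (var client = tcp.AcceptTcpClient())    // blocking TCP accept
                {
                    // TCP flow/congestion control applies to this stream only;
                    // the UDP socket above keeps receiving independently.
                }
            }
        }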

  • Inline Conditional Statement in Stored Procedure

    - by Jason
    Here is the pseudo-code for my inline query in my code:

        select columnOne
        from myTable
        where columnOne = '#variableOne#'
            if len(variableTwo) gt 0
                and columnTwo = '#variableTwo#'
            end

    I would like to move this into a stored procedure but am having trouble building the query correctly. I assume it would be something like:

        SELECT columnOne
        FROM myTable
        WHERE columnOne = @variableOne
            CASE WHEN LEN(@variableTwo) <> 0
                THEN AND columnTwo = @variableTwo
            END

    This gives me a syntax error. Could someone tell me what I've got wrong? Also, I would like to keep it to only one query, not just an if statement, and I do not want to build the SQL in the stored procedure and run Exec() on it.
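
    T-SQL cannot splice an "and ..." fragment in with CASE, but the same logic collapses into a plain OR, which keeps it one static query with no dynamic SQL. A hedged sketch:

        SELECT columnOne
        FROM myTable
        WHERE columnOne = @variableOne
          AND (LEN(@variableTwo) = 0 OR columnTwo = @variableTwo);
        -- If @variableTwo can be NULL, guard it: LEN(ISNULL(@variableTwo, '')) = 0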

  • Urban Airship - not receiving push notifications on device?

    - by Silent
    Hey guys, I just installed everything in my app and tried to push, but no messages appear. The token for my device is saved, so that part is working; when I push, the website says it was sent, but no message arrives on the device. Right now I'm using the production certificate for push notifications with a development provisioning profile, not the distribution certificate. I tried cleaning the build and reinstalling, but no luck, since an extra app was installed from the same one. I used the code from the sample file tutorial. I think that's all the detail; I don't know what to do next.

  • Best practices for handling strings in VC++?

    - by Hiren Gujarati
    As I am new to Visual C++: there are so many types for handling strings. When I pick one type and start coding, the next step always seems to involve built-in functions that use other types, so I constantly have to convert one string type to another. I have found many blogs, but I get confused seeing so many answers; when I try them, some work and some don't. Please give answers or links that offer a definitive approach to handling the different string types in Visual C++.
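
    A hedged sketch of the conversions that come up most often in MFC/ATL code (the CT2A/CT2W macros live in atlconv.h; exact behavior depends on the project's UNICODE setting):

        #include <atlstr.h>    // CString
        #include <atlconv.h>   // CT2A / CT2W conversion macros
        #include <string>

        void Conversions()
        {
            CString cs = _T("hello");
            std::string narrow(CT2A(cs));   // CString -> std::string
            std::wstring wide(CT2W(cs));    // CString -> std::wstring
            CString back(narrow.c_str());   // std::string -> CString
        }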

  • Compass accuracy dilemma

    - by mob1lejunkie
    I need to build a compass for my application. From reading the documentation, it seems there are two reasonable ways of doing this:

    1. The Sensor.TYPE_ORIENTATION method. This is the easy way of doing it. The problem is that it is not accurate: when I compare my reading with the Snaptic Compass it is about 10-15 degrees off, which for my purposes is unacceptable.

    2. Sensor.TYPE_ACCELEROMETER and Sensor.TYPE_MAGNETIC_FIELD with getRotationMatrix(), in conjunction with remapCoordinateSystem() and getOrientation(). The documentation says this "is usually more accurate". The problem is that regardless of the delay I register with the listener, the compass goes crazy, even when the device is stationary on a flat surface.

    Any suggestions for solving this problem will be greatly appreciated.
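
    A hedged sketch of method 2: the raw sensor vectors jitter, so the usual fix for a "crazy" compass is a low-pass filter on both inputs before computing the rotation matrix (ALPHA is an assumed smoothing factor to tune):

        // Inside a SensorEventListener registered for both sensor types.
        private float[] gravity, geomagnetic;
        private static final float ALPHA = 0.1f;  // smoothing factor (tunable)

        public void onSensorChanged(SensorEvent event) {
            if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER)
                gravity = lowPass(event.values.clone(), gravity);
            else if (event.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD)
                geomagnetic = lowPass(event.values.clone(), geomagnetic);

            if (gravity != null && geomagnetic != null) {
                float[] r = new float[9];
                float[] orientation = new float[3];
                if (SensorManager.getRotationMatrix(r, null, gravity, geomagnetic)) {
                    SensorManager.getOrientation(r, orientation);
                    float azimuth = (float) Math.toDegrees(orientation[0]);  // heading
                }
            }
        }

        private float[] lowPass(float[] in, float[] out) {
            if (out == null) return in;
            for (int i = 0; i < in.length; i++)
                out[i] += ALPHA * (in[i] - out[i]);
            return out;
        }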

  • Offline mode app in a (HTML5) browser possible?

    - by Horace Ho
    Is it possible to build an application that lives entirely in the browser? By "an application" I mean:

    1. When there is a connection (online mode) between the browser and a remote application server:
       - the application runs in typical web-based mode
       - the application stores the necessary data in offline storage, to be used in offline mode
       - the application syncs/pushes data captured during offline mode back to the server when it resumes online mode

    2. When there is no connection (offline mode) between the browser and the remote application server:
       - the application still runs (JavaScript?)
       - the application presents data from offline storage to the user
       - the application accepts input from the user (and stores/appends it in offline storage)

    Is this possible? If the answer is yes, is there any (Ruby/Python/PHP) framework being built for this? Thanks
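
    Yes in principle: HTML5 provides a cache manifest to keep the app's assets loadable offline, and localStorage for the data side. A minimal sketch of the capture-and-sync half (the 'pending' key is hypothetical):

        // Queue writes while offline; flush them when the connection returns.
        function save(record) {
          if (navigator.onLine) {
            // online mode: POST to the application server as usual
          } else {
            var queue = JSON.parse(localStorage.getItem('pending') || '[]');
            queue.push(record);
            localStorage.setItem('pending', JSON.stringify(queue));
          }
        }

        window.addEventListener('online', function () {
          // back online: sync/push everything captured while offline,
          // then clear the queue.
          var queue = JSON.parse(localStorage.getItem('pending') || '[]');
          // ... POST queue to the server, and on success:
          localStorage.removeItem('pending');
        });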

  • Solr spellcheck configuration

    - by Bogdan Gusiev
    I am trying to build the spellcheck index with IndexBasedSpellChecker:

        <lst name="spellchecker">
          <str name="name">default</str>
          <str name="field">text</str>
          <str name="spellcheckIndexDir">./spellchecker</str>
        </lst>

    And I want to specify the dynamic field "*_text" as the field option:

        <dynamicField name="*_text" stored="false" type="text" multiValued="true" indexed="true"/>

    How can this be done?
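
    Wildcards generally aren't accepted in the spellchecker's field option, so the usual approach is to copy all matching dynamic fields into one concrete field and spellcheck against that. A hedged sketch (the "spell" field name is an assumption):

        <!-- schema.xml: funnel all *_text fields into one spellcheck source -->
        <field name="spell" type="text" indexed="true" stored="false" multiValued="true"/>
        <copyField source="*_text" dest="spell"/>

        <!-- solrconfig.xml: point the spellchecker at the aggregate field -->
        <lst name="spellchecker">
          <str name="name">default</str>
          <str name="field">spell</str>
          <str name="spellcheckIndexDir">./spellchecker</str>
        </lst>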

  • is PHP itself transforming into a framework?

    - by Elzo Valugi
    In the beginning PHP was a scripting language, but after the introduction and improvement of OOP I see more and more objects added to the core. It started with SPL, which grew a lot; now we have the DOMDocument family and the DateTime family, which arguably should be part of PECL, PEAR, or Zend Framework, or implemented by each of us. Shouldn't PHP itself provide only built-in functions, with all these objects handed off to something else? For example, the DateTime class is part of the core, and I find it very similar to Zend_Date.

  • Web Service appears as website instead of developer web server

    - by stocherilac
    I recently reimaged my PC and regrabbed one of our projects from SourceSafe. In our solution we have a web service that normally runs on a server; however, we can also build the web service on localhost for debugging. Now, whenever I grab the project from SourceSafe, it builds the web service as a website instead of using the developer web server. This is causing a variety of issues; specifically, I am no longer able to specify which port that web service should use, and as a result I cannot connect to our database through my local web service. How can I change the project in my solution that controls the web service from a website to a developer web server? MS Visual Studio 2005. MS Visual SourceSafe 2005. MS SQL Server 2000. VB.NET project.

  • How to Load Oracle Tables From Hadoop Tutorial (Part 5 - Leveraging Parallelism in OSCH)

    - by Bob Hanckel
    Using OSCH: Beyond Hello World

    In the previous post we discussed a "Hello World" example for OSCH, focusing on the mechanics of getting a toy end-to-end example working. In this post we are going to talk about how to make it work for big data loads. We will explain how to optimize an OSCH external table for load, paying particular attention to Oracle's DOP (degree of parallelism), the number of external table location files we use, and the number of HDFS files that make up the payload. We will provide some rules that serve as best practices when using OSCH. The assumption is that you have read the previous post, have some end-to-end OSCH external tables working, and now want to ramp up the size of the loads.

    Using OSCH External Tables for Access and Loading

    OSCH external tables are no different from any other Oracle external tables. They can be used to access HDFS content using Oracle SQL:

        SELECT * FROM my_hdfs_external_table;

    or to use the same SQL access to load a table in Oracle:

        INSERT INTO my_oracle_table SELECT * FROM my_hdfs_external_table;

    To speed up the load time, you will want to control the degree of parallelism (i.e. DOP) and add two SQL hints:

        ALTER SESSION FORCE PARALLEL DML PARALLEL 8;
        ALTER SESSION FORCE PARALLEL QUERY PARALLEL 8;

        INSERT /*+ append pq_distribute(my_oracle_table, none) */
          INTO my_oracle_table
          SELECT * FROM my_hdfs_external_table;

    There are various ways of hinting at what level of DOP you want to use. The ALTER SESSION statements above force the issue, assuming you (the user of the session) are allowed to assert the DOP (more on that in the next section). Alternatively, you could embed additional parallel hints directly into the INSERT and SELECT clauses respectively:

        /*+ parallel(my_oracle_table,8) */
        /*+ parallel(my_hdfs_external_table,8) */

    Note that the "append" hint lets you load a target table by reserving space above a given "high watermark" in storage and uses Direct Path load. In other words, it doesn't try to fill blocks that are already allocated and partially filled; it uses unallocated blocks. It is an optimized way of loading a table without incurring the typical resource overhead associated with run-of-the-mill inserts. The "pq_distribute" hint in this context unifies the INSERT and SELECT operators to make data flow during a load more efficient. Finally, your target Oracle table should be defined with the "NOLOGGING" and "PARALLEL" attributes. The combination of "NOLOGGING" and the "append" hint disables REDO logging and its overhead, and the "PARALLEL" clause tells Oracle to try to use parallel execution when operating on the target table.

    Determine Your DOP

    It might feel natural to build your datasets in Hadoop and then afterwards figure out how to tune the OSCH external table definition, but you should start backwards: focus on the Oracle database, specifically the DOP you want to use when loading (or accessing) HDFS content using external tables. The DOP in Oracle controls how many PQ slaves are launched in parallel when executing an external table. Typically the DOP is something you want Oracle to control transparently, but for loading content from Hadoop with OSCH, it's something you will want to control yourself. Oracle computes the maximum DOP that can be used by an Oracle user.
    The maximum value that can be assigned is an integer typically equal to the number of CPUs on your Oracle instances, times the number of cores per CPU, times the number of Oracle instances. For example, suppose you have a RAC environment with 2 Oracle instances, each with 2 CPUs of 32 cores: the maximum DOP would be 128 (i.e. 2*2*32). In point of fact, if you are running on a production system, the maximum DOP you are allowed to use will be restricted by the Oracle DBA. This is because using the system maximum DOP can subsume all system resources on Oracle and starve anything else that is executing. Obviously, on a production system where resources need to be shared 24x7, this can't be allowed to happen.

    The use cases for running OSCH with a maximum DOP are those where you have exclusive access to all the resources on an Oracle system. This can be when you are first seeding tables in a new Oracle database, or when normal activity in the production database can safely be taken off-line for a few hours to free up resources for a big incremental load. Using OSCH on high-end machines (specifically Oracle Exadata and Oracle BDA cabled with Infiniband), this mode of operation can load up to 15TB per hour.

    The bottom line is that you should first figure out what DOP you will be allowed to run with by talking to the DBAs who manage the production system. You then use that number to derive the number of location files and (optionally) the number of HDFS data files that you want to generate, assuming that is flexible.

    Rule 1: Find out the maximum DOP you will be allowed to use with OSCH on the target Oracle system.

    Determining the Number of Location Files

    Let's assume the DBA told you that your maximum DOP is 8. You want the number of location files in your external table to be big enough to utilize all 8 PQ slaves, and you want them to represent equally balanced workloads. Remember, location files in OSCH are metadata lists of HDFS files and are created using OSCH's External Table tool. They also represent the workload size given to an individual Oracle PQ slave (i.e. a PQ slave is given one location file to process at a time, and only it will process the contents of that location file).

    Rule 2: The size of the workload of a single location file (and the PQ slave that processes it) is the sum of the content size of the HDFS files it lists.

    For example, if a location file lists 5 HDFS files which are each 100GB in size, the workload for that location file is 500GB. The number of location files you generate is something you control by providing a number as input to OSCH's External Table tool.

    Rule 3: The number of location files chosen should be a small multiple of the DOP.

    Each location file represents one workload for one PQ slave, so the goal is to keep all slaves busy and give them equivalent workloads. Obviously, if you run with a DOP of 8 but have 5 location files, only five PQ slaves will have something to do; the other three will have nothing to do and will quietly exit. If you run with 9 location files, the PQ slaves will pick up the first 8 location files and, assuming they have equal workloads, finish at about the same time. But the first PQ slave to finish will then be rescheduled to process the ninth location file, potentially doubling the end-to-end processing time. So for this DOP, using 8, 16, or 32 location files would be a good idea.
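
    Putting the session settings and Rule 3 together for the running example (maximum DOP 8), a load session would look like this sketch (table names reuse the earlier examples; the target table is assumed to be defined with NOLOGGING and PARALLEL):

        ALTER SESSION FORCE PARALLEL DML PARALLEL 8;
        ALTER SESSION FORCE PARALLEL QUERY PARALLEL 8;

        INSERT /*+ append pq_distribute(my_oracle_table, none) */
          INTO my_oracle_table
          SELECT * FROM my_hdfs_external_table;

        COMMIT;
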
    Determining the Number of HDFS Files

    Let's start with the next rule and then explain it:

    Rule 4: The number of HDFS files should try to be a multiple of the number of location files, and the files should be of relatively the same size.

    In our running example the DOP is 8, which means the number of location files should be a small multiple of 8. Remember that each location file represents a list of unique HDFS files to load, and that the sum of the files listed in each location file is the workload for one Oracle PQ slave. The OSCH External Table tool will look in an HDFS directory for a set of HDFS files to load. It will generate N location files (where N is the value you gave to the tool), then try to divvy up the HDFS files and do its best to make the workload across location files as balanced as possible. (The tool uses a greedy algorithm that grabs the biggest HDFS file and delegates it to a particular location file, then looks for the next biggest file and puts it in some other location file, and so on.) The tool's ability to balance is reduced if the HDFS file sizes are grossly out of balance or the files are too few.

    For example, suppose my DOP is 8 and the number of location files is 8, and suppose I have only 8 HDFS files, where one file is 900GB and the others are 100GB each. When the tool tries to balance the load it will be forced to put the singleton 900GB file into one location file and each of the 100GB files into the 7 remaining location files. The load balance skew is 9 to 1: one PQ slave will be working overtime while the slacker PQ slaves are off enjoying happy hour. If, however, the total payload (1600GB) were broken up into smaller HDFS files, the OSCH External Table tool would have an easier time generating a list where the workload for each location file is relatively the same. Applying Rule 4 to our DOP of 8, we could divide the workload into 160 files of approximately 10GB each. For this scenario, the OSCH External Table tool would populate each location file with 20 HDFS file references, and all location files would have similar workloads (approximately 200GB per location file).

    As a rule, when the OSCH External Table tool has to deal with more and smaller files, it is able to create more balanced loads. How small should HDFS files get? Not so small that the HDFS file open and close overhead starts having a substantial impact. For our performance test system (Exadata/BDA with Infiniband), I compared three OSCH loads of 1 TiB. One load had 128 HDFS files living in 64 location files, where each HDFS file was about 8GB. I then did the same load with 12800 files, where each HDFS file was about 80MB. The end-to-end load time was virtually the same. However, when I got ridiculously small (i.e. 128000 files at about 8MB per file), it started to make an impact and slow down the load time.

    What happens if you break Rules 3 or 4 above? Nothing draconian; everything will still function. You just won't be taking full advantage of the generous DOP that was allocated to you by your friendly DBA. The key point of the rules articulated above is this: if you know that HDFS content is ultimately going to be loaded into Oracle using OSCH, it makes sense to chop it up into the right number of files, roughly the same size, derived from the DOP that you expect to use for loading.

    Next Steps

    So far we have talked about OLH and OSCH as alternative models for loading. That's not quite the whole story.
    They can be used together in a way that provides more efficient OSCH loads and more flexibility in scheduling load operations across a Hadoop cluster and an Oracle database. The next lesson will talk about Oracle Data Pump files generated by OLH and loaded using OSCH, and will also outline the pros and cons of the various load methods. This will be followed by a final tutorial lesson focusing on how to optimize OLH and OSCH for use on Oracle's engineered systems: specifically Exadata and the BDA.
