Search Results

Search found 2826 results on 114 pages for 'dirty flow'.

Page 53/114

  • How do you use scripting languages (PHP, Python, etc.) to improve your productivity?

    - by Edwin
    Hi, I'm a Delphi developer on the Windows platform. I recently read the PHP tutorial at W3Schools, and it looks interesting. We all know scripting languages are very good for web site development, but I also want to use them to improve my productivity or get tedious tasks done quickly, maybe some quick-and-dirty string/file processing. What do you usually do with scripting languages apart from software development? And do we need a responsive, decent IDE/editor to be productive when writing scripts for this purpose? Thanks in advance!

    Read the article

  • int vs size_t on 64bit

    - by MK
    Porting code from 32-bit to 64-bit. Lots of places with int len = strlen(pstr); These all generate warnings now because strlen() returns size_t, which is 64-bit, while int is still 32-bit. So I've been replacing them with size_t len = strlen(pstr); But I just realized that this is not safe, as size_t is unsigned and it can be treated as signed by the code (I actually ran into one case where it caused a problem, thank you, unit tests!). Blindly casting the strlen return value to (int) feels dirty. Or maybe it shouldn't? So the question is: is there an elegant solution for this? I probably have a thousand lines of code like that in the codebase; I can't manually check each one of them, and the test coverage is currently somewhere between 0.01 and 0.001%.

    Read the article

  • C++ stack memory still valid?

    - by jbu
    Hi all, If I create an object on the stack and push it into a list, and the object then goes out of scope (outside the for loop in the example below), will the object still exist in the list? If the list still holds the object, is that data now invalid/possibly corrupt? Please let me know, and please explain the reasoning. Thanks, jbu

        class SomeObject {
        public:
            AnotherObject x;
        };

        // And then...
        void someMethod() {
            std::list<SomeObject> my_list;
            for (int i = 0; i < SOME_NUMBER; i++) {
                SomeObject tmp;
                my_list.push_back(tmp); // after this loop iteration, tmp goes out of scope
            }
            my_list.front(); // at this point, will my_list be full of valid SomeObjects,
                             // or will they no longer be valid, even if they still point to dirty data?
        }

    Read the article

  • C# / Winforms - Visually remove button click event

    - by Wayne Koorts
    .NET newbie alert: I'm using Visual C# 2008 Express Edition. I accidentally created a click event for a button, then deleted the automatically-created method code, which resulted in an error saying that the function, now referenced in the form loading code, could no longer be found. Deleting the following line from the InitializeComponent() method in Form1.Designer.cs... this.btnCopy.Click += new System.EventHandler(this.btnCopy_Click); ...seems to do the trick; however, it makes me feel very dirty because of the warning at the beginning of the #region: /// Required method for Designer support - do not modify /// the contents of this method with the code editor. I haven't been able to find a way to do this using the form designer, which I assume is the means implied by that warning. What is the correct way to do this?
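    For what it's worth, the usual designer route is to select the button, open the Events view of the Properties window, and clear (or reset) the Click entry; the designer then removes the wiring from InitializeComponent() itself. If you would rather keep event wiring out of the designer file entirely, here is a minimal sketch of doing it purely in code; the control and handler names are illustrative, not taken from the question:

        using System;
        using System.Windows.Forms;

        public class MainForm : Form
        {
            private readonly Button btnCopy = new Button { Text = "Copy" };

            public MainForm()
            {
                Controls.Add(btnCopy);

                // Wired in user code rather than in *.Designer.cs, so the
                // designer-generated region never needs hand-editing.
                btnCopy.Click += BtnCopyClick;
            }

            private void BtnCopyClick(object sender, EventArgs e)
            {
                // To stop handling the event later, unsubscribe with the same method:
                // btnCopy.Click -= BtnCopyClick;
            }
        }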

    Read the article

  • Java: Parse Australian Street Addresses

    - by bguiz
    Looking for a quick and dirty way to parse Australian street addresses into their parts: 3A/45 Jindabyne Rd, Oakleigh, VIC 3166 should split into: "3A", 45, "Jindabyne Rd", "Oakleigh", "VIC", 3166. Suburb names can have multiple words, as can street names. See: http://stackoverflow.com/questions/1739746/parse-a-steet-address-into-components Has to be in Java, and cannot make HTTP requests (e.g. to web APIs). EDIT: Assume that the specified format is always followed. I have no issue with spitting incorrectly formatted strings back at the user with a message telling them to follow the format described above.

    Read the article

  • Phantom updates due to decimal precision on calculated properties

    - by Jamie Ide
    This article describes my problem. I have several properties that are calculated. These are typed as decimal(9,2) in SQL Server and decimal in my C# classes. An example of the problem:

        1. An object is loaded with a property value of 14.9.
        2. A calculation is performed and the property value changes to 14.90393.
        3. When the session is flushed, NHibernate issues an update because the property is dirty.
        4. Since the database field is decimal(9,2), the stored value doesn't change.

    Basically, a phantom update is issued every time this object is loaded. I don't want to truncate the calculations in my business objects because that tightly couples them to the database, and I don't want to lose the precision in other calculations. I tried setting scale and precision or CustomType("Decimal(9,2)") in the mapping file, but this appears to only affect schema generation. My only reasonable option appears to be creating an IUserType implementation to handle this. Is there a better solution?
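    For reference, a rough sketch of the IUserType route mentioned above: a custom type whose Equals compares values rounded to two decimal places, so NHibernate's dirty check ignores differences the decimal(9,2) column cannot store anyway. It is written against the classic IUserType contract (NHibernate 2.x/3.x era) and has not been tested; treat the member signatures and the mapping name as assumptions to verify against your NHibernate version:

        using System;
        using System.Data;
        using NHibernate.SqlTypes;
        using NHibernate.UserTypes;

        // Compares decimals at two-decimal precision so that a change from
        // 14.90 to 14.90393 does not mark the property as dirty.
        public class TwoPlaceDecimalType : IUserType
        {
            public SqlType[] SqlTypes
            {
                get { return new[] { new SqlType(DbType.Decimal) }; }
            }

            public Type ReturnedType
            {
                get { return typeof(decimal); }
            }

            public new bool Equals(object x, object y)
            {
                if (ReferenceEquals(x, y)) return true;
                if (x == null || y == null) return false;
                // Only differences the column can actually store count as changes.
                return Math.Round((decimal)x, 2) == Math.Round((decimal)y, 2);
            }

            public int GetHashCode(object x)
            {
                return Math.Round((decimal)x, 2).GetHashCode();
            }

            public object NullSafeGet(IDataReader rs, string[] names, object owner)
            {
                object value = rs[names[0]];
                return value == DBNull.Value ? (object)null : Convert.ToDecimal(value);
            }

            public void NullSafeSet(IDbCommand cmd, object value, int index)
            {
                var parameter = (IDataParameter)cmd.Parameters[index];
                parameter.Value = value == null
                    ? (object)DBNull.Value
                    : Math.Round((decimal)value, 2); // store what the column will keep
            }

            // Decimal is an immutable value type, so copying and caching are trivial.
            public object DeepCopy(object value) { return value; }
            public bool IsMutable { get { return false; } }
            public object Replace(object original, object target, object owner) { return original; }
            public object Assemble(object cached, object owner) { return cached; }
            public object Disassemble(object value) { return value; }
        }

    The calculated properties would then be mapped with this type (for example type="YourNamespace.TwoPlaceDecimalType, YourAssembly" in the hbm file); the namespace and assembly names are placeholders.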

    Read the article

  • TooManyRowsAffectedException with encrypted triggers

    - by Jon Masters
    I'm using NHibernate to update 2 columns in a table that has 3 encrypted triggers on it. The triggers are not owned by me and I cannot make changes to them, so unfortunately I can't SET NOCOUNT ON inside of them. Is there another way to get around the TooManyRowsAffectedException that is thrown on commit? Update 1: So far the only way I've gotten around the issue is to step around the .Save routine with:

        var query = session.CreateSQLQuery("update Orders set Notes = :Notes, Status = :Status where OrderId = :Order");
        query.SetString("Notes", orderHeader.Notes);
        query.SetString("Status", orderHeader.OrderStatus);
        query.SetInt32("Order", orderHeader.OrderHeaderId);
        query.ExecuteUpdate();

    It feels dirty and is not easy to extend, but it doesn't crater.

    Read the article

  • Developing Schema Compare for Oracle (Part 1)

    - by Simon Cooper
    SQL Compare is one of Red Gate's most successful SQL Server tools; it allows developers and DBAs to compare and synchronize the contents of their databases. Although similar tools exist for Oracle, they are quite noticeably lacking in the usability and stability that SQL Compare is known for in the SQL Server world. We could see a real need for a usable schema comparison tool for Oracle, and so the Schema Compare for Oracle project was born. Over the next few weeks, as we come up to the release of v1, I'll be doing a series of posts on the development of Schema Compare for Oracle. For the first post, I thought I would start with the main pitfalls that we stumbled across when developing the product, especially coming from a SQL Server background.

    1. Schemas and Databases

    The most obvious difference is that the concept of a 'database' is quite different between Oracle and SQL Server. On SQL Server, one server instance has multiple databases, each with separate schemas. There is typically little communication between separate databases, and most databases are no more than about 1000-2000 objects. This means SQL Compare can register an entire database in a reasonable amount of time, and cross-database dependencies probably won't be an issue.

    It is a quite different scene under Oracle, however. The terms 'database' and 'instance' are used interchangeably (although technically 'database' refers to the datafiles on disk, and 'instance' to the running Oracle process that reads & writes to the database), and a database is a single conceptual entity. This immediately presents problems, as it is infeasible to register an entire database as we do in SQL Compare; in my Oracle install, using the standard recommended options, there are 63975 system objects. If we tried to register all those, not only would it take hours, but the client would probably run out of memory before we finished. As a result, we had to allow people to specify which schemas they wanted to register. This decision had quite a few knock-on effects for the design, which I will cover in a future post.

    2. Connecting to Oracle

    The next obvious difference is in actually connecting to Oracle – in SQL Server, you can specify a server and database, and off you go. On Oracle, things are slightly more complicated.

    SIDs, Service Names, and TNS

    A database (the files on disk) must have a unique identifier among the databases on the system, called the SID. It also has a global database name, which consists of a name (which doesn't have to match the SID) and a domain. Alternatively, you can identify a database using a service name, which normally has a 1-to-1 relationship with instances, but may not if, for example, you are using RAC (Real Application Clusters) for redundancy and failover. You specify the computer and instance you want to connect to using TNS (Transparent Network Substrate). The user-visible part is a config file (tnsnames.ora) on the client machine that specifies how to connect to an instance. For example, the entry for one of my test instances is:

        SC_11GDB1 =
          (DESCRIPTION =
            (ADDRESS_LIST =
              (ADDRESS = (PROTOCOL = TCP)(HOST = simonctest)(PORT = 1521))
            )
            (CONNECT_DATA =
              (SID = 11gR1db1)
            )
          )

    This gives the hostname, port, and SID of the instance I want to connect to, and associates it with a name (SC_11GDB1). The tnsnames syntax also allows you to specify failover, multiple descriptions and address lists, and client load balancing. You can then specify this TNS identifier as the data source in a connection string.
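    As a quick illustration of that last point (not from the original post): once the alias exists in tnsnames.ora, an ADO.NET connection string can reference it directly. The user id, password and provider here are placeholders, and the exact OracleConnection class depends on which client library you use:

        using Oracle.DataAccess.Client; // ODP.NET; other providers expose a similar OracleConnection

        class TnsConnectExample
        {
            static void Main()
            {
                // "SC_11GDB1" is the TNS alias defined in tnsnames.ora above.
                const string connectionString =
                    "Data Source=SC_11GDB1;User Id=scott;Password=tiger;";

                using (var connection = new OracleConnection(connectionString))
                {
                    connection.Open();
                    System.Console.WriteLine("Connected: " + connection.ServerVersion);
                }
            }
        }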
    Although using ODP.NET (the .NET dlls provided by Oracle) was fine for internal prototype builds, once we released the EAP we discovered that this simply wasn't an acceptable solution for installs on other people's machines. Due to .NET assembly strong naming, users had to have installed on their machines the exact same version of the ODP.NET dlls as we had on our build server. We couldn't ship the ODP.NET dlls with our installer, as the Oracle license agreement prohibited this, and we didn't want to force users to install another Oracle client just so they could run our program. To be able to list the TNS entries in the connection dialog, we also had to locate and parse the tnsnames.ora file, which was complicated by users with several Oracle client installs and intricate TNS entries. After much swearing at our computers, we eventually decided to use a third-party Oracle connection library from Devart that we could ship with our program; this could use whatever client version was installed, parse the TNS entries for us, and also had the nice feature of being able to connect to an Oracle server without having any client installed at all. Unfortunately, their current license agreement prevents us from shipping an Oracle SDK, but that's a bridge we'll cross when we get to it.

    3. Running synchronization scripts

    The most important difference is that in Oracle, DDL is non-transactional; you cannot roll back DDL statements like you can on SQL Server. Although we considered various solutions to this, including using the flashback archive or recycle bin, or generating an undo script, no reliable method of completely undoing a half-executed sync script has yet been found; so in this case we simply have to trust that the DBA or developer will check and verify the script before running it. However, before we got to that stage, we had to get the scripts to run in the first place...

    To run a synchronization script from SQL Compare, we essentially pass the script over to the SqlCommand.ExecuteNonQuery method. However, when we tried to do the same for an OracleConnection we got a very strange error – 'ORA-00911: invalid character', even when running the most basic CREATE TABLE command. After much hair-pulling and Googling, we discovered that Oracle has some very strange behaviour with semicolons at the end of statements. To understand what's going on, we need to take a quick foray into SQL and PL/SQL.

    PL/SQL is not T-SQL

    In SQL Server, T-SQL is the language used to interface with the database. It has DDL, DML, control flow, and many other nice features (like Turing-completeness) that you can mix and match in the same script. In Oracle, DDL SQL and PL/SQL are two completely separate languages, with different syntax, different datatypes and different execution engines within the instance. Oracle SQL is much more like 'pure' ANSI SQL, with no state, no control flow, and only the basic DML commands. PL/SQL is the Turing-complete language, but can only do DML and DCL (i.e. BEGIN TRANSACTION commands). Any DDL or SQL commands that aren't recognised by the PL/SQL engine have to be passed back to the SQL engine via an EXECUTE IMMEDIATE command. In PL/SQL, a semicolon is a valid token used to delimit the end of a statement. In SQL, a semicolon is not a valid token (even though the Oracle documentation gives them at the end of the syntax diagrams).
    When you execute the command CREATE TABLE table1 (COL1 NUMBER); in SQL*Plus, the semicolon on the end is a command to SQL*Plus to execute the preceding statement on the server; it strips off the semicolon before passing it on. SQL Developer does a similar thing. When executing a PL/SQL block, however, the syntax is like so:

        BEGIN
          INSERT INTO table1 VALUES (1);
          INSERT INTO table1 VALUES (2);
        END;
        /

    In this case, the semicolon is accepted by the PL/SQL engine as a statement delimiter, and instead the / is the command to SQL*Plus to execute the current block. This explains the ORA-00911 error we got when trying to run the CREATE TABLE command – the server is complaining about the semicolon on the end. This also means that there is no SQL syntax to execute more than one DDL command in the same OracleCommand. Therefore, we would have to do a round trip to the server for every command we want to execute. Obviously, this would cause lots of network traffic and be very slow on slow or congested networks. Our first attempt at a solution was to wrap every SQL statement (without semicolon) inside an EXECUTE IMMEDIATE command in a PL/SQL block and pass that to the server to execute. One downside of this solution is that we get no feedback as to how the script execution is going; we're currently evaluating better solutions to this thorny issue.

    Next up: Dependencies; how we solved the problem of being unable to register the entire database, and the knock-on effects to the whole product.
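    To make the EXECUTE IMMEDIATE wrapping described above concrete, here is a minimal sketch (not Red Gate's actual implementation): each statement loses its trailing semicolon, has embedded quotes doubled, and is wrapped in an anonymous PL/SQL block, so a whole batch reaches the server in a single round trip. Real sync scripts need more care, for example CREATE PROCEDURE bodies contain their own semicolons:

        using System.Collections.Generic;
        using System.Data;
        using System.Text;

        static class OracleBatchHelper
        {
            // Builds an anonymous PL/SQL block that runs each DDL statement
            // via EXECUTE IMMEDIATE.
            public static string WrapInPlSqlBlock(IEnumerable<string> statements)
            {
                var block = new StringBuilder();
                block.AppendLine("BEGIN");
                foreach (string statement in statements)
                {
                    // No trailing semicolon inside the quoted statement, and single
                    // quotes must be doubled to survive embedding in a string literal.
                    string trimmed = statement.TrimEnd().TrimEnd(';');
                    string escaped = trimmed.Replace("'", "''");
                    block.AppendLine("  EXECUTE IMMEDIATE '" + escaped + "';");
                }
                block.AppendLine("END;");
                return block.ToString();
            }

            // Sends the whole block in one round trip with any ADO.NET Oracle provider.
            // The '/' terminator is a SQL*Plus convention and is not needed here.
            public static void Execute(IDbConnection connection, IEnumerable<string> statements)
            {
                using (IDbCommand command = connection.CreateCommand())
                {
                    command.CommandText = WrapInPlSqlBlock(statements);
                    command.ExecuteNonQuery();
                }
            }
        }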

    Read the article

  • Using ret with FASM on Win32

    - by Jon Purdy
    I'm using SDL with FASM, and have code that's minimally like the following:

        format ELF
        extrn _SDL_Init
        extrn _SDL_SetVideoMode
        extrn _SDL_Quit
        extrn _exit
        SDL_INIT_VIDEO equ 0x00000020
        section '.text'
        public _SDL_main
        _SDL_main:
            ccall _SDL_Init, SDL_INIT_VIDEO
            ccall _SDL_SetVideoMode, 640, 480, 32, 0
            ccall _SDL_Quit
            ccall _exit, 0  ; Success, or
            ret             ; failure.

    With the following quick-and-dirty makefile:

        SOURCES = main.asm
        OBJECTS = main.o
        TARGET = SDLASM.exe
        FASM = C:\fasm\fasm.exe

        release : $(OBJECTS)
            ld $(OBJECTS) -LC:/SDL/lib/ -lSDLmain -lSDL -LC:/MinGW/lib/ -lmingw32 -lcrtdll -o $(TARGET) --subsystem windows

        cleanrelease :
            del $(OBJECTS)

        %.o : %.asm
            $(FASM) $< $@

    Using exit() (or Windows' ExitProcess()) seems to be the only way to get this program to exit cleanly, even though I feel like I should be able to use retn/retf. When I just ret without calling exit(), the application does not terminate and needs to be killed. Could anyone shed some light on this? It only happens when I make the call to SDL_SetVideoMode().

    Read the article

  • Call function under object from string

    - by sam
    I have a script which creates a drag-and-drop uploader on the page from a div. My DIV will look something like:

        <div class="well uploader" data-type="image" data-callback="product.addimage" data-multi="1"></div>

    Then I'll have a function something like:

        var product = new function(){
            /* Some random stuff */
            this.addimage = function(image){
                alert('W00T! I HAZ AN IMAGE!');
            }
            /* More random stuff */
        }

    When the upload is complete, I need to call the function in data-callback (in this example, product.addimage). I know that with global functions you can just do window[callback](), but I'm not sure of the best way to do this with functions under objects. My first thought was to do something like:

        var obj = window;
        var parts = callback.split('.');
        for(part in parts){
            obj = obj[parts[part]];
        }
        obj();

    but that seems a bit dirty. Is there a better way without using eval, because eval is evil? I haven't tested this, so I have no idea if it will work.

    Read the article

  • Orchestrating the Virtual Enterprise

    - by John Murphy
    During the American Industrial Revolution, the Ford Motor Company did it all. It turned raw materials into a showroom full of Model Ts. It owned a steel mill, a glass factory, and an automobile assembly line. The company was both self-sufficient and innovative and went on to become one of the largest and most profitable companies in the world.

    Nowadays, it's unusual for any business to follow this vertical integration model because it's much harder to be best in class across such a wide range of capabilities and services. Instead, businesses focus on their core competencies and outsource other business functions to specialized suppliers. They exchange vertical integration for collaboration. When done well, all parties benefit from this arrangement, and the collaboration leads to the creation of an agile, lean and successful "virtual enterprise."

    Case in point: for Sun hardware, Oracle outsources most of its manufacturing and all of its logistics to third parties. These are vital activities, but ones where Oracle doesn't have a core competency, so we shift them to business partners who do. Within our enterprise, we always retain the core functions of product development, support, and most of the sales function, because that's what constitutes our core value to our customers. This is a perfect example of a virtual enterprise.

    What are the implications of this? It means that we must exchange direct internal control for indirect external collaboration. This fundamentally changes the relative importance of different business processes, the boundaries of security and information sharing, and the relationship of the supply chain systems to the ERP. The challenge is that the systems required to support this virtual paradigm are still mired in "island enterprise" thinking. But help is at hand. Developments such as the Web, social networks, collaboration, and rules-based orchestration offer great potential to fundamentally re-architect supply chain systems to better support the virtual enterprise.

    Supply Chain Management Systems in a Virtual Enterprise

    Historically, enterprise software was constructed to automate the ERP, and then the supply chain systems extended the ERP. They were joined at the hip. In virtual enterprises, the supply chain system needs to be ERP agnostic, sitting above each of the ERPs that are distributed across the virtual enterprise, most of which are operating in other businesses. This is vital so that the supply chain system can manage the flow of material and the related information through the multiple enterprises. It has to have strong collaboration tools. It needs to be highly flexible. Users need to be able to see information that's coming from multiple sources and be able to react and respond to events across those sources.

    Oracle Fusion Distributed Order Orchestration (DOO) is a perfect example of a supply chain system designed to operate in this virtual way. DOO embraces the idea that a company's fulfillment challenge is a distributed, multi-enterprise problem. It enables users to manage the process and the trading partners in a uniform way and deliver a consistent user experience while operating over a heterogeneous, virtual enterprise. This is a fundamental shift at the core of managing supply chains. It forces virtual enterprises to think architecturally about how best to construct their supply chain systems.

    Case in point: almost everyone has ordered from Amazon.com at one time or another.
    Our orders are as likely to be fulfilled by third parties as they are by Amazon itself. To deliver the order promptly and efficiently, Amazon has to send it to the right fulfillment location and know the availability in that location. It needs to be able to track the status of the fulfillment and deal with exceptions. As a virtual enterprise, Amazon's operation, using thousands of trading partners, requires a very different approach to fulfillment than the traditional 'take an order and ship it from your own warehouse' model. Amazon had no choice but to develop a complex, expensive and custom solution to tackle this problem, as there used to be no product solution available. Now, other companies who want to follow similar models have a better off-the-shelf choice -- Oracle Distributed Order Orchestration (DOO).

    Consider how another of our customers is using our distributed orchestration solution. This major airplane manufacturer has a highly complex business and interacts regularly with the U.S. Government and major airlines. It sits in the middle of an intricate supply chain and needed to improve visibility across its many different entities. Oracle Fusion DOO gives the company an orchestration mechanism so it could improve quality, speed, flexibility, and consistency without requiring an organ transplant of these highly complex legacy systems.

    Many retailers face the challenge of dealing with brick and mortar, Web, and reseller channels. They all need to be knitted together into a virtual enterprise experience that is consistent for their customers. When a large U.K. grocer with a strong brick and mortar retail operation added an online business, they turned to Oracle Fusion DOO to bring these entities together.

    Disturbing the Peace with Acquisitions

    Quite often a company's ERP system is disrupted when it acquires a new company. An acquisition can inject a new set of processes and systems -- or even introduce an entirely new business, like Sun's hardware did at Oracle. This challenge has been a driver for some of our DOO customers. A large power management company is using Oracle Fusion DOO to provide the flexibility to rapidly integrate additional products and services into its central fulfillment operation.

    The Flip Side of Fulfillment

    Meanwhile, we haven't ignored similar challenges on the supply side of the equation. Specifically, how to manage complex supply in a flexible way when there are multiple trading parties involved? How to manage the supply to suppliers? How to manage critical components that need to merge in a tier two or tier three supply chain? By investing in supply orchestration solutions for the virtual enterprise, we plan to give users better visibility into their network of suppliers to help them drive down costs. We also think this technology and full orchestration process can be applied to the financial side of organizations. An example is transactions that flow through complex internal structures to minimize tax exposure. We can help companies manage those transactions effectively by thinking about the internal organization as a virtual enterprise and bringing the same solution set to this internal challenge.

    The Clear Front Runner

    No other company is investing in solving the virtual enterprise supply chain issues like Oracle is. Oracle is in a unique position to become the gold standard in this market space. We have the infrastructure of Oracle technology. We already have an Oracle Fusion DOO application which embraces the best of what's required in this area.
And we're absolutely committed to extending our Fusion solution to other use cases and delivering even more business value.

    Read the article

  • Trying to execute netdom.exe from a ruby script or IRB does nothing

    - by Joraff
    I'm trying to write a script that will rename a computer and join it to a domain, and was planning to call on netdom.exe to do the dirty work. However, running this utility from the script (same results in irb) does absolutely nothing: no output, no execution. I tried backticks and the system() method. system() returns false for everything but system("netdom") (which returns true); backticks never return anything but an empty string. I have verified that netdom runs and works in the environment the script will be running in, and other command-line utilities called earlier in the script work fine (w32tm, getmac, ping). Here's the exact line that gets executed: `netdom renamecomputer %COMPUTERNAME% /NewName:#{newname} /force` FYI, this is Windows 7 x64.

    Read the article

  • We've completed the first iteration

    - by CliveT
    There are a lot of features in C# that are implemented by the compiler and not by the underlying platform. One such feature is a lambda expression. Since local variables cannot be accessed once the current method activation finishes, the compiler has to go out of its way to generate a new class which acts as a home for any variable whose lifetime needs to be extended past the activation of the procedure. Take the following example:

        Random generator = new Random();
        Func<int> func = () => generator.Next(10);

    In this case, the compiler generates a new class called c__DisplayClass1 which is marked with the CompilerGenerated attribute.

        [CompilerGenerated]
        private sealed class c__DisplayClass1
        {
            // Fields
            public Random generator;

            // Methods
            public int b__0()
            {
                return this.generator.Next(10);
            }
        }

    Two quick comments on this:

    (i) A display was the means by which compilers for languages like Algol recorded the various lexical contours of the nested procedure activations on the stack. I imagine that this is what has led to the name.

    (ii) It is a shame that the same attribute is used to mark all compiler-generated classes, as it makes it hard to figure out what they are being used for. Indeed, you could imagine optimisations that the runtime could perform if it knew that classes corresponded to certain high-level concepts.

    We can see that the local variable generator has been turned into a field in the class, and the body of the lambda expression has been turned into a method of the new class. The code that builds the Func object simply constructs an instance of this class and initialises the fields to their initial values.

        c__DisplayClass1 class2 = new c__DisplayClass1();
        class2.generator = new Random();
        Func<int> func = new Func<int>(class2.b__0);

    Reflector already contains code to spot this pattern of code and reproduce the form containing the lambda expression, so this example is correctly decompiled. The use of compiler-generated code is even more spectacular in the case of iterators. C# introduced the idea of a method that could automatically store its state between calls, so that it can pick up where it left off. The code can express the logical flow with yield return and yield break denoting places where the method should return a particular value and be prepared to resume.

        {
            yield return 1;
            yield return 2;
            yield return 3;
        }

    Of course, there was already a .NET pattern for expressing the idea of returning a sequence of values with the computation proceeding lazily (in the sense that the work for the next value is executed on demand). This is expressed by the IEnumerator interface, with its Current property for fetching the current value and the MoveNext method for forcing the computation of the next value. The sequence is terminated when this method returns false. The C# compiler links these two ideas together so that an IEnumerator-returning method using the yield keyword causes the compiler to produce the implementation of an iterator. Take the following piece of code.

        IEnumerable<int> GetItems()
        {
            yield return 1;
            yield return 2;
            yield return 3;
        }

    The compiler implements this by defining a new class that implements a state machine. This has an integer state that records which yield point we should go to if we are resumed. It also has a field that records the Current value of the enumerator and a field for recording the thread.
    This latter value is used for optimising the creation of iterator instances.

        [CompilerGenerated]
        private sealed class d__0 : IEnumerable<int>, IEnumerable, IEnumerator<int>, IEnumerator, IDisposable
        {
            // Fields
            private int 1__state;
            private int 2__current;
            public Program 4__this;
            private int l__initialThreadId;

    The body gets converted into the code to construct and initialize this new class.

        private IEnumerable<int> GetItems()
        {
            d__0 d__ = new d__0(-2);
            d__.4__this = this;
            return d__;
        }

    When the class is constructed we set the state, which was passed through as -2, and the current thread.

        public d__0(int 1__state)
        {
            this.1__state = 1__state;
            this.l__initialThreadId = Thread.CurrentThread.ManagedThreadId;
        }

    The state needs to be set to 0 to represent a valid enumerator, and this is done in the GetEnumerator method, which optimises for the usual case where the returned enumerator is only used once.

        IEnumerator<int> IEnumerable<int>.GetEnumerator()
        {
            if ((Thread.CurrentThread.ManagedThreadId == this.l__initialThreadId)
                  && (this.1__state == -2))
            {
                this.1__state = 0;
                return this;
            }

    The state machine itself is implemented inside the MoveNext method.

        private bool MoveNext()
        {
            switch (this.1__state)
            {
                case 0:
                    this.1__state = -1;
                    this.2__current = 1;
                    this.1__state = 1;
                    return true;
                case 1:
                    this.1__state = -1;
                    this.2__current = 2;
                    this.1__state = 2;
                    return true;
                case 2:
                    this.1__state = -1;
                    this.2__current = 3;
                    this.1__state = 3;
                    return true;
                case 3:
                    this.1__state = -1;
                    break;
            }
            return false;
        }

    At each stage, the current value of the state is used to determine how far we got, and then we generate the next value, which we return after recording the next state. Finally we return false from MoveNext to signify the end of the sequence.

    Of course, that example was really simple. The original method body didn't have any local variables. Any local variables need to live between the calls to MoveNext, and so they need to be transformed into fields in much the same way that we did in the case of the lambda expression. More complicated MoveNext methods are required to deal with resources that need to be disposed when the iterator finishes, and sometimes the compiler uses a temporary variable to hold the return value.

    Why all of this explanation? We've implemented the de-compilation of iterators in the current EAP version of Reflector (7). This contrasts with previous versions, where all you could do was look at the MoveNext method and try to figure out the control flow. There's a fair amount of things we have to do. We have to spot the use of a CompilerGenerated class which implements the Enumerator pattern. We need to go to the class and figure out the fields corresponding to the local variables. We then need to go to the MoveNext method and try to break it into the various possible states and spot the state transitions. We can then take these pieces and put them back together into an object model that uses yield return to show the transition points. After that Reflector can carry on optimising using its usual optimisations. The pattern matching is currently a little too sensitive to changes in the code generation, and we only do a limited analysis of the MoveNext method to determine use of the compiler-generated fields.
    In some ways, it is a pity that iterators are compiled away and there is no metadata that reflects the original intent. Without it, we are always going to be dependent on our knowledge of the compiler's implementation. For example, we have noticed that the Async CTP changes the way that iterators are code-generated, so we'll have to do some more work to support that. However, with that warning in place, we seem to do a reasonable job of decompiling the iterators that are built into the framework. Hopefully, the EAP will give us a chance to find examples where we don't spot the pattern correctly or regenerate the wrong code, and we can improve things. Please give it a go, and report any problems.

    Read the article

  • Convert mediawiki to LaTeX syntax

    - by Amit Kumar
    I need to convert MediaWiki markup into LaTeX syntax. The formulas should stay the same, but I need to transform, for example, = something = into \chapter{something}. Although this can be done with a bit of sed, things get a little dirty with the itemize environment, so I was wondering if a better solution can be produced. Is there anything that could be useful for this task? This is the reverse of this question (graciously copied). Pandoc was the answer to that question, but probably not yet for this direction.

    Read the article

  • MS Chart Control for ASP.NET 100% Stacked Bar Chart Question.

    - by Jacob Huggart
    Hello all, I am trying to display a chart with several different bars, each representing a ratio of some values. For example, one bar may show that there are 25 items split across three different groups (say dirty, clean, and broken), with the items from each category adding up to the total of 25. Later the data will change dynamically and be displayed accordingly, but for now all I want to do is display three different values on the same bar. Unfortunately, whatever properties I need to bind the data to are buried somewhere in the menus and I cannot seem to find them. Do any of you have experience with this sort of chart?
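    A rough sketch of doing this from code rather than through the designer menus, using the Chart control's StackedBar100 type so each bar is normalised to 100%; the category names and counts below are made up for illustration:

        using System.Web.UI.DataVisualization.Charting;

        public static class ChartBuilder
        {
            // One series per category; each series contributes one segment to every bar.
            public static Chart BuildStatusChart()
            {
                var chart = new Chart();
                chart.ChartAreas.Add(new ChartArea("Main"));

                string[] categories = { "Dirty", "Clean", "Broken" };
                double[][] counts =
                {
                    new double[] { 10, 5 },   // Dirty counts for bar 1, bar 2
                    new double[] {  8, 12 },  // Clean
                    new double[] {  7, 8 },   // Broken
                };

                for (int i = 0; i < categories.Length; i++)
                {
                    var series = new Series(categories[i]);
                    series.ChartArea = "Main";
                    series.ChartType = SeriesChartType.StackedBar100;
                    for (int bar = 0; bar < counts[i].Length; bar++)
                    {
                        series.Points.AddXY("Bar " + (bar + 1), counts[i][bar]);
                    }
                    chart.Series.Add(series);
                }
                return chart;
            }
        }

    When the data changes later, rebuilding or re-binding the series is enough; the chart type takes care of converting the raw counts into percentages of each bar.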

    Read the article

  • Using property file in hibernate mapping

    - by Zoltan Hamori
    Hi, I have a two-node environment using the same database. In the database there is a resource table with columns RESOURCE_ID, CODE, NODE. The content of the NODE column can be 1 or 2, depending on which node can use the row. As I need to deploy the same ear to both nodes, I would like to map this table like this: <hibernate-mapping> <class name="ResourceVO" table="RESOURCE" dynamic-update="true" optimistic-lock="dirty" where="NODE=${node.value}" > I would like to store the node.value property on the file system, so each instance could identify which resources to use. Is this possible in Hibernate?

    Read the article

  • How would you start automating my job? - Part 2

    - by Jurily
    (Followup to this question) After surviving the first wave of incoming shipments (9 hours of copy/paste), I now believe I have all the requirements. Here is the updated workflow:

        1. Monkey collects email attachments (4 Excel spreadsheets, 1 PDF)
        2. Monkey creates central database, does complex calculations (right now this is also an Excel spreadsheet)
        3. Monkey sends data to two bosses, who set the retail prices independently; first one to reply wins
        4. Monkey sends order form to our other warehouses, also Excel
        5. Monkey sends spreadsheets to VIP customers, carefully sanitized and formatted (4 different discount categories)
        6. Jurily enters the data into the accounting system. I've given up on automating this part; there's too much business logic involved, and the database is a pile of sh^W legacy

    My question: What technologies would you use for a quick and dirty solution? I'm mostly sold on C#, but coming from a Linux/C++ background, I'm horribly confused about my choices in Microsoft-land. For bonus points: How would you redesign the whole system from the ground up? P.S. In case you were wondering, my job title is System Administrator.
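    Since the question leans toward C#, here is a small sketch of one building block for the spreadsheet steps above: pulling a worksheet into a DataTable with the OLE DB provider, so the calculations can run in code instead of by hand. The file path, sheet name and provider string are assumptions (the ACE provider must be installed on the machine):

        using System.Data;
        using System.Data.OleDb;

        static class SpreadsheetReader
        {
            // Reads one worksheet of an .xlsx file into a DataTable.
            public static DataTable ReadSheet(string path, string sheetName)
            {
                string connectionString =
                    "Provider=Microsoft.ACE.OLEDB.12.0;" +
                    "Data Source=" + path + ";" +
                    "Extended Properties='Excel 12.0 Xml;HDR=YES'";

                using (var connection = new OleDbConnection(connectionString))
                using (var adapter = new OleDbDataAdapter(
                           "SELECT * FROM [" + sheetName + "$]", connection))
                {
                    var table = new DataTable(sheetName);
                    adapter.Fill(table); // opens and closes the connection itself
                    return table;
                }
            }
        }

        // Usage (hypothetical file and sheet names):
        // DataTable shipments = SpreadsheetReader.ReadSheet(@"C:\inbox\shipments.xlsx", "Sheet1");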

    Read the article

  • How to get paperclip to delete files

    - by webdestroya
    I have a model that is using Paperclip to manage the file. After I delete the model, I obviously would like the file to be deleted as well, but I cannot seem to find out how to get the file deleted using Paperclip. I have tried self.sourcefile = nil if !sourcefile.dirty? in the before_destroy def, but that had no effect. (I want it to delete the file locally when I test, and on S3 when I use that, so I need a pure Paperclip solution.) Any ideas?

    Read the article

  • Is SEO knowledge important for web developers?

    - by splattne
    Looking at some SEO (search engine optimization) questions on Stack Overflow, I saw ambivalent reactions to them. Some were closed as "not programming related" or were downvoted; others were answered and upvoted. It seems that many developers think SEO is something "dirty" or belongs in the realm of spam. IMHO, designing for search engines and practising SEO techniques adds important value to the final product, much like a good user interface does. Should SEO really be left to specialized non-programmers? Shouldn't web developers have profound SEO knowledge? Or is it okay to apply SEO as a post-development process?

    Read the article

  • For programming content, which simple-to-use-and-set-up PHP-based blogs are preferred?

    - by Johann Gerell
    I've long wanted a place where I can toss my programming-related nuggets. Every day I feel I solve something that I'll surely hit again in the not-so-distant future, but by then I will most certainly have forgotten the previous solution I came up with. So I need to blog it down, quick and dirty, for my own documentation and memory's sake. It must be easy to set up and use. It must handle code syntax and highlighting gracefully for a number of languages, but mainly C# and C++. It must be PHP-based, because that's what my host supplies. I know and have used WordPress (not for code, though), but is it really what I want or need?

    Read the article

  • What's a simple way to web-ify my command-line daemon?

    - by dreeves
    Suppose I have a simple daemon-type script that I run on my webserver. I run it in a terminal, with GNU screen, so I can keep an eye on it. That works fine (incidentally, I use this trick). But now suppose I'd like to make a web page where I can keep an eye on my script's output. What's the easiest way to do that? Notes: This is mainly for myself and a couple of co-hackers, so if websockets is the answer and it only works on Chrome or something, that's acceptable. This question is asking something similar: http://stackoverflow.com/questions/1964494/how-to-make-all-connected-browsers. But I'm hoping for a simpler, quick-and-dirty solution, and especially a general way to quickly do this for any script I might want to keep an eye on from a browser.

    Read the article

  • Where does complexity bloat from?

    - by AareP
    Many of our design decisions are based on our gut feeling about how to avoid complexity and bloat. Some of our complexity fears are justified; we have plenty of painful experience with throwing away deprecated code. Other times we learn that a particular task isn't really as complex as we thought it would be. We notice, for example, that maintaining 3000 lines of code in one file isn't that difficult... or that using special-purpose "dirty flags" isn't really bad OO practice... or that in some cases it's more convenient to have 50 variables in one class than to have 5 different classes with shared responsibilities... One friend has even stated that adding functions to a program isn't really adding complexity to your system. So, what do you think: where does bloated complexity creep in from? Is it variable count, function count, line count, lines of code per function, or something else?

    Read the article

  • Asp.Net MVC best way to update cached table

    - by Eddy Mishiyev
    There are certain tables that get called often but updated rarely. One of these tables is Departments. So to save DB trips, I think it is OK to cache this table, given that it is very small. However, once you cache it, the issue of keeping the table data fresh arises. So what is the best way to determine that the table is dirty and therefore requires a reload, and how should that code be invoked? I'm looking for a solution that will be scalable, so updating the cache right after inserting will not work: if one machine inserts the record, all others on the network should get notified to reload the cache. I was thinking of calling a corresponding web service from T-SQL, but I don't really like the idea of consuming resources on the SQL server. So what are the best practices for resolving this type of problem? Thanks in advance, Eddy
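    One common approach from this era, sketched below and untested here: let SQL Server itself invalidate the cached entry on every web server through a SqlCacheDependency, so no machine has to notify the others. This assumes the database and the Departments table have been enabled for cache notifications (aspnet_regsql) and that a matching sqlCacheDependency entry named "MyDb" exists in web.config; the names are illustrative:

        using System;
        using System.Collections.Generic;
        using System.Web;
        using System.Web.Caching;

        public class Department
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public static class DepartmentCache
        {
            private const string CacheKey = "Departments";

            public static IList<Department> GetDepartments()
            {
                var cached = HttpRuntime.Cache[CacheKey] as IList<Department>;
                if (cached != null)
                    return cached;

                IList<Department> departments = LoadDepartmentsFromDatabase();

                // Evicted on every server once SQL Server detects a change to the
                // Departments table (subject to the configured polling interval),
                // so the next request reloads fresh data.
                var dependency = new SqlCacheDependency("MyDb", "Departments");
                HttpRuntime.Cache.Insert(CacheKey, departments, dependency);

                return departments;
            }

            private static IList<Department> LoadDepartmentsFromDatabase()
            {
                // Placeholder for the real data access call.
                throw new NotImplementedException();
            }
        }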

    Read the article

  • Eclipse CDT printing selected C code snippets/functions

    - by Sint
    Is there a quick and dirty way to print (to dead trees) selected code (C in this case) snippets? In particular, I wanted to print about 200 lines worth of code, but the print dialog only offers printing of particular pages or all pages, not selected text! Of course, one can copy and paste into another editor, but that seems rather harsh. Also, one can output the whole shebang to .pdf, but that again seems like the wrong way of doing things. Perhaps there is a better way? System: Ubuntu 10.04, Eclipse 3.5 with CDT, Subversive plugin

    Read the article
