Search Results

Search found 3459 results on 139 pages for 'if modified since'.


  • Hybrid EAV/CR model via WCF (and statically-typed language)?

    - by Pat
    Background I'm working on the architecture for a cloud-based LOB application, using Silverlight for the client, WCF, ASP.NET/C# for server and SQL Server for storage. The data model requires some flexibility per user (ability to add custom properties and define validation rules for them, for example), and a hybrid EAV/CR persistence model on the server side will suit nicely. Problem I need an efficient and maintainable technology and approach to handle the transformation from the persisted EAV model to/from WCF (and similarly allow the client to bind to the resulting data - DataGrid is a key UI element)? Admission: I don't yet know enough about WCF to understand if it supports ExpandoObject directly, but I suspect it will. Options I started off looking at WCF RIA services, but quickly discovered they're heavily dependent upon both static type data and compile-time code generation. Neither of these appeal. The options I'm considering include: Using WCF RIA services and pass the data over the network directly in EAV form (i.e. Dictionary), and handle the binding issue purely on the client side (like this) Using a dynamic language (probably IronPython) to handle both ends of the communication, with plumbing to generate the necessary CLR type data on the client to allow binding, and transform to/from EAV form on the server (spam preventer stopped me from posting a URL here, I'll try it in a comment). Dynamic LINQ (CreateClass() and friends), although I'm way out of my depth there and don't know what the limitations on that approach might be yet. I'm interested in comments on these approaches as well as alternative approaches that might solve the problem. Other Notes The Silverlight client will not be the only consumer of the service, making me slightly uncomfortable with option #1 above. While the data model is flexible, it's not expected to be modified heavily. For argument's sake, we could assume that we might have 25 distinct data models active at a given time, with something like 10-20 unique data fields/rules each. Modifications to the data model will happen infrequently (typically when a new user is initially configured).
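
    A minimal C# sketch of what option #1 could look like on the wire - shipping the per-entity values as plain name/value pairs so the WCF contract stays fixed while the user-defined schema varies. All type, member and service names here are hypothetical, not part of the actual design:

        using System;
        using System.Collections.Generic;
        using System.Runtime.Serialization;
        using System.ServiceModel;

        // Hypothetical DTO: the EAV payload travels as name/value pairs, so adding a
        // user-defined field never changes the WCF contract itself.
        [DataContract]
        public class EntityDto
        {
            [DataMember] public Guid EntityId { get; set; }
            [DataMember] public string EntityType { get; set; }
            [DataMember] public Dictionary<string, string> Values { get; set; }
        }

        [ServiceContract]
        public interface IEntityService
        {
            [OperationContract]
            EntityDto GetEntity(Guid entityId);

            [OperationContract]
            void SaveEntity(EntityDto entity);
        }

    The Values dictionary would still have to be projected into something the Silverlight DataGrid can bind to on the client, which is exactly the binding problem option #1 leaves open.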

    Read the article

  • DB2 Child Table Not Working - Create Table

    - by gamerzfuse
    I have a bit of a task before me. (DB2 Database) I need to create a table that will be a child table (is that what it is called in SQL?) I need it so that it has a foreign key constraint with my other table, so when the parent table is modified (record deleted) the child table also loses that record. Once I have the table, I also need to populate it with the data from the other table (if there is an easy way to UPDATE this). If you could point me in the right direction, this would help alot, as I do not even know what syntax to look for. Thanks in advance The table I have in place: create table titleauthors ( au_id char(11), title_id char(6), au_ord integer, royaltyshare decimal(5,2)); The table I am creating: create table titles ( title_id char(6), title varchar(80), type varchar(12), pub_id char(4), price decimal(9,2), advance decimal(9,2), ytd_sales integer, contract integer, notes varchar(200), pubdate date); I need the title_id to be matched with the title_id from the parent table AND use the ON DELETE CASCADE syntax to delete when that table is deleted from. My Attempt: CREATE TABLE BookTitles ( title_id char(6) NOT NULL CONSTRAINT BookTitles_title_id_pk REFERENCES titleauthors(title_id) ON DELETE CASCADE, title varchar(80) NOT NULL, type varchar(12), pub_id char(4), price decimal(9,2), advance decimal(9,2), ytd_sales integer, contract integer, notes varchar(200), pubdate date) ; Thanks in advance!
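
    A minimal DB2-flavoured sketch of the direction to look in, assuming titleauthors really is the parent and its title_id values can be made NOT NULL and unique (DB2 only accepts a foreign key that references a primary-key or unique constraint; on older DB2 releases the ALTER statements may also require a REORG afterwards), plus an INSERT ... SELECT to copy the existing rows across:

        -- The referenced column must be NOT NULL and carry a PRIMARY KEY or UNIQUE
        -- constraint before DB2 will accept a foreign key pointing at it.
        ALTER TABLE titleauthors ALTER COLUMN title_id SET NOT NULL;
        ALTER TABLE titleauthors ADD CONSTRAINT titleauthors_pk PRIMARY KEY (title_id);

        CREATE TABLE BookTitles (
            title_id  CHAR(6)      NOT NULL,
            title     VARCHAR(80)  NOT NULL,
            type      VARCHAR(12),
            pub_id    CHAR(4),
            price     DECIMAL(9,2),
            advance   DECIMAL(9,2),
            ytd_sales INTEGER,
            contract  INTEGER,
            notes     VARCHAR(200),
            pubdate   DATE,
            CONSTRAINT booktitles_title_fk
                FOREIGN KEY (title_id)
                REFERENCES titleauthors (title_id)
                ON DELETE CASCADE
        );

        -- Populate the new table from the existing titles table.
        INSERT INTO BookTitles (title_id, title, type, pub_id, price, advance,
                                ytd_sales, contract, notes, pubdate)
            SELECT title_id, title, type, pub_id, price, advance,
                   ytd_sales, contract, notes, pubdate
            FROM titles;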

    Read the article

  • Why do System.IO.Log SequenceNumbers have variable length?

    - by Doug McClean
    I'm trying to use the System.IO.Log features to build a recoverable transaction system. I understand it to be implemented on top of the Common Log File System. The usual ARIES approach to write-ahead logging involves persisting log record sequence numbers in places other than the log (for example, in the header of the database page modified by the logged action). Interestingly, the documentation for CLFS says that such sequence numbers are always 64-bit integers. Confusingly, however, the .Net wrapper around those SequenceNumbers can be constructed from a byte[] but not from a UInt64. It's value can also be read as a byte[], but not as a UInt64. Inspecting the implementation of SequenceNumber.GetBytes() reveals that it can in fact return arrays of either 8 or 16 bytes. This raises a few questions: Why do the .Net sequence numbers differ in size from the CLFS sequence numbers? Why are the .Net sequence numbers variable in length? Why would you need 128 bits to represent such a sequence number? It seems like you would truncate the log well before using up a 64-bit address space (16 exbibytes, or around 10^19 bytes, more if you address longer words)? If log sequence numbers are going to be represented as 128 bit integers, why not provide a way to serialize/deserialize them as pairs of UInt64s instead of rather-pointlessly incurring heap allocations for short-lived new byte[]s every time you need to write/read one? Alternatively, why bother making SequenceNumber a value type at all? It seems an odd tradeoff to double the storage overhead of log sequence numbers just so you can have an untruncated log longer than a million terabytes, so I feel like I'm missing something here, or maybe several things. I'd much appreciate it if someone in the know could set me straight.

    Read the article

  • Multi-accordion help (CSS issue maybe?)

    - by Josh
    So, I've been trying to develop this multi-accordion news section for this site. It's actually all working, thanks to an insightful plugin. I've modified it a little bit so it works as I want it to, but I've run into two issues, one which is possibly CSS. Issue #1: The idea for the user is that when they view this page, they see all the recent headlines. They can also see who it has been posted by and how many comments have been made to this article. If they wish, they can then click on the headline and the field will expand into the article. They can then either make a comment or view the comments via clicking the View Comments link or clicking the "number of comments" link in the "Posted by..." area (a shortcut to the comments basically). The problem I'm having is if I make the AUTHOR or the "0" comments a link, it breaks the accordion because the accordion uses an A CLASS to open it up. I'm looking for a fix, I've tried making it a H1 or a DIV but that also breaks it. Issue #2: This is a pretty picky one, but when you click the headline it expands, but at least in Firefox (haven't tested it in Chrome yet) the text jumps from the right and to the left, locking in place from which the CSS tells it to (padding-left). I don't know why it's exactly doing that, if anyone has any insight on that, it'd be appreciated. A two-parter to this issue is when you open the Headline to the article and then decide to close the article by clicking on the Headline, parts of the accordion jumps from the darker purple to the light purple before the task is finished. I'm also interested fixing this, but this issue in its entirety are all pretty nit picky things. You can view the demo of the site here: http://www.notedls.com/demo Please if anyone has any advice or fixes, I'd appreciate it, I've been trying to get this all to work to the best of my ability, but I'm clearly no guru or expert. Thanks!

    Read the article

  • Browser: Cookie lost on refresh

    - by Nirmal
    I am experiencing a strange behaviour of my application in the Chrome browser (no problem with other browsers). When I refresh a page, the cookie is being sent properly, but intermittently the browser doesn't seem to pass the cookie on some refreshes. This is how I set my cookie: $identifier = /* some weird string */; $key = md5(uniqid(rand(), true)); $timeout = number_format(time(), 0, '.', '') + 43200; setcookie('fboxauth', $identifier . ":" . $key, $timeout, "/", "fbox.mysite.com", 0); This is what I am using for page headers: header("Last-Modified: " . gmdate("D, d M Y H:i:s") . " GMT"); header("Cache-Control: no-cache, must-revalidate"); // HTTP/1.1 header("Expires: Thu, 25 Nov 1982 08:24:00 GMT"); // Date in the past Do you see any issue here that might affect the cookie handling? Thank you for any suggestion. EDIT-01: It seems that the cookie is not being sent with some requests. This happens intermittently and I am seeing this behaviour for ALL the browsers now. Has anyone come across such a situation? Is there any situation where a cookie will not be sent with the request? Thanks again, for any guideline.

    Read the article

  • python VTE Terminal weirdness

    - by mykhal
    I'm trying to use the terminal from the Python VTE binding (python-vte from Debian squeeze) as a virtual terminal emulator (just for ANSI/control-char text processing). In an interactive Python console, everything looks (almost) all right: >>> import vte >>> term = vte.Terminal() >>> term.feed("a\nb") >>> print repr(term.get_text(lambda *a: True).rstrip()) 'a\n b' However, launching this code (slightly modified) as a Python script yields a different result: $ python vte_wiredness_1.py '' Strangely enough, pasting the code back into a (new) interactive Python session also yields an empty string: >>> import vte >>> term = vte.Terminal() >>> term.feed("a\nb") >>> print repr(term.get_text(lambda *a: True).rstrip()) '' >>> The first thing that came to my mind was that the only difference between the two cases is the timing - there had to be some delay before get_text. Unfortunately, preceding get_text with a sleep of a few seconds did not help. Then I thought it had something to do with the X window environment, but the results are the same in a pure Linux console (with some warning about missing graphics). I wonder what causes such unpredictable behavior (interactive console - pasted vs. typed - and it's not the delay; the interactive console has nothing to do with the VTE terminal object, I guess). Can someone explain what is happening? Is it possible to use the VTE terminal this way? That the "b" letter in the output is preceded by a space is another strangeness (all consecutive lines are preceded by more spaces; it looks like I have to send a carriage return before the string). (The lambda *a: True get_text argument I'm using is a dummy callback - it's some SlotSelectedCallback; I'd be grateful for an explanation of that as well.)
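
    One thing worth trying - an assumption, not a verified fix for this exact python-vte build: feed() only queues the data and the widget consumes it from the GTK main loop, so a plain time.sleep() changes nothing, while pumping pending events before get_text() might. A small sketch:

        # Assumes PyGTK is available alongside python-vte.
        import gtk
        import vte

        term = vte.Terminal()
        term.feed("a\r\nb")            # \r\n rather than \n, per the indentation note above

        while gtk.events_pending():    # let the terminal widget process the queued input
            gtk.main_iteration()

        print repr(term.get_text(lambda *a: True).rstrip())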

    Read the article

  • iPhone: Using an NSMutableArray in the AppDelegate as a Global Variable

    - by aahrens
    What i'm trying to accomplish is to have an NSMutableArray defined in the AppDelegate. I then have two UIViewControllers. One view is responsible for displaying the array from the AppDelegate. The other view is used to add items to the array. So the array starts out to be empty. View1 doesn't display anything because the array is empty. The User goes to View2 and adds an item to the array in AppDelegate. Then when the user goes back to View1 it now displays one item. Here is how I'm trying to accomplish this @interface CalcAppDelegate : NSObject <UIApplicationDelegate> { UIWindow *window; UITabBarController *tabBarController; NSMutableArray *globalClasses; } @property (nonatomic,retain) NSMutableArray *globalClasses; My other view In the viewDidload I set the array in my View to be the one in the AppDelegate. In an effort to retain values. allCourses = [[NSMutableArray alloc]init]; CalcAppDelegate *appDelegate = (CalcAppDelegate *)[[UIApplication sharedApplication] delegate]; allCourses = appDelegate.globalClasses; Then I would update my allCourses array by adding a new item. Then try to set the array in the AppDelegate to be equal to the modified one. CalcAppDelegate *appDel = (CalcAppDelegate *)[[UIApplication sharedApplication] delegate]; NSLog(@"Size before reset %d",[appDel.globalClasses count]); appDel.globalClasses = allCourses; NSLog(@"Size after reset %d",[appDel.globalClasses count]); What I'm seeing that's returned is 2 in the before, and 2 after. So it doesn't appear to be getting updated properly. Any suggestions?
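
    A short Objective-C sketch of the usual pattern, using the names from the snippets above (newCourse and the table view are hypothetical): create the shared array once in the app delegate and mutate it in place, instead of alloc-ing a second array in the view and re-assigning the property.

        // In CalcAppDelegate, when the app starts (e.g. applicationDidFinishLaunching:):
        self.globalClasses = [NSMutableArray array];

        // In View2, when the user adds an item -- mutate the shared array directly:
        CalcAppDelegate *appDelegate =
            (CalcAppDelegate *)[[UIApplication sharedApplication] delegate];
        [appDelegate.globalClasses addObject:newCourse];   // newCourse: hypothetical new item

        // In View1 (viewWillAppear: is a good spot), point at the same array --
        // the [[NSMutableArray alloc] init] assigned to allCourses above is simply
        // leaked once it is re-assigned, so drop it:
        allCourses = appDelegate.globalClasses;
        [self.tableView reloadData];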

    Read the article

  • How to cache an HTTP POST response?

    - by KARASZI István
    I would like to create a cacheable HTTP response for a POST request. My current implementation responds with the following for the POST request: HTTP/1.1 201 Created Expires: Sat, 03 Oct 2020 15:33:00 GMT Cache-Control: private,max-age=315360000,no-transform Content-Type: application/x-www-form-urlencoded; charset=UTF-8 Content-Length: 9 ETag: 2120507660800737950 Last-Modified: Wed, 06 Oct 2010 15:33:00 GMT ......... But it looks like the browsers (Safari and Firefox tested) are not caching the response. In the HTTP RFC the corresponding part says: Responses to this method are not cacheable, unless the response includes appropriate Cache-Control or Expires header fields. However, the 303 (See Other) response can be used to direct the user agent to retrieve a cacheable resource. So I think it should be cached. I know I could set a session variable, set a cookie and do a 303 redirect, but I want to cache the response of the POST request itself. Is there any way to do this? P.S.: I started with a simple 200 OK, and that does not work either. Thanks,

    Read the article

  • How to read/write high-resolution (24-bit, 8 channel) .wav files in Java?

    - by dB'
    I'm trying to write a Java application that manipulates high resolution .wav files. I'm having trouble importing the audio data, i.e. converting the .wav file into an array of doubles. When I use a standard approach an exception is thrown. AudioFileFormat as = AudioSystem.getAudioFileFormat(new File("orig.wav")); --> javax.sound.sampled.UnsupportedAudioFileException: file is not a supported file type Here's the file format info according to soxi: dB$ soxi orig.wav soxi WARN wav: wave header missing FmtExt chunk Input File : 'orig.wav' Channels : 8 Sample Rate : 96000 Precision : 24-bit Duration : 00:00:03.16 = 303526 samples ~ 237.13 CDDA sectors File Size : 9.71M Bit Rate : 24.6M Sample Encoding: 32-bit Floating Point PCM Can anyone suggest the simplest method for getting this audio into Java? I've tried using a few techniques. As stated above, I've experimented with the Java AudioSystem (on both Mac and Windows). I've also tried using Andrew Greensted's WavFile class, but this also fails (WavFileException: Compression Code 3 not supported). One workaround is to convert the audio to 16 bits using sox (with the -b 16 flag), but this is suboptimal since it increases the noise floor. Incidentally, I've noticed that the file CAN be read by libsndfile. Is my best bet to write a jni wrapper around libsndfile, or can you suggest something quicker? Note that I don't need to play the audio, I just need to analyze it, manipulate it, and then write it out to a new .wav file. * UPDATE * I solved this problem by modifying Andrew Greensted's WavFile class. His original version only read files encoded as integer values ("format code 1"); my files were encoded as floats ("format code 3"), and that's what was causing the problem. I'll post the modified version of Greensted's code when I get a chance. In the meantime, if anyone wants it, send me a message.
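
    A rough Java sketch (not the poster's modified WavFile class) of pulling the samples out by hand: it trusts soxi's report that the data is 32-bit IEEE float PCM (format code 3), assumes a plain little-endian RIFF layout, and skips every chunk except "data".

        import java.io.IOException;
        import java.io.RandomAccessFile;
        import java.nio.ByteBuffer;
        import java.nio.ByteOrder;

        public class FloatWavReader {
            /** Returns the raw samples, interleaved across all channels. */
            public static float[] readSamples(String path) throws IOException {
                RandomAccessFile in = new RandomAccessFile(path, "r");
                try {
                    in.seek(12);                               // skip "RIFF", size, "WAVE"
                    while (true) {
                        byte[] hdr = new byte[8];
                        in.readFully(hdr);
                        String id = new String(hdr, 0, 4, "US-ASCII");
                        int size = ByteBuffer.wrap(hdr).order(ByteOrder.LITTLE_ENDIAN).getInt(4);
                        if (id.equals("data")) {
                            byte[] raw = new byte[size];
                            in.readFully(raw);
                            ByteBuffer data = ByteBuffer.wrap(raw).order(ByteOrder.LITTLE_ENDIAN);
                            float[] samples = new float[size / 4]; // format code 3 = IEEE float
                            for (int i = 0; i < samples.length; i++) {
                                samples[i] = data.getFloat();
                            }
                            return samples;
                        }
                        in.skipBytes(size + (size & 1));       // chunks are word-aligned
                    }
                } finally {
                    in.close();
                }
            }
        }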

    Read the article

  • IIS6 compressing static files, but not JSON requests

    - by user500038
    I have IIS6 compression setup, and static content is being compressed correctly as per Coding Horror. Example of a static file: Response Headers Content-Length 55513 Content-Type application/x-javascript Content-Encoding gzip Last-Modified Mon, 20 Dec 2010 15:31:58 GMT Accept-Ranges bytes Vary Accept-Encoding Server Microsoft-IIS/6.0 X-Powered-By ASP.NET Date Wed, 29 Dec 2010 16:37:23 GMT Request Headers Accept */* Accept-Language en-us,en;q=0.5 Accept-Encoding gzip,deflate Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive 115 Connection keep-alive As you can see, the response is correctly compressed. However with a JSON call you'll see the request with the gzip parameter correctly set: Accept application/json, text/javascript, */*; q=0.01 Accept-Language en-us,en;q=0.5 Accept-Encoding gzip,deflate Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive 115 Connection keep-alive X-Requested-With XMLHttpRequest Then for the response: Cache-Control private Content-Length 6811 Content-Type application/json; charset=utf-8 Server Microsoft-IIS/6.0 X-Powered-By ASP.NET X-AspNet-Version 2.0.50727 X-AspNetMvc-Version 2.0 No gzip! I found this article by Rick Stahl, which outlines how to compress in your code, but I'd like IIS to handle this. Is this possible with IIS6?
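
    IIS 6 only compresses dynamic responses if dynamic compression is switched on in the metabase and the request's extension is listed in HcScriptFileExtensions; a sketch of the usual adsutil.vbs incantation follows. The extension list here is a guess - add whatever extension actually serves the JSON (.aspx, .ashx, .mvc, ...) - and an iisreset is needed before the change takes effect.

        REM Run from a command prompt on the server.
        cd %systemdrive%\Inetpub\AdminScripts

        cscript adsutil.vbs set W3Svc/Filters/Compression/GZIP/HcDoDynamicCompression true
        cscript adsutil.vbs set W3Svc/Filters/Compression/DEFLATE/HcDoDynamicCompression true

        cscript adsutil.vbs set W3Svc/Filters/Compression/GZIP/HcScriptFileExtensions "asp" "dll" "exe" "aspx" "ashx"
        cscript adsutil.vbs set W3Svc/Filters/Compression/DEFLATE/HcScriptFileExtensions "asp" "dll" "exe" "aspx" "ashx"

        iisreset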

    Read the article

  • Multi-level shop, XML or SQL: best practice?

    - by danrichardson
    Hello, i have a general "best practice" question regarding building a multi-level shop, which i hope doesn't get marked down/deleted as i personally think it's quite a good "subjective" question. I am a developer in charge (in most part) of maintaining and evolving a cms system and associated front-end functionality. Over the past half year i have developed a multiple level shop system so that an infinite level of categories may exist down into a product level and all works fine. However over the last week or so i have questioned by own methods in front-end development and the best way to show the multi-level data structure. I currently use a sql server database (2000) and pull out all the shop levels and then process them into an enumerable typed list with child enumerable typed lists, so that all levels are sorted. This in my head seems quite process heavy, but we're not talking about thousands of rows, generally only 1-500 rows maybe. I have been toying with the idea recently of storing the structure in an XML document (as well as the database) and then sending last modified headers when serving/requesting the document for, which would then be processed as/when nessecary with an xsl(t) document - which would be processed server side. This is quite a handy, reusable method of storing the data but does it have more overheads in the fact im opening and closing files? And also the xml will require a bit of processing to pull out blocks of xml if for instance i wanted to show two level mid way through the tree for a side menu. I use the above method for sitemap purposes so there is currently already code i have built which does what i require, but unsure what the best process is to go about. Maybe a hybrid method which pulls out the data, sorts it and then makes an xml document/stream (XDocument/XmlDocument) for xsl processing is a good way? - This is the way i currently make the cms work for the shop. So really (and thanks for sticking with me on this), i am just wandering which methods other people use or recommend as being the best/most logical way of doing things. Thanks Dan
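
    For scale, a C# sketch of the in-memory step (names are hypothetical): with a few hundred rows, building the whole tree from the flat SQL result is a single O(n) pass, which is one argument for keeping SQL as the single source of truth and treating any XML as a derived, cacheable view rather than a second master copy.

        using System.Collections.Generic;

        public class Category
        {
            public int Id;
            public int? ParentId;
            public string Name;
            public List<Category> Children = new List<Category>();
        }

        public static class CategoryTreeBuilder
        {
            // Turns the flat row list into a forest of root categories in one pass.
            public static List<Category> Build(IEnumerable<Category> flatRows)
            {
                var byId = new Dictionary<int, Category>();
                foreach (var row in flatRows)
                    byId[row.Id] = row;

                var roots = new List<Category>();
                foreach (var row in byId.Values)
                {
                    if (row.ParentId.HasValue && byId.ContainsKey(row.ParentId.Value))
                        byId[row.ParentId.Value].Children.Add(row);
                    else
                        roots.Add(row);
                }
                return roots;
            }
        }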

    Read the article

  • Custom webserver caching

    - by Mark Kinsella
    I'm working with a custom webserver on an embedded system and having some problems correctly setting my HTTP Headers for caching. Our webserver is generating all dynamic content as XML and we're using semi-static XSL files to display it with some dynamic JSON requests thrown in for good measure along with semi-static images. I say "semi-static" because the problems occur when we need to do a firmware update which might change the XSL and image files. Here's what needs to be done: cache the XSL and image files and do not cache the XML and JSON responses. I have full control over the HTTP response and am currently: Using ETags with the XSL and image files, using the modified time and size to generate the ETag Setting Cache-Control: no-cache on the XML and JSON responses As I said, everything works dandy until a firmware update when the XSL and image files are sometimes cached. I've seen it work fine with the latest versions of Firefox and Safari but have had some problems with IE. I know one solution to this problem would be simply rename the XSL and image files after each version (eg. logo-v1.1.png, logo-v1.2.png) and set the Expires header to a date in the future but this would be difficult with the XSL files and I'd like to avoid this. Note: There is a clock on the unit but requires the user to set it and might not be 100% reliable which is what might be causing my caching issues when using ETags. What's the best practice that I should employ? I'd like to avoid as many webserver requests as possible but invalidating old XSL and image files after a software update is the #1 priority.
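
    One header policy that matches those goals, sketched as illustrative responses (the ETag value is made up). For the semi-static XSL and image files, force revalidation on every request so a firmware update is picked up immediately and the unreliable clock never decides freshness:

        HTTP/1.1 200 OK
        ETag: "4f3a2c-1a2b"
        Cache-Control: max-age=0, must-revalidate

    When the client's If-None-Match still matches, answer the conditional request cheaply:

        HTTP/1.1 304 Not Modified
        ETag: "4f3a2c-1a2b"

    For the dynamic XML and JSON responses, forbid storage outright:

        HTTP/1.1 200 OK
        Cache-Control: no-cache, no-store, must-revalidate
        Pragma: no-cache
        Expires: 0

    The cost is one conditional request per semi-static file per page view, but it avoids far-future Expires headers entirely, which is what lets stale files survive an update in the first place.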

    Read the article

  • Flash External Interface issue with Firefox

    - by majestiq
    I am having a hard time getting ExternalInterface to work on Firefox. I am trying to call a AS3 function from javascript. The SWF is setup with the right callbacks and it is working in IE. I am using AC_RunActiveContent.js to embed the swf into my page. However, I have modified it to add an ID to the Object / Embed Tags. Below are object and embed tag that are generated for IE and for Firefox respectively. <object codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=9,0,0,0" width="400" height="400" align="middle" id="jpeg_encoder2" name="jpeg_encoder3" classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000" > <param name="movie" value="/jpeg_encoder/jpeg_encoder3.swf" /> <param name="quality" value="high" /> <param name="play" value="true" /> <param name="loop" value="true" /> <param name="scale" value="showall" /> <param name="wmode" value="window" /> <param name="devicefont" value="false" /> <param name="bgcolor" value="#ffffff" /> <param name="menu" value="false" /> <param name="allowFullScreen" value="false" /> <param name="allowScriptAccess" value="always" /> </object> <embed width="400" height="400" src="/jpeg_encoder/jpeg_encoder3.swf" quality="high" pluginspage="http://www.macromedia.com/go/getflashplayer" align="middle" play="true" loop="true" scale="showall" wmode="window" devicefont="false" id="jpeg_encoder2" bgcolor="#ffffff" name="jpeg_encoder3" menu="false" allowFullScreen="false" allowScriptAccess="always" type="application/x-shockwave-flash" > </embed> I am calling the function like this... <script> try { document.getElementById('jpeg_encoder2').processImage(z); } catch (e) { alert(e.message); } </script> In Firefox, I get an error saying "document.getElementById("jpeg_encoder2").processImage is not a function" Any Ideas?
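
    A sketch of the lookup helper commonly used with AC_RunActiveContent-style markup. Note that in the generated tags above the <object> carries id "jpeg_encoder2" while both tags carry name "jpeg_encoder3", so a getElementById lookup can only ever find the IE path; Gecko exposes the <embed> by name instead:

        function getFlashMovie(movieName) {
            // IE exposes the <object> on window; Gecko-based browsers expose the
            // <embed> via document[name].
            return (navigator.appName.indexOf("Microsoft") != -1)
                ? window[movieName]
                : document[movieName];
        }

        try {
            getFlashMovie("jpeg_encoder3").processImage(z);   // z as in the snippet above
        } catch (e) {
            alert(e.message);
        }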

    Read the article

  • Multiple threads modifying a collection in Java??

    - by posdef
    Hi, The project I am working on requires a whole bunch of queries towards a database. In principle there are two types of queries I am using: read from excel file, check for a couple of parameters and do a query for hits in the database. These hits are then registered as a series of custom classes. Any hit may (and most likely will) occur more than once so this part of the code checks and updates the occurrence in a custom list implementation that extends ArrayList. for each hit found, do a detail query and parse the output, so that the classes created in (I) get detailed info. I figured I would use multiple threads to optimize time-wise. However I can't really come up with a good way to solve the problem that occurs with the collection these items are stored in. To elaborate a little bit; throughout the execution objects are supposed to be modified by both (I) and (II). I deliberately didn't c/p any code, as it would be big chunks of code to make any sense.. I hope it make some sense with the description above. Thanks,
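
    A small Java sketch (class and method names are hypothetical) of one way to avoid a hand-synchronized ArrayList subclass: key the hits in a ConcurrentHashMap, count occurrences atomically, and hand the per-hit detail queries to an executor, so phase (I) and phase (II) never contend on a plain list. A caller would use it as if (registry.recordHit(id)) { registry.scheduleDetailQuery(...); } so the detail lookup is submitted exactly once per distinct hit.

        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.ConcurrentMap;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.atomic.AtomicInteger;

        public class HitRegistry {
            private final ConcurrentMap<String, AtomicInteger> occurrences =
                    new ConcurrentHashMap<String, AtomicInteger>();
            private final ExecutorService detailPool = Executors.newFixedThreadPool(4);

            /** Phase I: record one occurrence; returns true only for the first sighting. */
            public boolean recordHit(String hitId) {
                AtomicInteger fresh = new AtomicInteger(0);
                AtomicInteger counter = occurrences.putIfAbsent(hitId, fresh);
                if (counter == null) {
                    counter = fresh;
                }
                return counter.incrementAndGet() == 1;
            }

            /** Phase II: run a detail query off the calling thread. */
            public void scheduleDetailQuery(Runnable detailQuery) {
                detailPool.execute(detailQuery);
            }

            public int occurrencesOf(String hitId) {
                AtomicInteger counter = occurrences.get(hitId);
                return counter == null ? 0 : counter.get();
            }

            public void shutdown() {
                detailPool.shutdown();
            }
        }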

    Read the article

  • Gzip and subprocess' stdout in python

    - by pythonic metaphor
    I'm using python 2.6.4 and discovered that I can't use gzip with subprocess the way I might hope. This illustrates the problem: May 17 18:05:36> python Python 2.6.4 (r264:75706, Mar 10 2010, 14:41:19) [GCC 4.1.2 20071124 (Red Hat 4.1.2-42)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import gzip >>> import subprocess >>> fh = gzip.open("tmp","wb") >>> subprocess.Popen("echo HI", shell=True, stdout=fh).wait() 0 >>> fh.close() >>> [2]+ Stopped python May 17 18:17:49> file tmp tmp: data May 17 18:17:53> less tmp "tmp" may be a binary file. See it anyway? May 17 18:17:58> zcat tmp zcat: tmp: not in gzip format Here's what it looks like inside less HI ^_<8B>^H^Hh<C0><F1>K^B<FF>tmp^@^C^@^@^@^@^@^@^@^@^@ which looks like it put in the stdout as text and then put in an empty gzip file. Indeed, if I remove the "Hi\n", then I get this: May 17 18:22:34> file tmp tmp: gzip compressed data, was "tmp", last modified: Mon May 17 18:17:12 2010, max compression What is going on here?
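
    What is happening: GzipFile.fileno() hands back the underlying raw file's descriptor, so Popen(stdout=fh) makes the child write uncompressed bytes straight past the compressor (the plain "HI"), and the header of an empty gzip stream is appended when fh.close() runs - which matches the dump above. The usual workaround is to pipe the child's output through Python so GzipFile.write() does the writing; a sketch:

        import gzip
        import shutil
        import subprocess

        fh = gzip.open("tmp", "wb")
        p = subprocess.Popen("echo HI", shell=True, stdout=subprocess.PIPE)
        shutil.copyfileobj(p.stdout, fh)   # streams and compresses chunk by chunk
        p.wait()
        fh.close()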

    Read the article

  • How do I apply a "template" or "skeleton" of code in C# here?

    - by Scott Stafford
    In my business layer, I need many, many methods that follow the pattern: public BusinessClass PropertyName { get { if (this.m_LocallyCachedValue == null) { if (this.Record == null) { this.m_LocallyCachedValue = new BusinessClass( this.Database, this.PropertyId); } else { this.m_LocallyCachedValue = new BusinessClass( this.Database, this.Record.ForeignKeyName); } } return this.m_LocallyCachedValue; } } I am still learning C#, and I'm trying to figure out the best way to write this pattern once and add methods to each business layer class that follow this pattern with the proper types and variable names substituted. BusinessClass is a typename that must be substituted, and PropertyName, PropertyId, ForeignKeyName, and m_LocallyCachedValue are all variables that should be substituted for. Are attributes usable here? Do I need reflection? How do I write the skeleton I provided in one place and then just write a line or two containing the substitution parameters and get the pattern to propagate itself? EDIT: Modified my misleading title -- I am hoping to find a solution that doesn't involve code generation or copy/paste techniques, and rather to be able to write the skeleton of the code once in a base class in some form and have it be "instantiated" into lots of subclasses as the accessor for various properties. EDIT: Here is my solution, as suggested but left unimplemented by the chosen answerer. // I'll write many of these... public BusinessClass PropertyName { get { return GetSingleRelation(ref this.m_LocallyCachedValue, this.PropertyId, "ForeignKeyName"); } } // That all call this. public TBusinessClass GetSingleRelation<TBusinessClass>( ref TBusinessClass cachedField, int fieldId, string contextFieldName) { if (cachedField == null) { if (this.Record == null) { ConstructorInfo ci = typeof(TBusinessClass).GetConstructor( new Type[] { this.Database.GetType(), typeof(int) }); cachedField = (TBusinessClass)ci.Invoke( new object[] { this.Database, fieldId }); } else { var obj = this.Record.GetType().GetProperty(objName).GetValue( this.Record, null); ConstructorInfo ci = typeof(TBusinessClass).GetConstructor( new Type[] { this.Database.GetType(), obj.GetType()}); cachedField = (TBusinessClass)ci.Invoke( new object[] { this.Database, obj }); } } return cachedField; }

    Read the article

  • Port Win32 DLL hook to Linux

    - by peachykeen
    I have a program (NWShader) which hooks into a second program's OpenGL calls (NWN) to do post-processing effects and whatnot. NWShader was originally built for Windows, generally modern versions (win32), and uses both DLL exports (to get Windows to load it and grab some OpenGL functions) and Detours (to hook into other functions). I'm using the trick where Win will look in the current directory for any DLLs before checking the sysdir, so it loads mine. I have on DLL that redirects with this method: #pragma comment(linker, "/export:oldFunc=nwshader.newFunc) To send them to a different named function in my own DLL. I then do any processing and call the original function from the system DLL. I need to port NWShader to Linux (NWN exists in both flavors). As far as I can tell, what I need to make is a shared library (.so file). If this is preloaded before the NWN executable (I found a shell script to handle this), my functions will be called. The only problem is I need to call the original function (I would use various DLL dynamic loading methods for this, I think) and need to be able to do Detour-like hooking of internal functions. At the moment I'm building on Ubuntu 9.10 x64 (with the 32-bit compiler flags). I haven't been able to find much on Google to help with this, but I don't know exactly what the *nix community refers to it as. I can code C++, but I'm more used to Windows. Being OpenGL, the only part the needs modified to be compatible with Linux is the hooking code and the calls. Is there a simple and easy way to do this, or will it involve recreating Detours and dynamically loading the original function addresses?
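
    On Linux the rough equivalent of the export-forwarding DLL is an LD_PRELOAD shared object plus dlsym(RTLD_NEXT, ...) to reach the real implementation. A minimal C-style sketch, with glClear standing in for whichever GL entry points NWShader needs:

        #define _GNU_SOURCE            /* for RTLD_NEXT */
        #include <dlfcn.h>
        #include <GL/gl.h>

        /* Exported under the same name as the real call, so the preloaded .so wins
           the symbol lookup; the real function is fetched lazily via RTLD_NEXT. */
        void glClear(GLbitfield mask)
        {
            static void (*real_glClear)(GLbitfield) = 0;
            if (!real_glClear) {
                real_glClear = (void (*)(GLbitfield)) dlsym(RTLD_NEXT, "glClear");
            }

            /* ... post-processing / state capture would go here ... */

            real_glClear(mask);
        }

    Built with something like gcc -m32 -shared -fPIC -o nwshader.so hook.c -ldl and launched via LD_PRELOAD=./nwshader.so (the shell script mentioned above), this covers interception of dynamically linked calls; Detours-style patching of arbitrary internal functions has no one-line equivalent and would still need a trampoline library or hand-rolled code patching.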

    Read the article

  • are there java based auto-updating tools other than WebStart & Eclipse P2?

    - by DaddyB
    Hi, I am working on a java based application and we are looking to ease our deployment of updates. Up until now, we've always simply sent out new install packs & had the sysadmin's on our customer sites roll out the upgrades - painful for a large number of users. what I'd like to do is something similar to java webstart (or eclipse p2) - when the application starts, it checks for updates in a specified location and then downloads the updates prior to starting. But here's my problem - I want more control over what's done outside of the scope of plugins & jar files. For example: I'd like to be able to upate my JVM (we ship a modified version with additional security features). I need to install DLL's - possibly local to the jar files, sometimes to windows Occasiontally run MSI's to install windows components (e.g. printer drivers). I need to modify config files & the registry. I have found a few applications that support this (such as AppLifeUpdate at http://www.kineticjump.com/) but they tend to be .NET focused and it seems a bit perverse to introduce a .NET dependancy on a java application ;) I know I could write my own here, but if there is already a 3rd party library out there that supports this kind of facility, then it would make my life a lot easier. So, has anyone else had a similar problem & knows of some products I could look at? Thanks, Brian.

    Read the article

  • C++: Avoiding lots of boolean variables for multiple verification conditions in a trading app

    - by Naveen
    Hi, I am a junior dev on a trading app. We have an order refresh verification unit: it has to verify order confirmations from the exchange. We send a bunch of different requests in bulk (NEW, MODIFY, CANCEL) to the exchange. Verification has to happen at most N times, at intervals of T, for all orders. If verification succeeds for all the orders before N retries, fine; otherwise we need to mark verification as unsuccessful. I have done some basic coding in a hurry, like below: for( N times ) { for_each ( sent_request_order ) // SENT { 1) get all the refreshed orders from DB or shared mem, i.e. REFRESHED 2) find the current sent order in REFRESHED if( not_found ) not refreshed from exchange, continue to next order if( found ) case NEW : //check for new status, mark verification done case MODIFY : //check for modified status.. //if not, mark pending, go to next order, //revisit the same after T time case CANCEL : //check for cancelled status.. //if not, mark pending, go to next order, //revisit the same after T time } if( all_verified ) exit from verification. wait ( T sec ) } order_verification_pending, order_verification_done, order_visited, order_not_visited, all_verified, all_not_verified... a lot of boolean flags are used for indication. Is there a better approach for doing this - splitting responsibilities across classes? I know this is not a general question, but the flags are getting tedious to handle.
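
    A C++ sketch of one way to drop the scattered booleans (type and member names are made up): give each tracked order a single small state enum and let each verification pass simply count what is still pending.

        #include <cstddef>
        #include <map>
        #include <string>

        enum VerificationState { PENDING, VERIFIED, FAILED };

        struct TrackedOrder {
            std::string       orderId;
            char              requestType;   // 'N', 'M' or 'C'
            VerificationState state;
        };

        class OrderVerifier {
        public:
            // One pass over the sent orders; returns how many are still pending.
            std::size_t verifyPass(std::map<std::string, TrackedOrder>& sent,
                                   const std::map<std::string, char>& refreshedStatus)
            {
                std::size_t pending = 0;
                for (std::map<std::string, TrackedOrder>::iterator it = sent.begin();
                     it != sent.end(); ++it)
                {
                    TrackedOrder& order = it->second;
                    if (order.state != PENDING)
                        continue;

                    std::map<std::string, char>::const_iterator hit =
                        refreshedStatus.find(order.orderId);
                    if (hit != refreshedStatus.end() && hit->second == order.requestType)
                        order.state = VERIFIED;   // exchange confirmed this request
                    else
                        ++pending;                // revisit after the next T interval
                }
                return pending;
            }
        };

    The retry loop then reduces to calling verifyPass up to N times, sleeping T between passes, and reporting failure if the final return value is non-zero - no per-order or global flags needed.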

    Read the article

  • PHP ZipArchive Empty in IE

    - by Jesse Bunch
    Hi, I am using PHP's ZipArchive class to create a zip file containing photos and then serve it up to the browser for download. Here is my code: /** * Grabs the order, packages the files, and serves them up for download. * * @param string $intEntryID * @return void * @author Jesse Bunch */ public static function download_order_by_entry_id($intUniqueID) { $objCustomer = PhotoCustomer::get_customer_by_unique_id($intUniqueID); if ($objCustomer): if (!class_exists('ZipArchive')): trigger_error('ZipArchive Class does not exist', E_USER_ERROR); endif; $objZip = new ZipArchive(); $strZipFilename = sprintf('%s/application/tmp/%s-%s.zip', $_SERVER['DOCUMENT_ROOT'], $objCustomer->getEntryID(), time()); if ($objZip->open($strZipFilename, ZIPARCHIVE::CREATE) !== TRUE): trigger_error('Unable to create zip archive', E_USER_ERROR); endif; foreach($objCustomer->arrPhotosRequested as $objPhoto): $filename = PhotoCart::replace_ee_file_dir_in_string($objPhoto->strHighRes); $objZip->addFile($filename,sprintf('/press_photos/%s-%s', $objPhoto->getEntryID(), basename($filename))); endforeach; $objZip->close(); header('Last-Modified: '.gmdate('D, d M Y H:i:s', filemtime($strZipFilename)).' GMT', TRUE, 200); header('Cache-Control: no-cache', TRUE); header('Pragma: Public', TRUE); header('Expires: ' . gmdate('D, d M Y H:i:s', time()) . ' GMT', TRUE); header('Content-Length: '.filesize($strZipFilename), TRUE); header('Content-disposition: attachment; filename=press_photos.zip', TRUE); header('Content-Type: application/octet-stream', TRUE); ob_start(); readfile($strZipFilename); ob_end_flush(); exit; else: trigger_error('Invalid Customer', E_USER_ERROR); endif; } This code works really well with all browsers but IE. In IE, the file downloads correctly, but the zip archive is empty. When trying to extract the files, Windows tells me that the zip archive is corrupt. Has anyone had this issue before?
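
    A commonly suggested adjustment to try - an assumption, not a confirmed fix: Internet Explorer refuses to save downloads it has been told not to cache (notoriously over SSL), so dropping the no-cache/Pragma pair and keeping the header block minimal often cures "downloaded but empty/corrupt" symptoms, along with making sure no whitespace, BOM or debug output is emitted before the headers.

        <?php
        // $strZipFilename as built above, after $objZip->close().
        header('Content-Type: application/zip');
        header('Content-Disposition: attachment; filename="press_photos.zip"');
        header('Content-Length: ' . filesize($strZipFilename));
        header('Content-Transfer-Encoding: binary');
        header('Cache-Control: private');   // cacheable enough for IE to write the file
        readfile($strZipFilename);
        exit;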

    Read the article

  • NSUndoManager, Core Data and selective undo/redo

    - by Combat
    I'm working on a core data application that has a rather large hierarchy of managed objects similar to a tree. When a base object is created, it creates a few child objects which in turn create their own child objects and so on. Each of these child objects may gather information using NSURLConnections. Now, I'd like to support undo/redo with the undoManager in the managedObjectContext. The problem is, if a user creates a base object, then tries to undo that action, the base object is not removed. Instead, one or more of the child objects may be removed. Obviously this type of action is unpredictable and unwanted. So I tried disabling undo registration by default. I did this by calling disableUndoRegistration: before anything is modified in the managedObjectContext. Then, enabling undo registration before base operations such as creating a base object the again re-disabling registrations afterwords. Now when i try to undo, I get this error: undo: NSUndoManager 0x1026428b0 is in invalid state, undo was called with too many nested undo groups Thoughts?
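
    One technique worth sketching (standard NSUndoManager / NSManagedObjectContext API; the surrounding structure is assumed): wrap the whole "create base object plus children" action in an explicit undo group and flush pending changes before closing it, so the insertions register as a single undoable unit rather than whatever happens to be pending when the user hits undo.

        NSUndoManager *undoManager = [managedObjectContext undoManager];

        [undoManager beginUndoGrouping];

        BaseObject *base =   // BaseObject / createChildObjects are hypothetical names
            [NSEntityDescription insertNewObjectForEntityForName:@"BaseObject"
                                          inManagedObjectContext:managedObjectContext];
        [base createChildObjects];

        // Make sure every pending insert is registered with the undo manager before
        // the group closes; registrations that arrive later (e.g. from
        // NSURLConnection callbacks) would otherwise land in the wrong group.
        [managedObjectContext processPendingChanges];
        [undoManager endUndoGrouping];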

    Read the article

  • Problem with Spring security's logout

    - by uther-lightbringer
    Hello, I've got a problem logging out with the Spring framework. First, when I want j_spring_security_logout to handle it for me, I get a 404: j_spring_security_logout not found. sample-security.xml: <http> <intercept-url pattern="/messageList.htm*" access="ROLE_USER,ROLE_GUEST" /> <intercept-url pattern="/messagePost.htm*" access="ROLE_USER" /> <intercept-url pattern="/messageDelete.htm*" access="ROLE_ADMIN" /> <form-login login-page="/login.jsp" default-target-url="/messageList.htm" authentication-failure-url="/login.jsp?error=true" /> <logout/> </http> Sample URL link to logout in a JSP page: <a href="<c:url value="/j_spring_security_logout" />">Logout</a> When I try to use a custom JSP page instead, i.e. I use the login form for this purpose, I get a better result - at least it reaches the login page - but another problem is that you don't actually get logged off: you can directly type a URL that should be guarded, but you get past it anyway. Slightly modified from the previous listings: <http> <intercept-url pattern="/messageList.htm*" access="ROLE_USER,ROLE_GUEST" /> <intercept-url pattern="/messagePost.htm*" access="ROLE_USER" /> <intercept-url pattern="/messageDelete.htm*" access="ROLE_ADMIN" /> <form-login login-page="/login.jsp" default-target-url="/messageList.htm" authentication-failure-url="/login.jsp?error=true" /> <logout logout-success-url="/login.jsp" /> </http> <a href="<c:url value="/login.jsp" />">Logout</a> Thank you for help.
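
    For the 404 specifically, /j_spring_security_logout is only handled if the Spring Security filter chain actually sees the request; a sketch of the standard web.xml wiring it depends on (these are the conventional Spring Security names):

        <filter>
            <filter-name>springSecurityFilterChain</filter-name>
            <filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class>
        </filter>
        <filter-mapping>
            <filter-name>springSecurityFilterChain</filter-name>
            <url-pattern>/*</url-pattern>
        </filter-mapping>

    If that mapping is missing or narrower than /*, the container answers the logout URL itself with a plain 404. Note also that linking straight to /login.jsp never invalidates the session, so the second setup is not really a logout at all; only the j_spring_security_logout filter (or explicit session invalidation) clears the authentication.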

    Read the article

  • Can Core Data be used for objects with variable schemas?

    - by glenc
    I'm implementing a new iPhone app and am relatively new to Cocoa development overall. I am at the stage of choosing how the persistence layer of this app will work, and it looks like I'm basically choosing between Core Data and sqlite3. The persisted models in this app are intended to have a schema that is loaded at runtime (from some kind of defn file, probably XML). By which I mean, this app is intended to have objects that are user-definable to some extent, e.g. the Customer type (which has certain built-in fields like "name" and "email") can be modified to have extra fields based on the user's specific needs (e.g. a user might want to add a "favourite fruit" field to their Customer type). Having said that, will Core Data work for an app with a non-baked-in data model like this? I've just started playing around with the Core Data object designer thing in XCode and it seems like this thing wants to work with objects that have fixed fields that are compiled in. I'm definitely trying to take the path of least resistance here, and I can see the benefits of using an Apple-supplied data framework, but don't want to start down that path if it's going to lock me into a data model that's defined at compile time.
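
    Core Data does not require the model to come from a compiled .xcdatamodel; an NSManagedObjectModel can be assembled in code from whatever definition file is loaded at runtime. A minimal Objective-C sketch using the standard model-building classes (retain/release housekeeping omitted; the "favourite fruit" attribute stands in for a user-defined field):

        NSEntityDescription *customer = [[NSEntityDescription alloc] init];
        [customer setName:@"Customer"];
        [customer setManagedObjectClassName:@"NSManagedObject"];

        NSAttributeDescription *name = [[NSAttributeDescription alloc] init];
        [name setName:@"name"];
        [name setAttributeType:NSStringAttributeType];

        NSAttributeDescription *favouriteFruit = [[NSAttributeDescription alloc] init];
        [favouriteFruit setName:@"favouriteFruit"];
        [favouriteFruit setAttributeType:NSStringAttributeType];

        [customer setProperties:[NSArray arrayWithObjects:name, favouriteFruit, nil]];

        NSManagedObjectModel *model = [[NSManagedObjectModel alloc] init];
        [model setEntities:[NSArray arrayWithObject:customer]];

    The catch is migration: once a store has been created with one model, changing the entities later means versioning or migrating that store, which is where a hand-rolled sqlite3 EAV schema can end up simpler.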

    Read the article

  • Can any linux API or tool watch for any change in any folder below e.g. /SharedRoot or do I have to

    - by Simon B.
    I have a folder with ~10 000 subfolders. Can any linux API or tool watch for any change in any folder below e.g. /SharedRoot or do I have to setup inotify for each folder? (i.e. I loose if I want to do this for 10k+ folders). I guess yes, since I've already seen examples of this inefficient method, for instance http://twistedmatrix.com/trac/browser/trunk/twisted/internet/inotify.py?rev=28866#L345 My problem: I need to keep folders time-sorted with most recently active "project" up top. When a file changes, each folder above that file should update its last-modified timestamp to match the file. Delays are ok. Opening a file (typically MS Excel) and closing again, its file date can jump up and then down again. For this reason I need to wait until after a file is closed, then queue the folder of that file for checking, and only a while later do I go and look for the newest file in its folder, since the filedate of the triggering file could already be back-dated to its original timestamp by Excel or similar programs. Also in case several files from same folder are used/created, it makes sense to buffer timestamping of that folders' parents to at least get a bunch of updates collapsed into one delayed update. I'm looking for a linux solution. I have some code that can be run on a windows server, most of the queing functionality is here: http://github.com/sesam/FolderdateFollowsFiles/blob/master/FolderdateFollowsFiles/Follower.vb Available API:s The relative of inotify on windows, ReadDirectoryChangesW, can watch a folder and its whole subtree; see bWatchSubtree on http://msdn.microsoft.com/en-us/library/aa365465(VS.85).aspx Samba? Patching samba source is a possibility, but perhaps there are already hooks available? Other possibilities, like client side (various windows versions) and spying on file activities in order to update folders recursively?
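
    For the watch-the-whole-tree part, the inotify-tools front end already does the per-directory fan-out; a small shell sketch (the delayed, de-duplicated queueing is only hinted at, and /proc/sys/fs/inotify/max_user_watches may need raising for 10k+ directories):

        #!/bin/sh
        # -r registers a watch on every directory under /SharedRoot (one kernel watch
        # per directory, but inotifywait manages them); close_write fires only after
        # the writer has closed the file, which matches the "wait until closed" rule.
        inotifywait -m -r -e close_write --format '%w%f' /SharedRoot |
        while read changed_file; do
            dir=$(dirname "$changed_file")
            echo "queue for delayed timestamp update: $dir"
        done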

    Read the article

  • Missing files on branch after cvs2svn import

    - by cafebabe
    A colleague has imported a CVS repository into a pre-existing SVN repository using a cvs2svn dumpfile (like "svnadmin load --parent-dir /path < dumpfile") , which I originally created from the CVS repo. Now that I'm trying to checkout and build from SVN, I've noticed that some files seem to be missing in the SVN checkout that were present when I checked out the same branch from CVS, although the majority are present. They are mostly but not exclusively binary files (jars and gifs etc.) and I think (though I haven't checked exhaustively) that they are also files that have not been modified on the branch that I'm trying to check out. I should also point out that they don't show up using cvsweb (I would provide a link to the cvsweb documentation but I have no way of knowing its version etc), although they do appear doing a standard checkout of the branch. If anyone has any idea what's wrong here, or where to start looking to address this, I'd be very grateful! New to SVN so not sure if this is normal! Also, I know I could fairly easily "fix" it by copying over the files but I'd ideally like to keep their revision history so a more complete solution would be preferable. Thanks!

    Read the article
