Search Results

Search found 23265 results on 931 pages for 'justin case'.


  • Linq To Sql Concat() dropping fields in created TSQL

    - by user191468
    This is strange. I am moving a stored proc to a service. The TSQL unions multiple selects. To replicate this I created multiple queries resulting in a common new concrete type. Then I issue a return result.ToString(); and the resulting SQL selects have varying numbers of columns specified thus causing an MSSQL Msg 205... using (var db = GetDb()) { var fundInv = from f in db.funds select new Investments { Company = f.company, FullName = f.fullname, Admin = f.admin, Fund = f.fund1, FundCode = f.fundcode, Source = STR_FUNDS, IsPortfolio = false, IsActive = f.active, Strategy = f.strategy, SubStrategy = f.substrategy, AltStrategy = f.altstrategy, AltSubStrategy = f.altsubstrategy, Region = f.region, AltRegion = f.altregion, UseAlternate = f.usealt, ClassesAllowed = f.classallowed }; var stocksInv = from s in db.stocks where !fundInv.Select(f => f.Company).Contains(s.vehcode) select new Investments { Company = s.company, FullName = s.issuer, Admin = STR_PRS, Fund = s.shortname, FundCode = s.vehcode, Source = STR_STOCK, IsPortfolio = false, IsActive = (s.inactive == null), Strategy = s.style, SubStrategy = s.substyle, AltStrategy = s.altstyle, AltSubStrategy = s.altsubsty, Region = s.geography, AltRegion = s.altgeo, UseAlternate = s.usealt, ClassesAllowed = STR_GENERIC }; var bondsInv = from oi in db.bonds where !fundInv.Select(f => f.Company).Contains(oi.vehcode) select new Investments { Company = string.Empty, FullName = oi.issue, Admin = STR_PRS1, Fund = oi.issue, FundCode = oi.vehcode, Source = STR_BONDS, IsPortfolio = false, IsActive = oi.closed, Strategy = STR_OTH, SubStrategy = STR_OTH, AltStrategy = STR_OTH, AltSubStrategy = STR_OTH, Region = STR_OTH, AltRegion = STR_OTH, UseAlternate = false, ClassesAllowed = STR_GENERIC }; return (fundInv.Concat(stocksInv).Concat(bondsInv)).ToList(); } The code above results in a complex select statement where each "table" above has different column count. (see SQL below) I've been trying a few things but no change yet. Ideas are welcome. 
SELECT [t6].[company] AS [Company], [t6].[fullname] AS [FullName], [t6].[admin] AS [Admin], [t6].[fund] AS [Fund], [t6].[fundcode] AS [FundCode], [t6].[value] AS [Source], [t6].[value2] AS [IsPortfolio], [t6].[active] AS [IsActive], [t6].[strategy] AS [Strategy], [t6].[substrategy] AS [SubStrategy], [t6].[altstrategy] AS [AltStrategy], [t6].[altsubstrategy] AS [AltSubStrategy], [t6].[region] AS [Region], [t6].[altregion] AS [AltRegion], [t6].[usealt] AS [UseAlternate], [t6].[classallowed] AS [ClassesAllowed] FROM ( SELECT [t3].[company], [t3].[fullname], [t3].[admin], [t3].[fund], [t3].[fundcode], [t3].[value], [t3].[value2], [t3].[active], [t3].[strategy], [t3].[substrategy], [t3].[altstrategy], [t3].[altsubstrategy], [t3].[region], [t3].[altregion], [t3].[usealt], [t3].[classallowed] FROM ( SELECT [t0].[company], [t0].[fullname], [t0].[admin], [t0].[fund], [t0].[fundcode], @p0 AS [value], [t0].[active], [t0].[strategy], [t0].[substrategy], [t0].[altstrategy], [t0].[altsubstrategy], [t0].[region], [t0].[altregion], [t0].[usealt], [t0].[classallowed] FROM [zInvest].[funds] AS [t0] UNION ALL SELECT [t1].[company], [t1].[issuer], @p6 AS [value], [t1].[shortname], [t1].[vehcode], @p7 AS [value2], @p8 AS [value3], (CASE WHEN [t1].[inactive] IS NULL THEN 1 ELSE 0 END) AS [value5], [t1].[style], [t1].[substyle], [t1].[altstyle], [t1].[altsubsty], [t1].[geography], [t1].[altgeo], [t1].[usealt], @p10 AS [value6] FROM [zBank].[stocks] AS [t1] WHERE (NOT (EXISTS( SELECT NULL AS [EMPTY] FROM [zInvest].[funds] AS [t2] WHERE [t2].[company] = [t1].[vehcode] ))) AND ([t1].[vehcode] <> @p2) AND (SUBSTRING([t1].[vehcode], @p3 + 1, @p4) <> @p5) ) AS [t3] UNION ALL SELECT @p11 AS [value], [t4].[issue], @p12 AS [value2], [t4].[vehcode], @p13 AS [value3], @p14 AS [value4], [t4].[closed], @p16 AS [value6], @p17 AS [value7] FROM [zMut].[bonds] AS [t4] WHERE NOT (EXISTS( SELECT NULL AS [EMPTY] FROM [zInvest].[funds] AS [t5] WHERE [t5].[company] = [t4].[vehcode] )) ) AS [t6]
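
    A hedged workaround sketch (not from the original post): LINQ to SQL appears to build each branch of the UNION ALL with only the columns it thinks that branch needs, which is what produces the mismatched column counts. One way to sidestep the generated UNION entirely is to let each query run as its own well-formed SELECT and perform the Concat in memory. The sketch below reuses the question's own GetDb(), Investments type and query definitions; the trade-off is three round trips instead of one.

    using (var db = GetDb())
    {
        // Define fundInv, stocksInv and bondsInv exactly as in the question above...

        // AsEnumerable() switches the rest of the pipeline to LINQ to Objects,
        // so each query is translated and executed independently and the
        // Concat never reaches the SQL translator.
        return fundInv.AsEnumerable()
                      .Concat(stocksInv.AsEnumerable())
                      .Concat(bondsInv.AsEnumerable())
                      .ToList();
    }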


  • Synchronization requirements for FileStream.(Begin/End)(Read/Write)

    - by Doug McClean
    Is the following pattern of multi-threaded calls acceptable to a .Net FileStream? Several threads calling a method like this: ulong offset = whatever; // different for each thread byte[] buffer = new byte[8192]; object state = someState; // unique for each call, hence also for each thread lock(theFile) { theFile.Seek(whatever, SeekOrigin.Begin); IAsyncResult result = theFile.BeginRead(buffer, 0, 8192, AcceptResults, state); } if(result.CompletedSynchronously) { // is it required for us to call AcceptResults ourselves in this case? // or did BeginRead already call it for us, on this thread or another? } Where AcceptResults is: void AcceptResults(IAsyncResult result) { lock(theFile) { int bytesRead = theFile.EndRead(result); // if we guarantee that the offset of the original call was at least 8192 bytes from // the end of the file, and thus all 8192 bytes exist, can the FileStream read still // actually read fewer bytes than that? // either: if(bytesRead != 8192) { Panic("Page read borked"); } // or: // issue a new call to begin read, moving the offsets into the FileStream and // the buffer, and decreasing the requested size of the read to whatever remains of the buffer } } I'm confused because the documentation seems unclear to me. For example, the FileStream class says: Any public static members of this type are thread safe. Any instance members are not guaranteed to be thread safe. But the documentation for BeginRead seems to contemplate having multiple read requests in flight: Multiple simultaneous asynchronous requests render the request completion order uncertain. Are multiple reads permitted to be in flight or not? Writes? Is this the appropriate way to secure the location of the Position of the stream between the call to Seek and the call to BeginRead? Or does that lock need to be held all the way to EndRead, hence only one read or write in flight at a time? I understand that the callback will occur on a different thread, and my handling of state, buffer handle that in a way that would permit multiple in flight reads. Further, does anyone know where in the documentation to find the answers to these questions? Or an article written by someone in the know? I've been searching and can't find anything. Relevant documentation: FileStream class Seek method BeginRead method EndRead IAsyncResult interface
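
    A few hedged notes rather than an authoritative answer: as far as I know, the AsyncCallback passed to BeginRead is invoked whether or not the operation completes synchronously, so the CompletedSynchronously branch should not need to call AcceptResults itself; and a read may legitimately return fewer bytes than requested even in the middle of a file, so the callback should be prepared to loop. The sketch below shows one way to sidestep the shared-Position question entirely: give every outstanding read its own read-only FileStream over the same file, so Seek/BeginRead/EndRead never interleave on one instance. Names (PagedReader, ReadPageAsync, PageSize) are illustrative, not from the question.

    using System;
    using System.IO;

    static class PagedReader
    {
        const int PageSize = 8192;

        public static void ReadPageAsync(string path, long offset, Action<byte[], int> done)
        {
            // FileShare.Read lets many of these streams coexist on the same file;
            // the final 'true' asks for overlapped (asynchronous) I/O.
            var stream = new FileStream(path, FileMode.Open, FileAccess.Read,
                                        FileShare.Read, PageSize, true);
            var buffer = new byte[PageSize];
            stream.Seek(offset, SeekOrigin.Begin);

            stream.BeginRead(buffer, 0, PageSize, ar =>
            {
                int bytesRead = stream.EndRead(ar);   // always pair Begin/EndRead
                stream.Dispose();
                done(buffer, bytesRead);              // caller decides whether to re-issue
            }, null);
        }
    }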


  • Node.js + Express.js: How to render LESS CSS?

    - by Paden
    Hello all, I am unable to render less css in my express workspace. Here is my current configuration (my css/less files go in 'public/stylo/'): app.configure(function() { app.set('views' , __dirname + '/views' ); app.set('partials' , __dirname + '/views/partials'); app.set('view engine', 'jade' ); app.use(express.bodyDecoder() ); app.use(express.methodOverride()); app.use(express.compiler({ src: __dirname + '/public/stylo', enable: ['less']})); app.use(app.router); app.use(express.staticProvider(__dirname + '/public')); }); Here is my main.jade file: !!! html(lang="en") head title Yea a title link(rel="stylesheet", type="text/css", href="/stylo/main.less") link(rel="stylesheet", href="http://fonts.googleapis.com/cssfamily=Droid+Sans|Droid+Sans+Mono|Ubuntu|Droid+Serif") script(src="https://ajax.googleapis.com/ajax/libs/jquery/1.4.4/jquery.min.js") script(src="https://ajax.googleapis.com/ajax/libs/jqueryui/1.8.7/jquery-ui.min.js") body!= body here is my main.less css: @import "goodies.css"; body { .googleFont; background-color : #000000; padding : 20px; margin : 0px; > .header { border-bottom : 1px solid #BBB; background-color : #f0f0f0; margin : -25px -25px 30px -25px; /* important */ color : #333; padding : 15px; font-size : 18pt; } } AND here is my goodies.less css: .rounded_corners(@radius: 10px) { -moz-border-radius : @radius; -webkit-border-radius: @radius; border-radius : @radius; } .shadows(@rad1: 0px, @rad2: 1px, @rad3: 3px, @color: #999) { -webkit-box-shadow : @rad1 @rad2 @rad3 @color; -moz-box-shadow : @rad1 @rad2 @rad3 @color; box-shadow : @rad1 @rad2 @rad3 @color; } .gradient (@type: linear, @pos1: left top, @pos2: left bottom, @color1: #f5f5f5, @color2: #ececec) { background-image : -webkit-gradient(@type, @pos1, @pos2, from(@color1), to(@color2)); background-image : -moz-linear-gradient(@color1, @color2); } .googleFont { font-family : 'Droid Serif'; } Cool deal. Now: I have installed less via npm and I had heard from another post that @imports should reference the .css not the .less. In any case, I have tried the combinations of switching .less for .css in the jade and less files with no success. If you can help or have the solution I'd greatly appreciate it. Note: The jade portion works fine if I enter any ol' .css. Note2: The less compiles if I use lessc via command line.


  • Asynchronous callbacks for networking in Objective-C on the iPhone

    - by vodkhang
    I am working with network requests and responses in Objective-C. There is something about the asynchronous model that I don't understand. In summary, I have a view that shows my statuses from two social networks: Twitter and Facebook. When I click refresh, it calls a model manager. That model manager calls two service helpers to request the latest items. When the two service helpers receive data, they pass it back to the model manager, which adds all the data into a sorted array. What I don't understand is this: when the responses from the social networks come back, how many threads will handle them? From my understanding of multithreading and networking (in Java), there must be two threads handling the two responses, and those two threads will execute the code that adds the responses to the array. So there can be a race condition and the program can go wrong, right? Is that the correct working model of iPhone Objective-C? Or is it done in a different way, so that there can never be a race condition and we don't have to care about locking and synchronization? Here is my example code: ModelManager.m - (void)updateMyItems:(NSArray *)items { self.helpers = [self authenticatedHelpersForAction:NCHelperActionGetMyItems]; for (id<NCHelper> helper in self.helpers) { [helper updateMyItems:items]; // NETWORK request here } } - (void)helper:(id <NCHelper>)helper didReturnItems:(NSArray *)items { [self helperDidFinishGettingMyItems:items callback:@selector(model:didGetMyItems:)]; break; } } // some private attributes int *_currentSocialNetworkItemsCount = 0; // to count the number of items of a social network - (void)helperDidFinishGettingMyItems:(NSArray *)items { for (Item *item in items) { _currentSocialNetworkItemsCount ++; } NSLog(@"count: %d", _currentSocialNetworkItemsCount); _currentSocialNetworkItemsCount = 0; } I want to ask whether there is a case in which the method helperDidFinishGettingMyItems is called concurrently. That means, for example, if Facebook returns 10 items and Twitter returns 10 items, will the output of count ever be larger than 10? And if there is only one single thread, how can the thread finish parsing one response and jump to the other response, since, IMO, a thread only executes sequentially, block of code by block of code?


  • Impersonation and BackgroundWorker

    - by Lucian D
    Hello guys, I have a bit of a problem when trying to use the BackgroundWorker class with impersonation. Following the answers from Google, I got this code to impersonate: public class MyImpersonation{ WindowsImpersonationContext impersonationContext; [DllImport("advapi32.dll")] public static extern int LogonUserA(String lpszUserName, String lpszDomain, String lpszPassword, int dwLogonType, int dwLogonProvider, ref IntPtr phToken); [DllImport("advapi32.dll", CharSet = CharSet.Auto, SetLastError = true)] public static extern int DuplicateToken(IntPtr hToken, int impersonationLevel, ref IntPtr hNewToken); [DllImport("advapi32.dll", CharSet = CharSet.Auto, SetLastError = true)] public static extern bool RevertToSelf(); [DllImport("kernel32.dll", CharSet = CharSet.Auto)] public static extern bool CloseHandle(IntPtr handle); public bool impersonateValidUser(String userName, String domain, String password) { WindowsIdentity tempWindowsIdentity; IntPtr token = IntPtr.Zero; IntPtr tokenDuplicate = IntPtr.Zero; if (RevertToSelf()) { if (LogonUserA(userName, domain, password, LOGON32_LOGON_INTERACTIVE, LOGON32_PROVIDER_DEFAULT, ref token) != 0) { if (DuplicateToken(token, 2, ref tokenDuplicate) != 0) { tempWindowsIdentity = new WindowsIdentity(tokenDuplicate); impersonationContext = tempWindowsIdentity.Impersonate(); if (impersonationContext != null) { CloseHandle(token); CloseHandle(tokenDuplicate); return true; } } } } if (token != IntPtr.Zero) CloseHandle(token); if (tokenDuplicate != IntPtr.Zero) CloseHandle(tokenDuplicate); return false; } } It worked really well until I used it with the BackgroundWorker class. In this case, I've added impersonation to the code that runs asynchronously. I have no errors, but the issue I'm having is that the impersonation does not work when it is used in the async method. In code this looks something like this: instantiate a BGWorker and add an event handler to the DoWork event: _bgWorker = new BackgroundWorker(); _bgWorker.DoWork += new DoWorkEventHandler(_bgWorker_DoWork); In the above handler, impersonation is done before running some code: private void _bgWorker_DoWork(object sender, DoWorkEventArgs e) { MyImpersonation myImpersonation = new MyImpersonation(); myImpersonation.impersonateValidUser(user, domain, pass); //run some code... myImpersonation.undoImpersonation(); } The code is launched with _bgWorker.RunWorkerAsync(); As I said before, no error is thrown; the code simply acts as if I didn't run any impersonation, that is, with its default credentials. Moreover, the impersonation method returns true, so the impersonation took place at a certain level, but probably not on the current thread. This must happen because the async code runs on another thread, so there must be something that needs to be added to the MyImpersonation class. But what?? :) Thanks in advance, Lucian
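
    A small diagnostic sketch (an assumption-laden suggestion, not a known fix): impersonation set up with WindowsIdentity.Impersonate applies to the thread that calls it, so the first thing worth confirming is which identity the BackgroundWorker's thread-pool thread is actually running under before and after the call. The snippet below just adds logging around the question's own code; it needs System.Diagnostics and System.Security.Principal, and user/domain/pass are the same fields as above.

    private void _bgWorker_DoWork(object sender, DoWorkEventArgs e)
    {
        Debug.WriteLine("Before: " + WindowsIdentity.GetCurrent().Name);

        MyImpersonation myImpersonation = new MyImpersonation();
        bool ok = myImpersonation.impersonateValidUser(user, domain, pass);

        // If this still prints the original account, the token never reached
        // this thread; if it prints the target account, the problem is in the
        // work itself (for example, it hops to yet another thread).
        Debug.WriteLine("impersonateValidUser returned " + ok +
                        ", now running as: " + WindowsIdentity.GetCurrent().Name);

        // run some code...

        myImpersonation.undoImpersonation();
        Debug.WriteLine("After undo: " + WindowsIdentity.GetCurrent().Name);
    }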


  • Using Android SAXParser, one of my XML elements is mysteriously breaking in half.

    - by Drennen
    And it's not '&'. I'm using the SAXParser object to parse the actual XML. This is normally done by passing a URL to the XMLReader.Parse method. Because my XML is coming from a POST request to a web service, I am saving that result as a String and then employing StringReader / InputSource to feed this string back to the XMLReader.Parse method. However, something strange is happening at the 2001st character of the XML string. The 'characters' method of the document handler is being called TWICE between the startElement and endElement methods, effectively breaking my string (in this case a project title) into two pieces. Because I am instantiating objects in my characters method, I am getting two objects instead of one. This line, about 2000 chars into the string, fires 'characters' two times, breaking between "Lower" and "Level": <title>SUMC-BOOKSTORE, LOWER LEVEL RENOVATIONS</title> When I bypass the StringReader / InputSource workaround and feed a flat XML file to XMLReader.Parse, it works absolutely fine. Something about StringReader and/or InputSource is somehow screwing this up. Here is my method that takes an XML string and parses it through the SAXParser: public void parseXML(String XMLstring) { try { SAXParserFactory spf = SAXParserFactory.newInstance(); SAXParser sp = spf.newSAXParser(); XMLReader xr = sp.getXMLReader(); xr.setContentHandler(this); // Something is happening in the StringReader or InputSource // That cuts the XML element in half at the 2001 character mark. StringReader sr = new StringReader(XMLstring); InputSource is = new InputSource(sr); xr.parse(is); } catch (IOException e) { Log.e("CMS1", e.toString()); } catch (SAXException e) { Log.e("CMS2", e.toString()); } catch (ParserConfigurationException e) { Log.e("CMS3", e.toString()); } } I would greatly appreciate any ideas on how to keep 'characters' from firing twice when I get to this point in the XML string. Or, show me how to use a POST request and still pass the URL off to the Parse function. THANK YOU.


  • DynamicMethod for ConstructorInfo.Invoke, what do I need to consider?

    - by Lasse V. Karlsen
    My question is this: If I'm going to build a DynamicMethod object, corresponding to a ConstructorInfo.Invoke call, what types of IL do I need to implement in order to cope with all (or most) types of arguments, when I can guarantee that the right type and number of arguments is going to be passed in before I make the call? Background I am on my 3rd iteration of my IoC container, and currently doing some profiling to figure out if there are any areas where I can easily shave off large amounts of time being used. One thing I noticed is that when resolving to a concrete type, ultimately I end up with a constructor being called, using ConstructorInfo.Invoke, passing in an array of arguments that I've worked out. What I noticed is that the invoke method has quite a bit of overhead, and I'm wondering if most of this is just different implementations of the same checks I do. For instance, due to the constructor matching code I have, to find a matching constructor for the predefined parameter names, types, and values that I have passed in, there's no way this particular invoke call will not end up with something it should be able to cope with, like the correct number of arguments, in the right order, of the right type, and with appropriate values. When doing a profiling session containing a million calls to my resolve method, and then replacing it with a DynamicMethod implementation that mimics the Invoke call, the profiling timings was like this: ConstructorInfo.Invoke: 1973ms DynamicMethod: 93ms This accounts for around 20% of the total runtime of this profiling application. In other words, by replacing the ConstructorInfo.Invoke call with a DynamicMethod that does the same, I am able to shave off 20% runtime when dealing with basic factory-scoped services (ie. all resolution calls end up with a constructor call). I think this is fairly substantial, and warrants a closer look at how much work it would be to build a stable DynamicMethod generator for constructors in this context. So, the dynamic method would take in an object array, and return the constructed object, and I already know the ConstructorInfo object in question. Therefore, it looks like the dynamic method would be made up of the following IL: l001: ldarg.0 ; the object array containing the arguments l002: ldc.i4.0 ; the index of the first argument l003: ldelem.ref ; get the value of the first argument l004: castclass T ; cast to the right type of argument (only if not "Object") (repeat l001-l004 for all parameters, l004 only for non-Object types, varying l002 constant from 0 and up for each index) l005: newobj ci ; call the constructor l006: ret Is there anything else I need to consider? Note that I'm aware that creating dynamic methods will probably not be available when running the application in "reduced access mode" (sometimes the brain just won't give up those terms), but in that case I can easily detect that and just calling the original constructor as before, with the overhead and all.
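
    For what it's worth, here is a sketch of what that generator might look like, with one addition the IL listing above leaves out: value-type parameters need unbox.any rather than castclass, and a constructor on a value type needs a box before ret. Ref/out parameters and open generic types are ignored here, and the class and method names are mine, not from the question.

    using System;
    using System.Reflection;
    using System.Reflection.Emit;

    static class ConstructorCompiler
    {
        public static Func<object[], object> Compile(ConstructorInfo ctor)
        {
            var dm = new DynamicMethod("ctor_" + ctor.DeclaringType.Name,
                                       typeof(object),
                                       new[] { typeof(object[]) },
                                       ctor.DeclaringType.Module,
                                       true);                    // skipVisibility
            ILGenerator il = dm.GetILGenerator();
            ParameterInfo[] ps = ctor.GetParameters();

            for (int i = 0; i < ps.Length; i++)
            {
                il.Emit(OpCodes.Ldarg_0);            // the object[] of arguments
                il.Emit(OpCodes.Ldc_I4, i);          // index of this parameter
                il.Emit(OpCodes.Ldelem_Ref);         // load args[i] as object

                Type pt = ps[i].ParameterType;
                if (pt.IsValueType)
                    il.Emit(OpCodes.Unbox_Any, pt);  // unbox value types
                else if (pt != typeof(object))
                    il.Emit(OpCodes.Castclass, pt);  // downcast reference types
            }

            il.Emit(OpCodes.Newobj, ctor);           // call the constructor
            if (ctor.DeclaringType.IsValueType)
                il.Emit(OpCodes.Box, ctor.DeclaringType);
            il.Emit(OpCodes.Ret);

            return (Func<object[], object>)dm.CreateDelegate(typeof(Func<object[], object>));
        }
    }

    Usage would be along the lines of var create = ConstructorCompiler.Compile(ci); object instance = create(argumentArray); with the compiled delegate cached per ConstructorInfo so the emit cost is paid only once.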


  • Using Objective-C Blocks

    - by Sean
    Today I was experimenting with Objective-C's blocks so I thought I'd be clever and add to NSArray a few functional-style collection methods that I've seen in other languages: @interface NSArray (FunWithBlocks) - (NSArray *)collect:(id (^)(id obj))block; - (NSArray *)select:(BOOL (^)(id obj))block; - (NSArray *)flattenedArray; @end The collect: method takes a block which is called for each item in the array and expected to return the results of some operation using that item. The result is the collection of all of those results. (If the block returns nil, nothing is added to the result set.) The select: method will return a new array with only the items from the original that, when passed as an argument to the block, the block returned YES. And finally, the flattenedArray method iterates over the array's items. If an item is an array, it recursively calls flattenedArray on it and adds the results to the result set. If the item isn't an array, it adds the item to the result set. The result set is returned when everything is finished. So now that I had some infrastructure, I needed a test case. I decided to find all package files in the system's application directories. This is what I came up with: NSArray *packagePaths = [[[NSSearchPathForDirectoriesInDomains(NSAllApplicationsDirectory, NSAllDomainsMask, YES) collect:^(id path) { return (id)[[[NSFileManager defaultManager] contentsOfDirectoryAtPath:path error:nil] collect:^(id file) { return (id)[path stringByAppendingPathComponent:file]; }]; }] flattenedArray] select:^(id fullPath) { return [[NSWorkspace sharedWorkspace] isFilePackageAtPath:fullPath]; }]; Yep - that's all one line and it's horrid. I tried a few approaches at adding newlines and indentation to try to clean it up, but it still feels like the actual algorithm is lost in all the noise. I don't know if it's just a syntax thing or my relative in-experience with using a functional style that's the problem, though. For comparison, I decided to do it "the old fashioned way" and just use loops: NSMutableArray *packagePaths = [NSMutableArray new]; for (NSString *searchPath in NSSearchPathForDirectoriesInDomains(NSAllApplicationsDirectory, NSAllDomainsMask, YES)) { for (NSString *file in [[NSFileManager defaultManager] contentsOfDirectoryAtPath:searchPath error:nil]) { NSString *packagePath = [searchPath stringByAppendingPathComponent:file]; if ([[NSWorkspace sharedWorkspace] isFilePackageAtPath:packagePath]) { [packagePaths addObject:packagePath]; } } } IMO this version was easier to write and is more readable to boot. I suppose it's possible this was somehow a bad example, but it seems like a legitimate way to use blocks to me. (Am I wrong?) Am I missing something about how to write or structure Objective-C code with blocks that would clean this up and make it clearer than (or even just as clear as) the looped version?


  • Crawling engine architecture - Java/Perl integration

    - by Bigtwinz
    Hi all, I am looking to develop a management and administration solution around our webcrawling perl scripts. Basically, right now our scripts are saved in SVN and are manually kicked off by SysAdmin/devs etc. Everytime we need to retrieve data from new sources we have to create a ticket with business instructions and goals. As you can imagine, not an optimal solution. There are 3 consistent themes with this system: the retrieval of data has a "conceptual structure" for lack of a better phrase i.e. the retrieval of information follows a particular path we are only looking for very specific information so we dont have to really worry about extensive crawling for awhile (think thousands-tens of thousands of pages vs millions) crawls are url-based instead of site-based. As I enhance this alpha version to a more production-level beta I am looking to add automation and management of the retrieval of data. Additionally our other systems are Java (which I'm more proficient in) and I'd like to compartmentalize the perl aspects so we dont have to lean heavily on outside help. I've evaluated the usual suspects Nutch, Droid etc but the time spent on modifying those frameworks to suit our specific information retrieval cant be justified. So I'd like your thoughts regarding the following architecture. I want to create a solution which use Java as the interface for managing and execution of the perl scripts use Java for configuration and data access stick with perl for retrieval An example use case would be a data analyst delivers us a requirement for crawling perl developer creates the required script and uses this webapp to submit the script (which gets saved to the filesystem) the script gets kicked off from the webapp with specific parameters .... Webapp should be able to create multiple threads of the perl script to initiate multiple crawlers. So questions are what do you think how solid is integration between Java and Perl specifically from calling perl from java has someone used such a system which actually is part perl repository The goal really is to not have a whole bunch of unorganized perl scripts and put some management and organization on our information retrieval. Also, I know I can use perl do do the web part of what we want - but as I mentioned before - trying to keep perl focused. But it seems assbackwards I'm not adverse to making it an all perl solution. Open to any all suggestions and opinions. Thanks


  • Firefox: jQuery ajax calls firing twice and never triggering success or error functions

    - by Adrian Adkison
    Hi, I am developing with the .NET framework, using jQuery 1.4.2 client side. When developing in Firefox version 3.6, every so often an one of the many ajax calls I make on the page will fire twice, the second will return successfully but will not trigger the success handler of the ajax call and the first never returns anything. So basically the data is all sent to the server and response is sent down but nothing happens with the response. Here is an example of the call I am making. It happens to any of the ajax calls, so there is not one particular that is causing the problem: $.ajax({ type:"POST", contentType : "application/json; charset=utf-8", data:"{}", dataType:"json", success:function(){ alert('success'); }, error:function(){ alert('error'); }, url:'/services.aspx/somemethod' }); }) From firebug, here are the headers of the first call which in firebug shows as never completely responding, meaning i see no response code and the loader gif in the firebug never goes away. Note:In firebug it usually says Response Header but for the first call this space is blank Server ASP.NET Development Server/9.0.0.0 X-AspNet-Version 2.0.50727 Content-Type application/json; charset=utf-8 Connection Close Request Headers Host mydomain.com User-Agent Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.3) Gecko/20100401Firefox/3.6.3 ( .NET CLR 3.5.30729) Accept application/json, text/javascript, */* Accept-Language en-us,en;q=0.5 Accept-Encoding gzip,deflate Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive 115 Connection keep-alive Content-Type application/json; charset=utf-8 X-Requested-With XMLHttpRequest Referer http://mydomain.com/mypage.aspx Here is the header from the second request which just appear to complete in firebug (i.e response is 200): Response Header Server ASP.NET Development Server/9.0.0.0 X-AspNet-Version 2.0.50727 Content-Type application/json; charset=utf-8 Connection Close Request Headers Host mydomain.com User-Agent Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 ( .NET CLR 3.5.30729) Accept text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language en-us,en;q=0.5 Accept-Encoding gzip,deflate Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive 115 Connection keep-alive Content-Type application/json; charset=utf-8 Referer http://mydomain.com/mypage.aspx To summarize my question, why are two requests being made and why are neither of them triggering a success or error handler in the ajax call. I have seen this article about firefox 3.5+ and preflighted requests https://developer.mozilla.org/En/HTTP_access_control#Preflighted_requests In the article is says if a "POST" is made with any other content type than "application/x-www-form-urlencoded, multipart/form-data, or text/plain" than the request is pre-flighted. If this is the case, this should happen to all of my calls. Thanks


  • How to run a PowerShell script within a DOS batch file

    - by Don Vince
    How do I have a powershell script embedded within the same file as a DOS batch script? I know this kind of thing is possible in other scenarios: Embedding SQL in a DOS batch script using sqlcmd and a clever arrangements of goto's and comments at the beginning of the file In a *nix environment having a the name of the program you wish to run the script with on the first line of the script commented out e.g. #!/usr/local/bin/python There may not be a way to do this - in which case I will have to call the separate powershell script from the launching DOS script. One possible solution I've considered is to echo out the powershell script, and then run it. A good reason to not do this is that part of the reason to attempt this is to be using the advantages of the powershell environment without the pain of, for example, DOS escape characters I have some unusual constraints and would like to find an elegant solution. I suspect this question may be baiting responses of the variety: "why don't you try and solve this different problem instead." Suffice to say these are my constraints, sorry about that. Any ideas? Is there a suitable combination of clever comments and escape characters that will enable me to achieve this? Some thoughts on how to achieve this: A carat ^ at the end of a line in DOS is a continuation - like an underscore in VB An ampersand & in DOS typically is used to separate commands echo Hello & echo World results in 2 echos on separate lines %0 will give you the script that's currently running So something like this (if I could make it work) would be good: # & call powershell -psconsolefile %0 # & goto :EOF /* From here on in we're running nice juicy powershell code */ Write-Output "Hello World" Except... It doesn't work... because the extension of the file isn't as per powershell's liking: Windows PowerShell console file "insideout.bat" extension is not psc1. Windows PowerShell console file extension must be psc1. DOS isn't really altogether happy with the situation either - although it does stumble on '#' is not recognized as an internal or external command, operable program or batch file.


  • Featureful commercial text editors?

    - by wrp
    I'm willing to buy tools if they add genuine value over a FOSS equivalent. One thing I wouldn't mind having is an editor with the power of Emacs, but made more user-friendly. There seem to be several commercial editors out there, but I can't find much discussion of them online. Maybe it's because the kind of people who use commercial software don't have time to do much blogging. ;-) If you have used any, what was your evaluation? I'd especially like to hear how you would compare them to Emacs. I'm thinking of editors like VEDIT, Boxer, Crisp, UltraEdit, SlickEdit, etc. To get things started, I tried EditPad Pro because I needed something on a Win98SE box. I was attracted by its powerful support for regexps, but I didn't use it for long. One annoyance was that find-in-files was only available in a separate product you had to buy. The main problem, though, was stability. It sometimes hung and I lost a few files because it corrupted them while editing. After a couple weeks, I found that I was avoiding using it, so I just uninstalled. Edit: Ah...I need to remove some ambiguity. With reference to Emacs, "power" often means its potential for customization. This malleability comes from having an architecture in which most of the functionality is written in a scripting language that runs on a compiled core. Emacs (with elisp) is by far the most widely known such system among home users, but there have been other heavily used editors such as Freemacs (MINT), JED (S-Lang), XEDIT (Rexx), ADAM (TPU), and SlickEdit (Slick-C). In this case, by "power" I'm not referring to extensibility but to realized features. There are three main areas which I think a commercial text editor might be an improvement over Emacs: Stability The only apps I regularly use on Linux that give me flaky behavior are Emacs, Gedit, and Geany. On Windows, I like the look and features of Notepad++, but I find it extremely unstable, especially if I try to use the plugins. Whatever I happen to be doing, I'm using some text editor practically all day long. If I could switch to an editor that never gave me problems, it would definitely lower my stress level. Tools When I started using Emacs, I searched the manual cover to cover to gleam ideas for clever, useful things I could do with it. I'd like to see lots of useful features for editing code, based on detailed knowledge of what the system can do and the accumulated feedback of users. Polish The rule of threes goes that if you develop something for yourself, it's three times harder to make it usable in-house, and three times harder again to make it a viable product for sale. It's understandable, but free software development doesn't seem to benefit from much usability testing. BTW, texteditors.org is a fantastic resource for researching text editors.


  • Post parameters are becoming null (at random)

    - by Raghuram Duraisamy
    Hi, My web application is developed with Struts2 and it was working fine till recently. All of a sudden one of the modules has started malfunctioning. The malfunctioning module is 'Update Student details' page. This page has a lot of fields like 'schoolName', 'degreeName', etc . School 1: <input name="schoolName"> School 2: <input name="schoolName"> ..... School n: <input name="schoolName"> As mentioned earlier, the page was working perfectly fine till recently. Now, one/many of the values of 'schoolName', 'degreeName', etc are being received as "" (EMPTY STRING) on the server-side. For debugging, I used firebug and remote-debugging in eclipse. I find that the post-parameters are correct on the client-side. For instance, during one of the submissions the post-parameters were as below (i noted them from firebug). Content-Type: multipart/form-data; boundary=---------------------------2921238217421 Content-Length: 48893 <OTHER_PARAMETERS> <!--Truncated for clarity --> -----------------------------2921238217421 Content-Disposition: form-data; name="schoolName" ABC Institute -----------------------------2921238217421 Content-Disposition: form-data; name="schoolName" Test School -----------------------------2921238217421 Content-Disposition: form-data; name="schoolName" XYZ -----------------------------2921238217421 Content-Disposition: form-data; name="schoolName" Texas Institute -----------------------------2921238217421 Content-Disposition: form-data; name="schoolName" XXXX School -----------------------------2921238217421-- But on the server-side, the request params were as below: schoolName=[ABC Institute, Test School, XYZ, , XXXX School], "Texas Institute" was received as "" (EMPTY STRING) in this particular case. This is not happening consistently. The parameters that become NULL (or EMPTY STRING) seem random to me - during one instance, parameter schoolName[3] became null as illustrated above, parameter schoolName[2] became null during yet another submission, etc. At times, none of the parameters are nullified. The following is the list of the interceptors in the action definition. List of interceptors: ---------------------- FileUploadInterceptor org.apache.struts2.interceptor.FileUploadInterceptor ServletConfigInterceptor org.apache.struts2.interceptor.ServletConfigInterceptor StaticParametersInterceptor com.opensymphony.xwork2.interceptor.StaticParametersInterceptor ParametersInterceptor com.opensymphony.xwork2.interceptor.ParametersInterceptor MyCustomInterceptor com.xxxx.yyyy.interceptors.GetLoggedOnUserInterceptor This issue appears rather weird to me and I have not been able to zero-in on the exact cause of the issue. Any help in this regard would be highly appreciated. Thanks in advance. Thanks, Raghuram


  • Error using httplib's HTTPSConnection with a PKCS#12 certificate

    - by Remi Despres-Smyth
    Hello. I'm trying to use httplib's HTTPSConnection for client validation, using a PKCS #12 certificate. I know the certificate is good, as I can connect to the server using it in MSIE and Firefox. Here's my connect function (the certificate includes the private key). I've pared it down to just the basics: def connect(self, cert_file, host, usrname, passwd): self.cert_file = cert_file self.host = host self.conn = httplib.HTTPSConnection(host=self.host, port=self.port, key_file=cert_file, cert_file=cert_file) self.conn.putrequest('GET', 'pathnet/,DanaInfo=200.222.1.1+') self.conn.endheaders() retCreateCon = self.conn.getresponse() if is_verbose: print "Create HTTPS connection, " + retCreateCon.read() (Note: No comments on the hard-coded path, please - I'm trying to get this to work first; I'll make it pretty afterwards. The hard-coded path is correct, as I connect to it in MSIE and Firefox. I changed the IP address for the post.) When I try to run this using a PKCS#12 certificate (a .pfx file), I get back what appears to be an openSSL error. Here is the entire error traceback: File "Usinghttplib_Test.py", line 175, in t.connect(cert_file=opts["-keys"], host=host_name, usrname=opts["-username"], passwd=opts["-password"]) File "Usinghttplib_Test.py", line 40, in connect self.conn.endheaders() File "c:\python26\lib\httplib.py", line 904, in endheaders self._send_output() File "c:\python26\lib\httplib.py", line 776, in _send_output self.send(msg) File "c:\python26\lib\httplib.py", line 735, in send self.connect() File "c:\python26\lib\httplib.py", line 1112, in connect self.sock = ssl.wrap_socket(sock, self.key_file, self.cert_file) File "c:\python26\lib\ssl.py", line 350, in wrap_socket suppress_ragged_eofs=suppress_ragged_eofs) File "c:\python26\lib\ssl.py", line 113, in __init__ cert_reqs, ssl_version, ca_certs) ssl.SSLError: [Errno 336265225] _ssl.c:337: error:140B0009:SSL routines:SSL_CTX_use_PrivateKey_file:PEM lib Notice, the openSSL error (the last entry in the list) notes "PEM lib", which I found odd, since I'm not trying to use a PEM certificate. For kicks, I converted the PKCS#12 cert to a PEM cert, and ran the same code using that. In that case, I received no error, I was prompted to enter the PEM pass phrase, and the code did attempt to reach the server. (I received the response "The service is not available. Please try again later.", but I believe that would be because the server does not accept the PEM cert. I can't connect in Firefox to the server using the PEM cert either.) Is httplib's HTTPSConnection supposed to support PCKS#12 certificates? (That is, pfx files.) If so, why does it look like openSSL is trying to load it inside the PEM lib? Am I doing this all wrong? Any advice is welcome. EDIT: The certificate file contains both the certificate and the private key, which is why I'm providing the same file name for both the HTTPSConnection's key_file and cert_file parameters.


  • How to write a buffer-overflow exploit in GCC, Windows XP, x86?

    - by Mask
    void function(int a, int b, int c) { char buffer1[5]; char buffer2[10]; int *ret; ret = buffer1 + 12; (*ret) += 8;//why is it 8?? } void main() { int x; x = 0; function(1,2,3); x = 1; printf("%d\n",x); } The above demo is from here: http://insecure.org/stf/smashstack.html But it's not working here: D:\test>gcc -Wall -Wextra hw.cpp && a.exe hw.cpp: In function `void function(int, int, int)': hw.cpp:6: warning: unused variable 'buffer2' hw.cpp: At global scope: hw.cpp:4: warning: unused parameter 'a' hw.cpp:4: warning: unused parameter 'b' hw.cpp:4: warning: unused parameter 'c' 1 And I don't understand why it's 8 though the author thinks: A little math tells us the distance is 8 bytes. My gdb dump as called: Dump of assembler code for function main: 0x004012ee <main+0>: push %ebp 0x004012ef <main+1>: mov %esp,%ebp 0x004012f1 <main+3>: sub $0x18,%esp 0x004012f4 <main+6>: and $0xfffffff0,%esp 0x004012f7 <main+9>: mov $0x0,%eax 0x004012fc <main+14>: add $0xf,%eax 0x004012ff <main+17>: add $0xf,%eax 0x00401302 <main+20>: shr $0x4,%eax 0x00401305 <main+23>: shl $0x4,%eax 0x00401308 <main+26>: mov %eax,0xfffffff8(%ebp) 0x0040130b <main+29>: mov 0xfffffff8(%ebp),%eax 0x0040130e <main+32>: call 0x401b00 <_alloca> 0x00401313 <main+37>: call 0x4017b0 <__main> 0x00401318 <main+42>: movl $0x0,0xfffffffc(%ebp) 0x0040131f <main+49>: movl $0x3,0x8(%esp) 0x00401327 <main+57>: movl $0x2,0x4(%esp) 0x0040132f <main+65>: movl $0x1,(%esp) 0x00401336 <main+72>: call 0x4012d0 <function> 0x0040133b <main+77>: movl $0x1,0xfffffffc(%ebp) 0x00401342 <main+84>: mov 0xfffffffc(%ebp),%eax 0x00401345 <main+87>: mov %eax,0x4(%esp) 0x00401349 <main+91>: movl $0x403000,(%esp) 0x00401350 <main+98>: call 0x401b60 <printf> 0x00401355 <main+103>: leave 0x00401356 <main+104>: ret 0x00401357 <main+105>: nop 0x00401358 <main+106>: add %al,(%eax) 0x0040135a <main+108>: add %al,(%eax) 0x0040135c <main+110>: add %al,(%eax) 0x0040135e <main+112>: add %al,(%eax) End of assembler dump. Dump of assembler code for function function: 0x004012d0 <function+0>: push %ebp 0x004012d1 <function+1>: mov %esp,%ebp 0x004012d3 <function+3>: sub $0x38,%esp 0x004012d6 <function+6>: lea 0xffffffe8(%ebp),%eax 0x004012d9 <function+9>: add $0xc,%eax 0x004012dc <function+12>: mov %eax,0xffffffd4(%ebp) 0x004012df <function+15>: mov 0xffffffd4(%ebp),%edx 0x004012e2 <function+18>: mov 0xffffffd4(%ebp),%eax 0x004012e5 <function+21>: movzbl (%eax),%eax 0x004012e8 <function+24>: add $0x5,%al 0x004012ea <function+26>: mov %al,(%edx) 0x004012ec <function+28>: leave 0x004012ed <function+29>: ret In my case the distance should be - = 5,right?But it seems not working.. Why function needs 56 bytes for local variables?( sub $0x38,%esp )


  • Deletes not cascading for self-referencing entities

    - by jwaddell
    I have the following (simplified) Hibernate entities: @Entity @Table(name = "package") public class Package { protected Content content; @OneToOne(cascade = {javax.persistence.CascadeType.ALL}) @JoinColumn(name = "content_id") @Fetch(value = FetchMode.JOIN) public Content getContent() { return content; } public void setContent(Content content) { this.content = content; } } @Entity @Table(name = "content") public class Content { private Set<Content> subContents = new HashSet<Content>(); private ArchivalInformationPackage parentPackage; @OneToMany(fetch = FetchType.EAGER) @JoinTable(name = "subcontents", joinColumns = {@JoinColumn(name = "content_id")}, inverseJoinColumns = {@JoinColumn(name = "elt")}) @Cascade(value = {org.hibernate.annotations.CascadeType.DELETE, org.hibernate.annotations.CascadeType.REPLICATE}) @Fetch(value = FetchMode.SUBSELECT) public Set<Content> getSubContents() { return subContents; } public void setSubContents(Set<Content> subContents) { this.subContents = subContents; } @ManyToOne(cascade = {CascadeType.ALL}) @JoinColumn(name = "parent_package_id") public Package getParentPackage() { return parentPackage; } public void setParentPackage(Package parentPackage) { this.parentPackage = parentPackage; } } So there is one Package, which has one "top" Content. The top Content links back to the Package, with cascade set to ALL. The top Content may have many "sub" Contents, and each sub-Content may have many sub-Contents of its own. Each sub-Content has a parent Package, which may or may not be the same Package as the top Content (ie a many-to-one relationship for Content to Package). The relationships are required to be ManyToOne (Package to Content) and ManyToMany (Content to sub-Contents) but for the case I am currently testing each sub-Content only relates to one Package or Content. The problem is that when I delete a Package and flush the session, I get a Hibernate error stating that I'm violating a foreign key constraint on table subcontents, with a particular content_id still referenced from table subcontents. I've tried specifically (recursively) deleting the Contents before deleting the Package but I get the same error. Is there a reason why this entity tree is not being deleted properly? EDIT: After reading answers/comments I realised that a Content cannot have multiple Packages, and a sub-Content cannot have multiple parent-Contents, so I have modified the annotations from ManyToOne and ManyToMany to OneToOne and OneToMany. Unfortunately that did not fix the problem. I have also added the bi-directional link from Content back to the parent Package which I left out of the simplified code.


  • NSObjectController confusion binding to a class property. Help!

    - by scottw
    Hi, I'm teaching myself cocoa and enjoying the experience most of the time. I have been struggling all day with a simple problem that google has let me down on. I have read the Cocoa Bindings Program Topics and think I grok it but still can't solve my issue. I have a very simple class called MTSong that has various properties. I have used @synthesize to create getter/setters and can use KVC to change properties. i.e in my app controller the following works: mySong = [[MTSong alloc]init]; [mySong setValue:@"2" forKey:@"version"]; In case I am doing something noddy in my class code MTSong.h is: #import <Foundation/Foundation.h> @interface MTSong : NSObject { NSNumber *version; NSString *name; } @property(readwrite, assign) NSNumber *version; @property(readwrite, assign) NSString *name; @end and MTSong.m is: #import "MTSong.h" @implementation MTSong - (id)init { [super init]; return self; } - (void)dealloc { [super dealloc]; } @synthesize version; @synthesize name; @end In Interface Builder I have a label (NSTextField) that I want to update whenever I use KVC to change the version of the song. I do the following: Drag NSObjectController object into the doc window and in the Inspector-Attributes I set: Mode: Class Class Name: MTSong Add a key called version and another called name Go to Inspector-Bindings-Controller Content Bind To: File's Owner (Not sure this is right...) Model Key Path: version Select the cell of the label and go to Inspector Bind to: Object Controller Controller Key: mySong Model Key Path: version I have attempted changing the Model Key Path in step 2 to "mySong" which makes more sense but the compiler complains. Any suggestions would be greatly appreciated. Scott Update Post Comments I wasn't exposing mySong property so have changed my AppController.h to be: #import <Cocoa/Cocoa.h> @class MTSong; @interface AppController : NSObject { IBOutlet NSButton *start; IBOutlet NSTextField *tf; MTSong *mySong; } -(IBAction)convertFile:(id)sender; @end I suspect File's owner was wrong as I am not using a document based application and I need to bind to the AppController, so step 2 is now: Go to Inspector-Bindings-Controller Content Bind To: App Controller Model Key Path: mySong I needed to change 3. to Select the cell of the label and go to Inspector Bind to: Object Controller Controller Key: selection Model Key Path: version All compiles and is playing nice!


  • A non-recursive approach to the problem of generating combinations is at fault

    - by mark
    Hi, I wanted a non-recursive approach to the problem of generating combinations of a certain set of characters or numbers. So, given a subset size k of n numbers, generate all the possible combinations, n!/(k!(n-k)!) of them. The recursive method would give a combination, given the previous combination. A non-recursive method would generate the combination for a given value of the loop index i. I approached the problem with this code: Tested with n = 4 and k = 3, and it works, but if I change k to a number < 3 it does not work. Is it due to the fact that (n-k)! in the case of n = 4 and k = 3 is 1, and if k < 3 it will be more than 1? Thanks. int facto(int x); int len,fact,rem=0,pos=0; int str[7]; int avail[7]; str[0] = 1; str[1] = 2; str[2] = 3; str[3] = 4; str[4] = 5; str[5] = 6; str[6] = 7; int tot=facto(n) / facto(n-k) / facto(k); for (int i=0;i<tot;i++) { avail[0]=1; avail[1]=2; avail[2]=3; avail[3]=4; avail[4]=5; avail[5]=6; avail[6]=7; rem = facto(i+1)-1; cout<<rem+1<<". "; for(int j=len;j>0;j--) { int div = facto(j); pos = rem / div; rem = rem % div; cout<<avail[pos]<<" "; avail[pos]=avail[j]; } cout<<endl; } int facto(int x) { int fact=1; while(x>0) fact*=x--; return fact; }
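
    In case it helps, here is a hedged sketch (in C#, though the idea is language-neutral) of the standard non-recursive index-to-combination mapping known as the combinatorial number system; it works for any k <= n. The names and the 0-based element convention are mine.

    using System;

    static class Combinadic
    {
        // n choose k, computed with an exact running product.
        static long Choose(int n, int k)
        {
            if (k < 0 || k > n) return 0;
            long r = 1;
            for (int i = 1; i <= k; i++) r = r * (n - k + i) / i;
            return r;
        }

        // Returns the index-th k-combination of {0, 1, ..., n-1} in
        // lexicographic order, for 0 <= index < Choose(n, k).
        public static int[] Unrank(int n, int k, long index)
        {
            var result = new int[k];
            long remaining = index;
            int next = 0;                                // smallest candidate element
            for (int pos = 0; pos < k; pos++)
            {
                for (int e = next; ; e++)
                {
                    // number of combinations that place e at this position
                    long block = Choose(n - e - 1, k - pos - 1);
                    if (remaining < block) { result[pos] = e; next = e + 1; break; }
                    remaining -= block;
                }
            }
            return result;
        }
    }

    For n = 4, k = 3 the four indices 0..3 map to {0,1,2}, {0,1,3}, {0,2,3}, {1,2,3}, and the same call works unchanged for k = 2 or k = 1.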


  • Xcode linking error when targeting armv7.

    - by Tom
    I've already spent countless hours puzzling over this, utilizing Google searches and other Stack Overflow questions to no avail. I have an iPhone/iPad universal application, which seems to compile fine when the target is armv6. However, when the device is iPad, I get this warning: warning: building for SDK 'Device - iPhone OS 3.2' requires an armv7 architecture. Oddly enough, the app still runs great on iPad in spite of this warning. However, I do want to do things the "right way" what ever that means in this case. When I switch the target architecture to armv7, I get linking errors: "___restore_vfp_d8_d15_regs", referenced from: *redacted* "___save_vfp_d8_d15_regs", referenced from: *redacted* ld: symbol(s) not found collect2: ld returned 1 exit status The "redacted" portions of the errors are references to the static library to which I'm trying to link. Here's what I've tried from the many suggestions online. Each of these were suggested more than once without any explanation, which leads me to believe nobody quite understands this problem: "Never use the drop down menu in the upper left of the XCode window to choose the target. Instead, set this to Base SDK and then the Base SDK to iPhone OS 3.0 in the target configuration. Set the target device to your preferred target (iPad, iPhone OS 3.2 in my situation.)" This yields the error "Library not found for -lcrt1.3.1.o" "Make sure that GCC isn't linking against the wrong version of the standard library. (You'll have to make sure the LIBRARY_SEARCH_PATH doesn't have the wrong path in it.)" My LIBRARY_SEARCH_PATH is already empty, so this doesn't seem relevant. "Try compiling with GCC 4.0 rather than GCC 4.2." I get a syntax error inside a UIKit header file. The error is "Syntax error before 'AT_NAME' token." The line is "UIKIT_EXTERN @interface UILocalizedIndexedCollation : NSObject." Another project compiles just fine with the same target settings, which is really making me question my sanity. Could I be dealing with a corrupt XCode project? If anyone knows what's actually happening and has a reference or doesn't mind explaining it, I would be so very grateful. Cheers!


  • Is MVVM pointless?

    - by joebeazelman
    Is orthodox MVVM implementation pointless? I am creating a new application and I considered Windows Forms and WPF. I chose WPF because it's future-proof and offers lots of flexibility. There is less code, and it's easier to make significant changes to your UI using XAML. Since the choice of WPF is obvious, I figured that I may as well go all the way by using MVVM as my application architecture, since it offers blendability, separation of concerns and unit testability. Theoretically, it seems beautiful, like the holy grail of UI programming. This brief adventure, however, has turned into a real headache. As expected in practice, I'm finding that I've traded one problem for another. I tend to be an obsessive programmer in that I want to do things the right way so that I can get the right results and possibly become a better programmer. The MVVM pattern just flunked my test on productivity and has turned into a big yucky hack! The clear case in point is adding support for a modal dialog box. The correct way is to put up a dialog box and tie it to a view model. Getting this to work is difficult. In order to benefit from the MVVM pattern, you have to distribute code in several places throughout the layers of your application. You also have to use esoteric programming constructs like templates and lambda expressions - stuff that makes you stare at the screen scratching your head. This makes maintenance and debugging a nightmare waiting to happen, as I recently discovered. I had an about box working fine until I got an exception the second time I invoked it, saying that it couldn't show the dialog box again once it had been closed. I had to add an event handler for the close functionality to the dialog window, another one in the IDialogView implementation of it, and finally another in the IDialogViewModel. I thought MVVM would save us from such extravagant hackery! There are several folks out there with competing solutions to this problem, and they are all hacks that don't provide a clean, easily reusable, elegant solution. Most of the MVVM toolkits gloss over dialogs, and when they do address them, they are just alert boxes that don't require custom interfaces or view models. I'm planning on giving up on the MVVM pattern, at least the orthodox implementation of it. What do you think? Has it been worth the trouble for you, if you had any? Am I just an incompetent programmer, or is MVVM just not what it's hyped up to be?
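
    For the dialog pain specifically, one commonly used arrangement (offered here as an illustrative sketch, not the one true MVVM answer) is to hide ShowDialog behind a small service interface: the view model depends only on the interface, the WPF implementation lives in the view layer, and tests substitute a stub. Creating a fresh Window per call also avoids the "cannot reuse a closed window" exception described above. All names below are made up for the example.

    public interface IDialogService
    {
        bool? ShowDialog(object dialogViewModel);   // mirrors Window.ShowDialog's bool?
    }

    public class AboutBoxViewModel { /* bindable properties for the about box */ }

    public class ShellViewModel
    {
        private readonly IDialogService _dialogs;
        public ShellViewModel(IDialogService dialogs) { _dialogs = dialogs; }

        public void ShowAbout()
        {
            // New view model, new window, every time it is invoked.
            _dialogs.ShowDialog(new AboutBoxViewModel());
        }
    }

    // View-layer implementation; a DataTemplate maps the view model to its view.
    public class WpfDialogService : IDialogService
    {
        public bool? ShowDialog(object dialogViewModel)
        {
            var window = new System.Windows.Window
            {
                Content = dialogViewModel,
                Owner = System.Windows.Application.Current.MainWindow,
                SizeToContent = System.Windows.SizeToContent.WidthAndHeight
            };
            return window.ShowDialog();
        }
    }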


  • Extract information from conditional formula

    - by Ken Williams
    I'd like to write an R function that accepts a formula as its first argument, similar to lm() or glm() and friends. In this case, it's a function that takes a data frame and writes out a file in SVMLight format, which has this general form: <line> .=. <target> <feature>:<value> <feature>:<value> ... <feature>:<value> # <info> <target> .=. +1 | -1 | 0 | <float> <feature> .=. <integer> | "qid" <value> .=. <float> <info> .=. <string> for example, the following data frame: result qid f1 f2 f3 f4 f5 f6 f7 f8 1 -1 1 0.0000 0.1253 0.0000 0.1017 0.00 0.0000 0.0000 0.9999 2 -1 1 0.0098 0.0000 0.0000 0.0000 0.00 0.0316 0.0000 0.3661 3 1 1 0.0000 0.0000 0.1941 0.0000 0.00 0.0000 0.0509 0.0000 4 -1 2 0.0000 0.2863 0.0948 0.0000 0.34 0.0000 0.7428 0.0608 5 1 2 0.0000 0.0000 0.0000 0.4347 0.00 0.0000 0.9539 0.0000 6 1 2 0.0000 0.7282 0.9087 0.0000 0.00 0.0000 0.0000 0.0355 would be represented as follows: -1 qid:1 2:0.1253 4:0.1017 8:0.9999 -1 qid:1 1:0.0098 6:0.0316 8:0.3661 1 qid:1 3:0.1941 7:0.0509 -1 qid:2 2:0.2863 3:0.0948 5:0.3400 7:0.7428 8:0.0608 1 qid:2 4:0.4347 7:0.9539 1 qid:2 2:0.7282 3:0.9087 8:0.0355 The function I'd like to write would be called something like this: write.svmlight(result ~ f1+f2+f3+f4+f5+f6+f7+f8 | qid, data=mydata, file="out.txt") Or even write.svmlight(result ~ . | qid, data=mydata, file="out.txt") But I can't figure out how to use model.matrix() and/or model.frame() to know what columns it's supposed to write. Are these the right things to be looking at? Any help much appreciated!


  • iPhone UIWebView: loadData does not work with certain types (Excel, MSWord, PPT, RTF)

    - by Thomas Tempelmann
    My task is to display the supported document types on an iPhone with OS 3.x, such as .pdf, .rtf, .doc, .ppt, .png, .tiff etc. Now, I have stored these files only encrypted on disk. For security reasons, I want to avoid storing them unencrypted on disk. Hence, I prefer to use loadData:MIMEType:textEncodingName:baseURL: instead of loadRequest: to display the document because loadData allows me to pass the content in a NSData object, i.e. I can decrypt the file in memory and have no need to store it on disk, as it would be required when using loadRequest. The problem is that loadData does not appear to work with all file types: Testing shows that all picture types seem to work fine, as well as PDFs, while the more complex types don't. I get a errors such as: NSURLErrorDomain Code=100 NSURLErrorDomain Code=102 WebView appears to need a truly working URL for accessing the documents as a file, despite me offering all content via the NSData object already. Here's the code I use to display the content: [webView loadData:data MIMEType:type textEncodingName:@"utf-8" baseURL:nil]; The mime-type is properly set, e.g. to "application/msword" for .doc files. Does anyone know how I could get loadData to work with all types that loadRequest supports? Or, alternatively, is there some way I can tell which types do work for sure (i.e. officially sanctioned by Apple) with loadData? Then I can work twofold, creating a temp unencrypted file only for those cases that loadData won't like. Update Looks like I'm not the first one running into this. See here: http://osdir.com/ml/iPhoneSDKDevelopment/2010-03/msg00216.html So, I guess, that's the status quo, and nothing I can do about it. Someone suggested a work-around which might work, though: http://osdir.com/ml/iPhoneSDKDevelopment/2010-03/msg00219.html Basically, the idea is to provide a tiny http server that serves the file (from memory in my case), and then use loadRequest. This is probably a bit more memory-intensive, though, as both the server and the webview will probably both hold the entire contents in memory as two copies then, as opposed to using loadData, where both would rather share the same data object. (Mind you, I'll have to hold the decrypted data in memory, that's the whole point here).


  • Why does the default constructor not appear for value types?

    - by Arun
    The snippet below gives me a list of the constructors and methods of a type:

    static void ReflectOnType(Type type)
    {
        Console.WriteLine(type.FullName);
        Console.WriteLine("------------");
        List<ConstructorInfo> constructors = type.GetConstructors(BindingFlags.Public | BindingFlags.Static | BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.Default).ToList();
        List<MethodInfo> methods = type.GetMethods().ToList();
        Type baseType = type.BaseType;
        while (baseType != null)
        {
            constructors.AddRange(baseType.GetConstructors(BindingFlags.Public | BindingFlags.Static | BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.Default));
            methods.AddRange(baseType.GetMethods());
            baseType = baseType.BaseType;
        }
        Console.WriteLine("Reflection on {0} type", type.Name);
        for (int i = 0; i < constructors.Count; i++)
        {
            Console.Write("Constructor: {0}.{1}", constructors[i].DeclaringType.Name, constructors[i].Name);
            Console.Write("(");
            ParameterInfo[] parameterInfos = constructors[i].GetParameters();
            if (parameterInfos.Length > 0)
            {
                for (int j = 0; j < parameterInfos.Length; j++)
                {
                    if (j > 0)
                    {
                        Console.Write(", ");
                    }
                    Console.Write("{0} {1}", parameterInfos[j].ParameterType, parameterInfos[j].Name);
                }
            }
            Console.Write(")");
            if (constructors[i].IsSpecialName)
            {
                Console.Write(" has 'SpecialName' attribute");
            }
            Console.WriteLine();
        }
        Console.WriteLine();
        for (int i = 0; i < methods.Count; i++)
        {
            Console.Write("Method: {0}.{1}", methods[i].DeclaringType.Name, methods[i].Name);
            // Determine whether or not each method is a special name.
            if (methods[i].IsSpecialName)
            {
                Console.Write(" has 'SpecialName' attribute");
            }
            Console.WriteLine();
        }
    }

    But when I pass an 'int' type to this method, why don't I see the implicit constructor in the output? Or, how do I modify the above code to list the default constructor as well (in case I'm missing something in my code)?


  • jqModal in IE (7 or 8) flashes black before the modal is loaded

    - by brad
    This is killing me. In both IE7 and IE8, using jqModal, the screen flashes black before the modal content is loaded. I've set up a test app to show you what's happening. I've taken jqModal EXACTLY from the site, no changes whatsoever, and there's no external CSS that could be affecting my app. It works perfectly in every other browser (including IE6): http://jqmtest.heroku.com/

    The first two links are ajax calls; the third is straight-up inline HTML. (I originally thought it was the ajax that was affecting it, but that doesn't seem to be the case; I then thought it was slow-loading ajax, hence the two different ajax links.) What's crazy is that the jqModal site itself works perfectly in IE, with no flashing of black, but I can't see what I'm doing wrong.

    The code is straightforward. HTML:

    <body>
      <div id="ajaxModal" class="jqmWindow"></div>
      <div id="inlineModal" class="jqmWindow">
        <div style="height:300px;position:relative;">
          <p>Here's some inline content</p>
          <a href="#" onclick='$("#inlineModal").jqmHide();return false;' style="position:absolute;bottom:10px;right:10px">Close</a>
        </div>
      </div>
      <div style="width:600px;height:400px;margin:auto;background:#eee;">
        <p><a href="/ajax/short" class="jqModal">Short loading modal</a></p>
        <br />
        <p><a href="/ajax/long" class="jqModal">Longer loading modal</a></p>
        <br />
        <p><a href="#" class="jqInline">inline modal</a></p>
      </div>
    </body>

    Javascript:

    <script type="text/javascript">
      $(function(){
        $("#ajaxModal").jqm({ajax:'@href', modal:true});
        $("#inlineModal").jqm({modal:true, trigger:'.jqInline'});
      });
    </script>

    The CSS is exactly the same as the one downloaded from jqModal's site, so I'll omit it, but you can see it on my app. Has anyone experienced this? I don't get how theirs works and mine doesn't.


  • custom listview adapter getView method being called multiple times, and in no coherent order

    - by edzillion
    I have a custom list adapter:

    class ResultsListAdapter extends ArrayAdapter<RecordItem> {

    In the overridden getView method I do a print to check what the position is and whether it is a convertView or not:

    @Override
    public View getView(int position, View convertView, ViewGroup parent) {
        System.out.println("getView " + position + " " + convertView);

    The output of this (when the list is first displayed, no user input as yet) is:

    04-11 16:24:05.860: INFO/System.out(681): getView 0 null
    04-11 16:24:29.020: INFO/System.out(681): getView 1 android.widget.RelativeLayout@43d415d8
    04-11 16:25:48.070: INFO/System.out(681): getView 2 android.widget.RelativeLayout@43d415d8
    04-11 16:25:49.110: INFO/System.out(681): getView 3 android.widget.RelativeLayout@43d415d8
    04-11 16:25:49.710: INFO/System.out(681): getView 0 android.widget.RelativeLayout@43d415d8
    04-11 16:25:50.251: INFO/System.out(681): getView 1 null
    04-11 16:26:01.300: INFO/System.out(681): getView 2 null
    04-11 16:26:02.020: INFO/System.out(681): getView 3 null
    04-11 16:28:28.091: INFO/System.out(681): getView 0 null
    04-11 16:37:46.180: INFO/System.out(681): getView 1 android.widget.RelativeLayout@43cff8f0
    04-11 16:37:47.091: INFO/System.out(681): getView 2 android.widget.RelativeLayout@43cff8f0
    04-11 16:37:47.730: INFO/System.out(681): getView 3 android.widget.RelativeLayout@43cff8f0

    AFAIK, though I couldn't find it stated explicitly, getView() is only called for visible rows. Since my app starts with four visible rows, at least the position numbers cycling from 0-3 make sense. But the rest is a mess: why is getView called for each row four times? Where are these convertViews coming from when I haven't scrolled yet? I did a bit of research, and without getting a good answer, I did notice that people were associating this issue with layout issues.

    So just in case, here's the layout that contains the list:

    <?xml version="1.0" encoding="utf-8"?>
    <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
        android:layout_height="fill_parent"
        android:layout_width="fill_parent"
        android:orientation="vertical" >
        <TextView android:id="@+id/pageDetails"
            android:layout_width="fill_parent"
            android:layout_height="wrap_content" />
        <ListView android:id="@+id/list"
            android:layout_width="fill_parent"
            android:layout_height="wrap_content"
            android:drawSelectorOnTop="false" />
    </LinearLayout>

    and the layout of each individual row:

    <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
        android:layout_width="fill_parent"
        android:layout_height="108dp"
        android:padding="4dp">
        <ImageView android:id="@+id/thumb"
            android:layout_width="120dp"
            android:layout_height="fill_parent"
            android:layout_alignParentTop="true"
            android:layout_alignParentBottom="true"
            android:layout_alignParentLeft="true"
            android:layout_marginRight="8dp"
            android:src="@drawable/loading" />
        <TextView android:id="@+id/price"
            android:layout_width="wrap_content"
            android:layout_height="18dp"
            android:layout_toRightOf="@id/thumb"
            android:layout_alignParentBottom="true"
            android:singleLine="true" />
        <TextView android:id="@+id/date"
            android:layout_width="wrap_content"
            android:layout_height="18dp"
            android:layout_alignParentBottom="true"
            android:layout_alignParentRight="true"
            android:paddingRight="4dp"
            android:singleLine="true" />
        <TextView android:id="@+id/title"
            android:layout_width="fill_parent"
            android:layout_height="wrap_content"
            android:textSize="17dp"
            android:layout_toRightOf="@id/thumb"
            android:layout_alignParentRight="true"
            android:layout_alignParentTop="true"
            android:paddingRight="4dp"
            android:layout_alignWithParentIfMissing="true"
            android:gravity="center" />
    </RelativeLayout>

    Thank you for your time.

