Search Results

Search found 5998 results on 240 pages for 'rise against'.


  • Mysqld not starting due to apparent db corruption

    - by pitosalas
    I am very new at admining mysql, and bad for me, something caused the db to get clobbered. There are many error messages in the log that I am not sure how to safely proceed. Can you give some tips? Here's the log: 110107 15:07:15 mysqld started 110107 15:07:15 InnoDB: Database was not shut down normally! InnoDB: Starting crash recovery. InnoDB: Reading tablespace information from the .ibd files... InnoDB: Restoring possible half-written data pages from the doublewrite InnoDB: buffer... 110107 15:07:15 InnoDB: Starting log scan based on checkpoint at InnoDB: log sequence number 35 515914826. InnoDB: Doing recovery: scanned up to log sequence number 35 515915839 InnoDB: 1 transaction(s) which must be rolled back or cleaned up InnoDB: in total 1 row operations to undo InnoDB: Trx id counter is 0 1697553664 110107 15:07:15 InnoDB: Starting an apply batch of log records to the database... InnoDB: Progress in percents: 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 InnoDB: Apply batch completed InnoDB: Starting rollback of uncommitted transactions InnoDB: Rolling back trx with id 0 1697553198, 1 rows to undoInnoDB: Error: trying to access page number 3522914176 in space 0, InnoDB: space name ./ibdata1, InnoDB: which is outside the tablespace bounds. InnoDB: Byte offset 0, len 16384, i/o type 10 110107 15:07:15InnoDB: Assertion failure in thread 3086403264 in file fil0fil.c line 3922 InnoDB: We intentionally generate a memory trap. InnoDB: Submit a detailed bug report to http://bugs.mysql.com. InnoDB: If you get repeated assertion failures or crashes, even InnoDB: immediately after the mysqld startup, there may be InnoDB: corruption in the InnoDB tablespace. Please refer to InnoDB: http://dev.mysql.com/doc/mysql/en/Forcing_recovery.html InnoDB: about forcing recovery. mysqld got signal 11; This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail. key_buffer_size=0 read_buffer_size=131072 max_used_connections=0 max_connections=100 threads_connected=0 It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_connections = 217599 K bytes of memory Hope that's ok; if not, decrease some variables in the equation. thd=(nil) Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong... Cannot determine thread, fp=0xbffc55ac, backtrace may not be correct. Stack range sanity check OK, backtrace follows: 0x8139eec 0x83721d5 0x833d897 0x833db71 0x832aa38 0x835f025 0x835f7a3 0x830a77e 0x8326b57 0x831c825 0x8317b8d 0x82a9e66 0x8315732 0x834fc9a 0x828d7c3 0x81c29dd 0x81b5620 0x813d9fe 0x40fdf3 0x80d5ff1 New value of fp=(nil) failed sanity check, terminating stack trace! Please read http://dev.mysql.com/doc/mysql/en/Using_stack_trace.html and follow instructions on how to resolve the stack trace. 
Resolved stack trace is much more helpful in diagnosing the problem, so please do resolve it The manual page at http://www.mysql.com/doc/en/Crashing.html contains information that should help you find out what is causing the crash. 110107 15:07:15 mysqld ended

    Read the article

  • Distributed and/or Parallel SSIS processing

    - by Jeff
    Background: Our company hosts SaaS DSS applications, where clients provide us data Daily and/or Weekly, which we process & merge into their existing database. During business hours, load in the servers are pretty minimal as it's mostly users running simple pre-defined queries via the website, or running drill-through reports that mostly hit the SSAS OLAP cube. I manage the IT Operations Team, and so far this has presented an interesting "scaling" issue for us. For our daily-refreshed clients, the server is only "busy" for about 4-6 hrs at night. For our weekly-refresh clients, the server is only "busy" for maybe 8-10 hrs per week! We've done our best to use some simple methods of distributing the load by spreading the daily clients evenly among the servers such that we're not trying to process daily clients back-to-back over night. But long-term this scaling strategy creates two notable issues. First, it's going to consume a pretty immense amount of hardware that sits idle for large periods of time. Second, it takes significant Production Support over-head to basically "schedule" the ETL such that they don't over-lap, and move clients/schedules around if they out-grow the resources on a particular server or allocated time-slot. As the title would imply, one option we've tried is running multiple SSIS packages in parallel, but in most cases this has yielded VERY inconsistent results. The most common failures are DTExec, SQL, and SSAS fighting for physical memory and throwing out-of-memory errors, and ETLs running 3,4,5x longer than expected. So from my practical experience thus far, it seems like running multiple ETL packages on the same hardware isn't a good idea, but I can't be the first person that doesn't want to scale multiple ETLs around manual scheduling, and sequential processing. One option we've considered is virtualizing the servers, which obviously doesn't give you any additional resources, but moves the resource contention onto the hypervisor, which (from my experience) seems to manage simultaneous CPU/RAM/Disk I/O a little more gracefully than letting DTExec, SQL, and SSAS battle it out within Windows. Question to the forum: So my question to the forum is, are we missing something obvious here? Are there tools out there that can help manage running multiple SSIS packages on the same hardware? Would it be more "efficient" in terms of parallel execution if instead of running DTExec, SQL, and SSAS same machine (with every machine running that configuration), we run in pairs of three machines with SSIS running on one machine, SQL on another, and SSAS on a third? Obviously that would only make sense if we could process more than the three ETL we were able to process on the machine independently. Another option we've considered is completely re-architecting our SSIS package to have one "master" package for all clients that attempts to intelligently chose a server based off how "busy" it already is in terms of CPU/Memory/Disk utilization, but that would be a herculean effort, and seems like we're trying to reinvent something that you would think someone would sell (although I haven't had any luck finding it). So in summary, are we missing an obvious solution for this, and does anyone know if any tools (for free or for purchase, doesn't matter) that facilitate running multiple SSIS ETL packages in parallel and on multiple servers? (What I would call a "queue & node based" system, but that's not an official term). 
Ultimately VMware's Distributed Resource Scheduler addresses this as you simply run a consistent number of clients per VM that you know will never conflict scheduling-wise, then leave it up to VMware to move the VMs around to balance out hardware usage. I'm definitely not against using VMware to do this, but since we're a 100% Microsoft app stack, it seems like -someone- out there would have solved this problem at the application layer instead of the hypervisor layer by checking on resource utilization at the OS, SQL, SSAS levels. I'm open to ANY discussion on this, and remember no suggestion is too crazy or radical! :-) Right now, VMware is the only option we've found to get away from "manually" balancing our resources, so any suggestions that leave us on a pure Microsoft stack would be great. Thanks guys, Jeff
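
    In the spirit of the "queue & node based" idea, below is a toy sketch of a per-node dispatcher that drains a queue of package paths through the SSIS object model while capping how many ETLs run at once; the paths and concurrency limit are illustrative values, and retries, logging and cross-node coordination are deliberately left out.

        using System;
        using System.Collections.Concurrent;
        using System.Collections.Generic;
        using System.Threading;
        using Microsoft.SqlServer.Dts.Runtime;

        // Toy "queue and node" dispatcher: each node pulls package paths off a queue
        // and never runs more than maxConcurrent ETLs at once, so DTExec/SQL/SSAS are
        // not all fighting for memory simultaneously.
        class EtlNode
        {
            static void Main()
            {
                var queue = new ConcurrentQueue<string>(new[]
                {
                    @"D:\ETL\ClientA_Daily.dtsx",
                    @"D:\ETL\ClientB_Daily.dtsx",
                    @"D:\ETL\ClientC_Weekly.dtsx"
                });

                const int maxConcurrent = 2;            // tune per node
                using (var slots = new SemaphoreSlim(maxConcurrent))
                {
                    var app = new Application();
                    var workers = new List<Thread>();

                    string path;
                    while (queue.TryDequeue(out path))
                    {
                        slots.Wait();                   // block until a slot frees up
                        string packagePath = path;
                        var worker = new Thread(() =>
                        {
                            try
                            {
                                Package package = app.LoadPackage(packagePath, null);
                                DTSExecResult result = package.Execute();
                                Console.WriteLine("{0}: {1}", packagePath, result);
                            }
                            finally
                            {
                                slots.Release();
                            }
                        });
                        workers.Add(worker);
                        worker.Start();
                    }

                    foreach (Thread worker in workers)
                        worker.Join();
                }
            }
        }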

    Read the article

  • MySQL crash. Unknown cause. Signal 11

    - by fortmac
    This is a database that I installed ~6 months ago and had been running fine. This is currently running in Ubuntu 12.04. Attempting to connect to MySQL causes this error: ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111) Then theres: $ sudo mysqld which returns: 130702 15:38:54 [Note] Plugin 'FEDERATED' is disabled. 130702 15:38:54 InnoDB: The InnoDB memory heap is disabled 130702 15:38:54 InnoDB: Mutexes and rw_locks use GCC atomic builtins 130702 15:38:54 InnoDB: Compressed tables use zlib 1.2.3.4 130702 15:38:54 InnoDB: Initializing buffer pool, size = 128.0M 130702 15:38:54 InnoDB: Completed initialization of buffer pool 130702 15:38:54 InnoDB: highest supported file format is Barracuda. InnoDB: The log sequence number in ibdata files does not match InnoDB: the log sequence number in the ib_logfiles! 130702 15:38:54 InnoDB: Database was not shut down normally! InnoDB: Starting crash recovery. InnoDB: Reading tablespace information from the .ibd files... InnoDB: Restoring possible half-written data pages from the doublewrite InnoDB: buffer... 130702 15:38:55 InnoDB: Waiting for the background threads to start 130702 15:38:56 InnoDB: 1.1.8 started; log sequence number 5201901917 130702 15:38:56 [Note] Server hostname (bind-address): '127.0.0.1'; port: 3306 130702 15:38:56 [Note] - '127.0.0.1' resolves to '127.0.0.1'; 130702 15:38:56 [Note] Server socket created on IP: '127.0.0.1'. 130702 15:38:56 [Note] Event Scheduler: Loaded 0 events 130702 15:38:56 [Note] mysqld: ready for connections. Version: '5.5.28-0ubuntu0.12.04.3' socket: '/var/run/mysqld/mysqld.sock' port: 3306 (Ubuntu) 19:39:02 UTC - mysqld got signal 11 ; This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail. key_buffer_size=16777216 read_buffer_size=131072 max_used_connections=1 max_threads=151 thread_count=1 connection_count=1 It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 346681 K bytes of memory Hope that's ok; if not, decrease some variables in the equation. Thread pointer: 0x7f9509e51530 Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong... 
stack_bottom = 7f94f1d3de60 thread_stack 0x30000 mysqld(my_print_stacktrace+0x29)[0x7f95083427b9] mysqld(handle_fatal_signal+0x483)[0x7f9508209b43] /lib/x86_64-linux-gnu/libpthread.so.0(+0xfcb0)[0x7f9506f5bcb0] mysqld(+0x320e1c)[0x7f9508113e1c] mysqld(_ZN4JOIN15alloc_func_listEv+0x9c)[0x7f950812391c] mysqld(_ZN4JOIN7prepareEPPP4ItemP10TABLE_LISTjS1_jP8st_orderS7_S1_S7_P13st_select_lexP18st_select_lex_unit+0x918)[0x7f9508124658] mysqld(_Z12mysql_selectP3THDPPP4ItemP10TABLE_LISTjR4ListIS1_ES2_jP8st_orderSB_S2_SB_yP13select_resultP18st_select_lex_unitP13st_select_lex+0x130)[0x7f950812d060] mysqld(_Z13handle_selectP3THDP3LEXP13select_resultm+0x17c)[0x7f9508132fbc] mysqld(+0x2f6714)[0x7f95080e9714] mysqld(_Z21mysql_execute_commandP3THD+0x16d8)[0x7f95080f1178] mysqld(_Z11mysql_parseP3THDPcjP12Parser_state+0x10f)[0x7f95080f5e0f] mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0x1380)[0x7f95080f7260] mysqld(_Z24do_handle_one_connectionP3THD+0x1bd)[0x7f950819b80d] mysqld(handle_one_connection+0x50)[0x7f950819b870] /lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a)[0x7f9506f53e9a] /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f9506684cbd] Trying to get some variables. Some pointers may be invalid and cause the dump to abort. Query (7f94e0004b80): is an invalid pointer Connection ID (thread ID): 1 Status: NOT_KILLED The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains information that should help you find out what is causing the crash. I'm at a loss. What other reports would be useful in diagnosing this? /var/log/mysql.err & /var/log/mysql.log are empty.

    Read the article

  • How does the Cloud compare to Colocation? And development too

    - by David
    Currently I/we run a SaaS web application where each subscriber has their own physical instance of the application in addition to their own database. The setup has each web application instance deployed on two different IIS boxes both for load-balancing and redundancy (the machines have their Windows Update install times 12 hours apart, for example). Databases are mirrored on two different SQL Server 2012 machines with AlwaysOn for uptime. I don't make use of SQL Server clustering (as it doesn't provide storage-level failover: we don't have a shared storage box). Because it's a Windows setup it means there are two Domain Controllers (we cheat: they're both Mac Minis, 17W each, which keeps our colo power costs low). Finally there's also an Exchange server (Mailbox, Hub Transport and Client Access). One of the SQL Servers also doubles-up as an Exchange Hub Transport. Running costs are about $700 a month for our quarter-rack colocation (which includes power and peering/transfer), then there's about $150 a month for SPLA licensing, so $850 a month in total. Then there's the hard-to-quantify cost of administration, but I reckon I spend a couple of hours a week checking-in on the servers: reviewing event logs, etc. I keep getting bombarded by ads and manufactured news stories about how great "the cloud" is. Back in 2008 when the cloud was taking off I was reading up about the proper "cloud" services like Google AppEngine, where you write in Python against Google's API and that's how they scale your application across servers and also use their database provider for scaling storage. Simple enough to understand. Then came along Amazon, and I understand how Amazon Storage works, but I'm not sure how Amazon Compute works: web application pages don't take much CPU time to compute, how do you even quantify usage anyway? Finally, RackSpace gets in the act and now I'm really confused. RackSpace advertise "Cloud" SQL Server 2012 available for about "$0.70 per hour", going by how they advertise it I thought the "hour" meant the sum of CPU time, IO blocking time, maybe time spent transferring data, so for a low-intensity application that works out pretty cheap then? Nope. I went on to a Sales Chat window and spoke to one of their advisors. They told me the $0.70/hour was actually for every hour the SQL Server is running... but who wants a SQL Server for only a few hours? You're going to need it available 24 hours a day for months on end. $0.70 * 24 * 31 works out at $520 a month, which is rediculously expensive for SQL Server. An SPLA license for SQL Server is only $50 a month or so. That $520 a month does not include "fanatical support", and you also need to stack on top the costs of the host Windows server instance too. From what I can tell, Rackspace's "Cloud" products seem like like an cynical rebranding of an overpriced VPS service, but priced by the hour. I have the same confusion about Windows Azure which uses similar terms to describe the products available, but I think that's because Azure offers both traditional shared webhosting in addition to their own APIs you can target for scalable applications.

    Read the article

  • Facebook graph api photo upload to a fan page album

    - by kielie
    Hi guys, I have gotten the photo upload function to work with this code, <?php include_once 'facebook-php-sdk/src/facebook.php'; include_once 'config.php';//this file contains the secret key and app id etc... $facebook = new Facebook(array( 'appId' => FACEBOOK_APP_ID, 'secret' => FACEBOOK_SECRET_KEY, 'cookie' => true, 'domain' => 'your callback url goes here' )); $session = $facebook->getSession(); if (!$session) { $url = $facebook->getLoginUrl(array( 'canvas' => 1, 'fbconnect' => 0, 'req_perms'=>'user_photos,publish_stream,offline_access'//here I am requesting the required permissions, it should work with publish_stream alone, but I added the others just to be safe )); echo 'You are not logged in, please <a href="' . $facebook->getLoginUrl() . '">Login</a> to access this application'; } else{ try { $uid = $facebook->getUser(); $me = $facebook->api('/me'); $token = $session['access_token'];//here I get the token from the $session array $album_id = 'the id of the album you wish to upload to eg: 1122'; //upload your photo $file= 'test.jpg'; $args = array( 'message' => 'Photo from application', ); $args[basename($file)] = '@' . realpath($file); $ch = curl_init(); $url = 'https://graph.facebook.com/'.$album_id.'/photos?access_token='.$token; curl_setopt($ch, CURLOPT_URL, $url); curl_setopt($ch, CURLOPT_HEADER, false); curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); curl_setopt($ch, CURLOPT_POST, true); curl_setopt($ch, CURLOPT_POSTFIELDS, $args); $data = curl_exec($ch); //returns the id of the photo you just uploaded print_r(json_decode($data,true)); } catch(FacebookApiException $e){ echo "Error:" . print_r($e, true); } } ?> I hope this helps, a friend and I smashed our heads against a wall for quite some time to get this working! Anyways, here is my question, how can I upload a image to a fan page? I am struggling to get this working, when I upload the image all I get is the photo id but no photo in the album. So basically, when the user clicks the upload button on our application, I need it to upload the image they created to our fan page's album with them tagged on it. Anyone know how I can accomplish this?

    Read the article

  • WPF CommandParameter is NULL first time CanExecute is called

    - by Jonas Follesø
    I have run into an issue with WPF and Commands that are bound to a Button inside the DataTemplate of an ItemsControl. The scenario is quite straight forward. The ItemsControl is bound to a list of objects, and I want to be able to remove each object in the list by clicking a Button. The Button executes a Command, and the Command takes care of the deletion. The CommandParameter is bound to the Object I want to delete. That way I know what the user clicked. A user should only be able to delete their "own" objects - so I need to do some checks in the "CanExecute" call of the Command to verify that the user has the right permissions. The problem is that the parameter passed to CanExecute is NULL the first time it's called - so I can't run the logic to enable/disable the command. However, if I make it allways enabled, and then click the button to execute the command, the CommandParameter is passed in correctly. So that means that the binding against the CommandParameter is working. The XAML for the ItemsControl and the DataTemplate looks like this: <ItemsControl x:Name="commentsList" ItemsSource="{Binding Path=SharedDataItemPM.Comments}" Width="Auto" Height="Auto"> <ItemsControl.ItemTemplate> <DataTemplate> <StackPanel Orientation="Horizontal"> <Button Content="Delete" FontSize="10" Command="{Binding Path=DataContext.DeleteCommentCommand, ElementName=commentsList}" CommandParameter="{Binding}" /> </StackPanel> </DataTemplate> </ItemsControl.ItemTemplate> </ItemsControl> So as you can see I have a list of Comments objects. I want the CommandParameter of the DeleteCommentCommand to be bound to the Command object. So I guess my question is: have anyone experienced this problem before? CanExecute gets called on my Command, but the parameter is always NULL the first time - why is that? Update: I was able to narrow the problem down a little. I added an empty Debug ValueConverter so that I could output a message when the CommandParameter is data bound. Turns out the problem is that the CanExecute method is executed before the CommandParameter is bound to the button. I have tried to set the CommandParameter before the Command (like suggested) - but it still doesn't work. Any tips on how to control it. Update2: Is there any way to detect when the binding is "done", so that I can force re-evaluation of the command? Also - is it a problem that I have multiple Buttons (one for each item in the ItemsControl) that bind to the same instance of a Command-object? Update3: I have uploaded a reproduction of the bug to my SkyDrive: http://cid-1a08c11c407c0d8e.skydrive.live.com/self.aspx/Code%20samples/CommandParameterBinding.zip
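
    One common workaround, sketched below, is to expose the command as a relay-style ICommand whose CanExecuteChanged is wired to CommandManager.RequerySuggested: WPF then re-queries CanExecute after the DataTemplate bindings (including CommandParameter) have resolved, so the initial null can safely be treated as "not bound yet" rather than "denied". CommandManager.InvalidateRequerySuggested() can also be called to force that re-evaluation by hand. The Comment item type in the usage comment is an assumption.

        using System;
        using System.Windows.Input;

        // Relay-style ICommand: WPF re-queries CanExecute via CommandManager after
        // the CommandParameter binding has resolved, so the initial null parameter
        // is treated as "not bound yet" rather than permanently disabling the button.
        public class RelayCommand : ICommand
        {
            private readonly Action<object> _execute;
            private readonly Predicate<object> _canExecute;

            public RelayCommand(Action<object> execute, Predicate<object> canExecute)
            {
                if (execute == null) throw new ArgumentNullException("execute");
                _execute = execute;
                _canExecute = canExecute;
            }

            public event EventHandler CanExecuteChanged
            {
                add { CommandManager.RequerySuggested += value; }
                remove { CommandManager.RequerySuggested -= value; }
            }

            public bool CanExecute(object parameter)
            {
                if (parameter == null)
                    return false;                        // CommandParameter not resolved yet
                return _canExecute == null || _canExecute(parameter);
            }

            public void Execute(object parameter)
            {
                _execute(parameter);
            }
        }

        // Hypothetical usage in the view model ("Comment" is assumed to be the item type):
        // DeleteCommentCommand = new RelayCommand(
        //     o => DeleteComment((Comment)o),
        //     o => CurrentUserOwns((Comment)o));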

    Read the article

  • Doing TDD Silverlight 4 RC using Visual Studio 2010 RC

    - by user133992
    First I am glad to see better TDD support in VS2010. Support for generating code stubs from my tests is ok - not as good as more mature TDD plug-ins but a good start. I am looking for some best Silverlight 4.0 TDD practices. First Question: Anyone have links, recommendations? I know the new Silverlight Unit Test capabilities are much better (Jeff Wilcox's Mix Presentation). What I am focusing on right now is using TDD to develop pure Silverlight 4.0 Class Library projects - projects without a Silverlight UI project. I've been able to get it to work but not as cleanly as it should be. I can create an Empty VS project. Add A Silverlight 4 Class Library Project. Add a TestProject (not a silverlight Unit Test Project but a plain Test Project). Add a simple test in the Test Project such as: namespace Calculator.Test { [TestClass] public class CalculatorTests { [TestMethod] public void CalulatorAddTest() { Calc c = new Calc(); int expected = 10; int actual = c.Add(6, 4); Assert.AreEqual<int>(expected, actual); } } } Using the new Generate Type and Method from Test feature it will generate the following code in the Silverlight Project: namespace Calculator { public class Calc { public int Add(int p, int p_2) { throw new NotImplementedException(); } } } When I run the tests the first time it says the target assembly is Silverlight and not able to run test - Not exact text but the same general idea. When I change the implementation to: namespace Calculator { public class Calc { public int Add(int p, int p_2) { return p + p_2; } } } and re-run the test, it works fine and the test goes green. It also works for all other TDD code I generate after. I also get a warning Mark in the Test Project's reference to the Calculator Silverlight Class Library Assembly. Second Question: Any comments ideas if this just a bug in VS2010 RC or is Silverlight Class Library TDD not really supported. I have not created a Silverlight UI project or changed and build or debug settings so I have no idea what is hosting the silverlight DLL. Finally, some of the Silverlight Class Libraries I need to write will provide functionality that requires elevated Out-Of-Browser rights. Based on the above, it looks like I can use TDD Test Projects against regular Silverlight 4.0 Class Libraries, but I have no idea how I can TDD the elevated OOB functionality without also creating the UI component that gets installed. The UI piece is not really needed for the Library development and gets in the way of what I actually want to TDD. I know I can (and will) mock some of that functionality but at some point I will also need the real thing in my tests. Third Question: Any ideas how to TDD Silverlight 4.0 Class Library project that requires OOB elevated rights? Thanks!

    Read the article

  • Very slow compile times on Visual Studio

    - by johnc
    We are getting very slow compile times, which can take upwards of 20+ minutes on dual core 2GHz, 2G Ram machines. A lot of this is due to the size of our solution which has grown to 70+ projects, as well as VSS which is a bottle neck in itself when you have a lot of files. (swapping out VSS is not an option unfortunately, so I don't want this to descend into a VSS bash) We are looking at combing projects (not nice, as we like the separation of concerns, but is a good opportunity to refactor away some dead wood). We are also looking at having multiple solutions to achieve greater separation of concerns and quicker compile times for each element of the application. This I can see will become a dll hell as we try to keep things in synch. I am interested to know how other teams have dealt with this scaling issue, what do you do when your code base reaches a critical mass that you are wasting half the day watching the status bar deliver compile messages UPDATE Apologies, I neglected to mention this is a C# solution. Thanks for all the cpp suggestions, but it's been a few years since I've had to worry about headers. At a distance I say I miss C++, but I'm not sure I want to go back EDIT: Nice suggestions that have helped so far (not saying there aren't other nice suggestions below, just what has helped) New 3GHz laptop - the power of lost utilization works wonders when whinging to management Disable Anti Virus during compile 'Disconnecting' from VSS (actually the network) during compile - I may get us to remove VS-VSS integration altogether and stick to using the VSS UI Still not rip-snorting through a compile, but every bit helps. Orion did mention in a comment that generics may have a play also. From my tests there does appear to be a minimal performance hit, but not high enough to sure - compile times can be inconsistent due to disc activity. Due to time limitations, my tests didn't include as many Generics, or as much code, as would appear in live system, so that may accumulate. I wouldn't avoid using generics where they are supposed to be used, just for compile time performance WORKAROUND We are testing the practice of building new areas of the application in new solutions, importing in the latest dlls as required, them integrating them into the larger solution when we are happy with them. We may also do them same to existing code by creating temporary solutions that just encapsulate the areas we need to work on, and throwing them away after reintegrating the code. We need to weigh up the time it will take to reintegrate this code against the time we gain by not having Rip Van Winkle like experiences with rapid recompiling during development.

    Read the article

  • iPhone 3DES encryption key length issue

    - by Russell Hill
    Hi, I have been banging my head on a wall with this one. I need to code my iPhone application to encrypt a 4 digit "pin" using 3DES in ECB mode for transmission to a webservice which I believe is written in .NET. + (NSData *)TripleDESEncryptWithKey:(NSString *)key dataToEncrypt:(NSData*)encryptData { NSLog(@"kCCKeySize3DES=%d", kCCKeySize3DES); char keyBuffer[kCCKeySize3DES+1]; // room for terminator (unused) bzero( keyBuffer, sizeof(keyBuffer) ); // fill with zeroes (for padding) [key getCString: keyBuffer maxLength: sizeof(keyBuffer) encoding: NSUTF8StringEncoding]; // encrypts in-place, since this is a mutable data object size_t numBytesEncrypted = 0; size_t returnLength = ([encryptData length] + kCCBlockSize3DES) & ~(kCCBlockSize3DES - 1); // NSMutableData* returnBuffer = [NSMutableData dataWithLength:returnLength]; char* returnBuffer = malloc(returnLength * sizeof(uint8_t) ); CCCryptorStatus ccStatus = CCCrypt(kCCEncrypt, kCCAlgorithm3DES , kCCOptionECBMode, keyBuffer, kCCKeySize3DES, nil, [encryptData bytes], [encryptData length], returnBuffer, returnLength, &numBytesEncrypted); if (ccStatus == kCCParamError) NSLog(@"PARAM ERROR"); else if (ccStatus == kCCBufferTooSmall) NSLog(@"BUFFER TOO SMALL"); else if (ccStatus == kCCMemoryFailure) NSLog(@"MEMORY FAILURE"); else if (ccStatus == kCCAlignmentError) NSLog(@"ALIGNMENT"); else if (ccStatus == kCCDecodeError) NSLog(@"DECODE ERROR"); else if (ccStatus == kCCUnimplemented) NSLog(@"UNIMPLEMENTED"); if(ccStatus == kCCSuccess) { NSLog(@"TripleDESEncryptWithKey encrypted: %@", [NSData dataWithBytes:returnBuffer length:numBytesEncrypted]); return [NSData dataWithBytes:returnBuffer length:numBytesEncrypted]; } else return nil; } } I do get a value encrypted using the above code, however it does not match the value from the .NET web service. I believe the issue is that the encryption key I have been supplied by the web service developers is 48 characters long. I see that the iPhone SDK constant "kCCKeySize3DES" is 24. So I SUSPECT, but don't know, that the commoncrypto API call is only using the first 24 characters of the supplied key. Is this correct? Is there ANY way I can get this to generate the correct encrypted pin? I have output the data bytes from the encryption PRIOR to base64 encoding it and have attempted to match this against those generated from the .NET code (with the help of a .NET developer who sent the byte array output to me). Neither the non-base64 encoded byte array nor the final base64 encoded strings match.
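
    The suspicion about the key is consistent with the code above: keyBuffer is capped at kCCKeySize3DES (24) characters plus a terminator, so only the first 24 of the 48 supplied characters ever reach CCCrypt. A 48-character key from a .NET service is very often a hex string encoding the 24 key bytes 3DES actually needs - that is an assumption to confirm with the service developers - and if so, the string should be hex-decoded into 24 bytes on both sides. A sketch of what the .NET side typically looks like under that assumption (the padding mode must also match the iPhone side):

        using System;
        using System.Security.Cryptography;
        using System.Text;

        class TripleDesPin
        {
            // Decodes a hex string ("AB12...") into raw bytes.
            static byte[] HexToBytes(string hex)
            {
                var bytes = new byte[hex.Length / 2];
                for (int i = 0; i < bytes.Length; i++)
                    bytes[i] = Convert.ToByte(hex.Substring(i * 2, 2), 16);
                return bytes;
            }

            static void Main()
            {
                // Placeholder key and PIN; the real 48-character key comes from the service team.
                string hexKey = "0123456789ABCDEFFEDCBA987654321089ABCDEF01234567";
                string pin = "1234";

                byte[] key = HexToBytes(hexKey);        // 24 bytes - the size 3DES actually wants

                using (var tdes = new TripleDESCryptoServiceProvider())
                {
                    tdes.Key = key;
                    tdes.Mode = CipherMode.ECB;
                    tdes.Padding = PaddingMode.PKCS7;   // .NET's default; confirm it matches the client

                    byte[] plain = Encoding.UTF8.GetBytes(pin);
                    using (ICryptoTransform encryptor = tdes.CreateEncryptor())
                    {
                        byte[] cipher = encryptor.TransformFinalBlock(plain, 0, plain.Length);
                        Console.WriteLine(Convert.ToBase64String(cipher));
                    }
                }
            }
        }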

    Read the article

  • a4j:jsFunction with actionListener inside of h:dataTable

    - by JQueryNeeded
    Hello all, I'm having problem with using a4j:jsFunction with actionListener inside of h:dataTable, when I want to invoke an action over particular row with a4j:commandLink it works flawless but when I want to invoke the action with a4j:jsFunction & actionListener it's always invoked over the last element in dataTable Let me give you an example: <a4j:form ajaxSubmit="true" reRender="mainForm" id="mainForm"> <a4j:region> <t:saveState value="#{ts.list}" /> </a4j:region> <h:dataTable value="#{ts.list}" var="el" binding="#{ts.bind}"> <h:column>#{el}</h:column>> <h:column> <a4j:commandLink actionListener="#{ts.rem}"> <h:outputText value="delete by CMDLink" /> </a4j:commandLink> </h:column> <h:column> <a href="#" onclick="okClicked();">delete by okClicked</a> <a4j:jsFunction name="okClicked" actionListener="#{ts.rem}" /> </h:column> </h:dataTable> </a4j:form> now, the bean's code: package com.sth; import java.util.ArrayList; import java.util.List; import javax.faces.component.UIData; import javax.faces.event.ActionEvent; public class Ts { private List<String> list = new ArrayList<String>(); private UIData bind; public Ts(){ list.add("element1"); list.add("element2"); list.add("element3"); list.add("element4"); } public List<String> getList() { return list; } public void setList(List<String> list) { this.list = list; } public void rem(ActionEvent ae) { String toRem = (String) bind.getRowData(); System.out.println("Deleting " + toRem); list.remove(toRem); } public UIData getBind() { return bind; } public void setBind(UIData bind) { this.bind = bind; } } when I use a4j:commandLink to remove element, it works as its expected, but when I use a4j:jsFunction to invoke actionListener it invokes action against last element :( Any ideas? Cheers

    Read the article

  • Integrating POP3 client functionality into a C# application?

    - by flesh
    I have a web application that requires a server based component to periodically access POP3 email boxes and retrieve emails. The service then needs to process the emails which will involve: Validating the email against some business rules (does it contain a valid reference in the subject line, which user sent the mail, etc.) Analysing and saving any attachments to disk Take the email body and attachment details and create a new item in the database Or update an existing item where the reference matches the incoming email subject line What is the best way to approach this? I really don't want to have to write a POP3 client from scratch, but I need to be able to customize the processing of emails. Ideally I would be able to plug in some component that does the access and retrieval for me, returning arrays of attachments, body text, subject line, etc. ready for my processing... [ UPDATE: Reviews ] OK, so I have spent a fair amount of time looking into (mainly free) .NET POP3 libraries so I thought I'd provide a short review of some of those mentioned below and a few others: Pop3.net - free - works OK, very basic in terms of functionality provided. This is pretty much just the POP3 commands and some base64 encoding, but it's very straight forward - probably a good introduction Pop3 Wizard - commercial / some open source code - couldn't get this to build, missing DLLs, I wouldn't bother with this C#Mail - free - works well, comes with Mime parser and SMTP client, however the comments are in Japanese (not a big deal) and it didn't work with SSL 'out of the box' - I had to change the SslStream constructor after which it worked no problem OpenPOP - free - hasn't been updated for about 5 years so it's current state is .NET 1.0, doesn't support SSL but that was no problem to resolve - I just replaced the existing stream with an SslStream and it worked. Comes with Mime parser. Of the free libraries, I'd go for C#Mail or OpenPOP. I looked at a few commercial libraries: Chillkat, Rebex, RemObjects, JMail.net. Based on features, price and impression of the company I would probably go for Rebex and may in the future if my requirements change or I run into production issues with either of C#Mail or OpenPOP. In case anyone's needs it, this is the replacement SslStream constructor that I used to enable SSL with C#Mail and OpenPOP: SslStream stream = new SslStream(clientSocket.GetStream(), false, delegate(object sender, X509Certificate cert, X509Chain chain, SslPolicyErrors errors) { return true; });
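
    For the processing side, one way to keep the POP3 library swappable is to hide retrieval behind a small interface and write the business rules against plain message objects; the sketch below is illustrative only - the type and member names are not any particular library's API, and the REF-style subject rule is a made-up example.

        using System;
        using System.Collections.Generic;
        using System.Text.RegularExpressions;

        // Retrieval hidden behind an interface so any POP3 library can be plugged in;
        // only the processing below is application code.
        public interface IMailRetriever
        {
            IEnumerable<IncomingMail> FetchNewMessages();
        }

        public class IncomingMail
        {
            public string Subject { get; set; }
            public string Body { get; set; }
            public IList<MailAttachment> Attachments { get; set; }
        }

        public class MailAttachment
        {
            public string FileName { get; set; }
            public byte[] Content { get; set; }
        }

        public class MailProcessor
        {
            // Example business rule: the subject must carry a reference like "REF-12345".
            private static readonly Regex ReferencePattern = new Regex(@"REF-\d+");

            public void ProcessAll(IMailRetriever retriever)
            {
                foreach (IncomingMail mail in retriever.FetchNewMessages())
                {
                    Match reference = ReferencePattern.Match(mail.Subject ?? string.Empty);
                    if (!reference.Success)
                        continue;   // fails validation - skip or route to manual review

                    foreach (MailAttachment attachment in mail.Attachments)
                        SaveAttachmentToDisk(reference.Value, attachment);

                    CreateOrUpdateItem(reference.Value, mail);
                }
            }

            private void SaveAttachmentToDisk(string reference, MailAttachment attachment) { /* write to disk */ }
            private void CreateOrUpdateItem(string reference, IncomingMail mail) { /* insert or update the DB item */ }
        }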

    Read the article

  • Removing expired certificates from LDS (new ver of ADAM)

    - by jonthebrewer
    Hi all. This is my situation: We are in the process of replacing a certificate store currently hosted on Sun's iPlanet with Microsoft's Lightweight Directory Services (new version of ADAM with Server 2008). These certificates have been imported into LDS into an application partition (say o=myorg, C=AU). Under this structure I have around 40,000 OUs, each one representing a customer; under each customer's OU are one or more user (iNetOrg) objects (around 60,000 in all). In each user are one or more certificates in the UserCertificate attribute. A combination of in-house written application code and proprietary PKI code reads and publishes these certificates to validate financial transactions. As the LDAP path of the certificates is stored within the customer certificates (and within the application code) and there is zero appetite for changing any of the code, I have had to pick up the iPlanet directory as a whole and dump it in LDS in the same structure. (I will not be using or hosting a Microsoft CA, just implementing an LDAP compliant directory to host these certificates.) We have fully tested the application using the data in LDS and everything works fine - here is my dilemma and question (finally, phew!). There was no process put in place for removing revoked or expired certificates, consequently the vast majority of the data is completely useless; the system has been running for about 8 years! I have done a quick analysis and I estimate that at least 80% of the data is no longer valid. As I am taking on responsibility for managing the directory I would like to start with a clean directory. Does anyone have any idea how I can clean up these expired certificates? I am not a highly experienced scripter but have some background in VB. I have been researching the use of CAPICOM and have a feeling it may be usable here, but I am not sure exactly how. I would prefer to write a script where I could specify an expiration date (say, any certs that expired prior to 2010) and then run it against the LDS partition. That way I can reuse the script periodically to clean up the directory (as mentioned above, I have no way to adjust the applications that are writing the certs; that is with a third party). Another, less attractive, alternative is to massage the LDIF file (2.7 million lines!) to rip the certs out prior to the import. Any help and advice MUCH appreciated. Cheers Jon
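
    A sketch of one way to do the cleanup from .NET rather than VBScript/CAPICOM: System.DirectoryServices can read the userCertificate values straight out of LDS and X509Certificate2 exposes each blob's expiry date. The connection string, port, cutoff date and LDAP filter below are placeholder assumptions to adjust for the real partition.

        using System;
        using System.Collections.Generic;
        using System.DirectoryServices;
        using System.Security.Cryptography.X509Certificates;

        class ExpiredCertCleanup
        {
            static void Main()
            {
                // Server, port, partition DN and cutoff are placeholders to adjust.
                DateTime cutoff = new DateTime(2010, 1, 1);
                string partitionPath = "LDAP://localhost:389/o=myorg,c=AU";

                using (var root = new DirectoryEntry(partitionPath))
                using (var searcher = new DirectorySearcher(root))
                {
                    searcher.Filter = "(&(objectClass=inetOrgPerson)(userCertificate=*))";
                    searcher.PageSize = 1000;           // paged search for ~60,000 users

                    using (SearchResultCollection results = searcher.FindAll())
                    {
                        foreach (SearchResult result in results)
                        {
                            using (DirectoryEntry user = result.GetDirectoryEntry())
                            {
                                PropertyValueCollection certs = user.Properties["userCertificate"];
                                var expired = new List<object>();

                                foreach (object raw in certs)
                                {
                                    try
                                    {
                                        var cert = new X509Certificate2((byte[])raw);
                                        if (cert.NotAfter < cutoff)
                                            expired.Add(raw);
                                    }
                                    catch (Exception)
                                    {
                                        // Blob did not parse as a certificate - leave it for manual review.
                                    }
                                }

                                if (expired.Count > 0)
                                {
                                    foreach (object raw in expired)
                                        certs.Remove(raw);
                                    user.CommitChanges();
                                }
                            }
                        }
                    }
                }
            }
        }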

    Read the article

  • Binding a wpf listbox to a combobox

    - by user293545
    Hi there, I have created a very basic wpf application that I want to use to record time entries against different projects. I havent used mvvm for this as I think its an overkill. I have a form that contains a combobox and a listbox. I have created a basic entity model like this What I am trying to do is bind the combobox to Project and whenever I select an item from the combobox it updates the listview with the available tasks associated with that project. This is my xaml so far. I dont have any code behind as I have simply clicked on that Data menu and then datasources and dragged and dropped the items over. The application loads ok and the combobox is been populated however nothing is displaying in the listbox. Can anyone tell me what I have missed? <Window.Resources> <CollectionViewSource x:Key="tasksViewSource" d:DesignSource="{d:DesignInstance l:Task, CreateList=True}" /> <CollectionViewSource x:Key="projectsViewSource" d:DesignSource="{d:DesignInstance l:Project, CreateList=True}" /> </Window.Resources> <Grid DataContext="{StaticResource tasksViewSource}"> <l:NotificationAreaIcon Text="Time Management" Icon="Resources\NotificationAreaIcon.ico" MouseDoubleClick="OnNotificationAreaIconDoubleClick"> <l:NotificationAreaIcon.MenuItems> <forms:MenuItem Text="Open" Click="OnMenuItemOpenClick" DefaultItem="True" /> <forms:MenuItem Text="-" /> <forms:MenuItem Text="Exit" Click="OnMenuItemExitClick" /> </l:NotificationAreaIcon.MenuItems> </l:NotificationAreaIcon> <Button Content="Insert" Height="23" HorizontalAlignment="Left" Margin="150,223,0,0" Name="btnInsert" VerticalAlignment="Top" Width="46" Click="btnInsert_Click" /> <ComboBox Height="23" HorizontalAlignment="Left" Margin="70,16,0,0" Name="comProjects" VerticalAlignment="Top" Width="177" DisplayMemberPath="Project1" ItemsSource="{Binding Source={StaticResource projectsViewSource}}" SelectedValuePath="ProjectID" /> <Label Content="Projects" Height="28" HorizontalAlignment="Left" Margin="12,12,0,0" Name="label1" VerticalAlignment="Top" IsEnabled="False" /> <Label Content="Tasks" Height="28" HorizontalAlignment="Left" Margin="16,61,0,0" Name="label2" VerticalAlignment="Top" /> <ListBox Height="112" HorizontalAlignment="Left" Margin="16,87,0,0" Name="lstTasks" VerticalAlignment="Top" Width="231" DisplayMemberPath="Task1" ItemsSource="{Binding Path=ProjectID, Source=comProjects}" SelectedValuePath="TaskID" /> <TextBox Height="23" HorizontalAlignment="Left" Margin="101,224,0,0" Name="txtMinutes" VerticalAlignment="Top" Width="42" /> <Label Content="Mins to Insert" Height="28" HorizontalAlignment="Left" Margin="12,224,0,0" Name="label3" VerticalAlignment="Top" /> <Button Content="None" Height="23" HorizontalAlignment="Left" Margin="203,223,0,0" Name="btnNone" VerticalAlignment="Top" Width="44" /> </Grid>
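
    For the master-detail part, the ListBox usually needs to bind to the ComboBox's SelectedItem rather than to Source=comProjects (in XAML that would be Path=SelectedItem.Tasks with ElementName=comProjects), assuming the generated Project entity exposes a Tasks navigation property - the property name is an assumption. The same wiring in code-behind, as a sketch:

        using System.Windows.Controls;
        using System.Windows.Data;

        static class MasterDetailWiring
        {
            // Rebinds the task list to whatever Project is selected in the combo box.
            // Assumes the generated Project entity exposes a "Tasks" collection property.
            public static void Wire(ComboBox projectsCombo, ListBox taskList)
            {
                var tasksBinding = new Binding("SelectedItem.Tasks") { Source = projectsCombo };
                taskList.SetBinding(ItemsControl.ItemsSourceProperty, tasksBinding);
            }
        }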

    Read the article

  • MVC2 Controller is passed a null object as a parameter

    - by Steve Wright
    I am having an issue with a controller getting a null object as a parameter: [HttpGet] public ActionResult Login() { return View(); } [HttpPost] public ActionResult Login(LoginViewData userLogin) { Assert.IsNotNull(userLogin); // FAILS if (ModelState.IsValid) { } return View(userLogin); } The LoginViewData is being passed as null when the HttpPost is called: Using MvcContrib.FluentHtml: <h2>Login to your Account</h2> <div id="contact" class="rounded-10"> <%using (Html.BeginForm()) { %> <fieldset> <ol> <li> <%= this.TextBox(f=>f.UserLogin).Label("Name: ", "name") %> <%= Html.ValidationMessageFor(m => m.UserLogin) %> </li> <li> <%= this.Password(u => u.UserPassword).Label("Password:", "name") %> <%= Html.ValidationMessageFor(m => m.UserPassword) %> </li> <li> <%= this.CheckBox(f => f.RememberMe).LabelAfter("Remember Me")%> </li> <li> <label for="submit" class="name">&nbsp;</label> <%= this.SubmitButton("Login")%> </li> </ol> </fieldset> <% } %> <p>If you forgot your user name or password, please use the Password Retrieval Form.</p> </div> The view inherits from MvcContrib.FluentHtml.ModelViewPage and is strongly typed against the LoginViewData object: public class LoginViewData { [Required] [DisplayName("User Login")] public string UserLogin { get; set; } [Required] [DisplayName("Password")] public string UserPassword { get; set; } [DisplayName("Remember Me?")] public bool RememberMe { get; set; } } Any ideas on why this would be happening? UPDATE I rebuilt the web project from scratch and that fixed it. I am still concerned why it happened.
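
    One commonly reported cause of a null model in MVC 1/2 is the action parameter name (userLogin) matching one of the posted field names (UserLogin), which makes the DefaultModelBinder treat that field as a binding prefix; renaming the parameter is a cheap thing to rule out, and TryUpdateModel surfaces binding failures in ModelState rather than as a silent null. A hedged sketch - the rename is illustrative, not a confirmed fix for this case:

        [HttpPost]
        public ActionResult Login(LoginViewData model)   // parameter renamed away from "userLogin"
        {
            if (model == null)
            {
                // Fallback diagnostic: explicit binding surfaces failures in ModelState
                // instead of a silently null parameter.
                model = new LoginViewData();
                TryUpdateModel(model);
            }

            if (ModelState.IsValid)
            {
                // authenticate...
            }

            return View(model);
        }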

    Read the article

  • Server side Xforms form validation and integration into ASP.NET

    - by Nigel
    I have recently been investigating methods of creating web-based forms for an ASP.NET web application that can be edited and managed at runtime. For example an administrator might wish to add a new validation rule or a new set of fields. The holy grail would provide a means of specifying a form along with (potentially very complex) arbitrary validation rules, and allocation of data sources for each field. The specification would then be used to update the deployed form in the web application which would then validate submissions both on the client side and on the server side. My investigations led me to Xforms and a number of technologies that support it. One solution appears to be IBM Lotus Forms, but this requires a very large investment in terms of infrastructure, which makes it infeasible, although the forms designer may be useful as a stand-alone tool for creating the forms. I have also discounted browser plug-ins as the form must be publicly visible and cross-browser compliant. I have noticed that there are numerous javascript libraries that provide client side implementations given an Xforms schema. These would provide a partial solution but server side validation is still a requirement. Another option seems to involve the use of server side solutions such as the Java application Orbeon. Orbeon provides a tool for specifying the forms (although not as rich as Lotus Forms Designer), but the most interesting point is that it can translate an XForms schema into an XHTML form complete with validation. The fact that it is written in Java is not a big problem if it is possible to integrate with the existing ASP.NET application. So my question is whether anyone has done this before. It sounds like a problem that should have been solved but is inherently very complex. It seems possible to use an off-the-shelf tool to design the form and export it to an Xforms schema and xhtml form, and it seems possible to take that xforms schema and form and publish it using a client side library. What seems to be difficult is providing a means of validating the form submission on the server side and integrating the process nicely with .NET (although it seems the .NET community doesn't involve themselves with XForms; please correct me if I'm wrong on this count). I would be more than happy if a product provided something simple like a web service that could validate a submission against a schema. Maybe Orbeon does this but I'd be grateful if somebody in the know could point me in the right direction before I research it further. Many thanks.
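
    If the fallback really is "a web service that validates a submission against a schema", the structural/datatype half of that is straightforward in .NET with XmlReader schema validation - a minimal sketch is below. Note it only enforces the XML Schema itself, not XForms bind constraints, which would still need something like Orbeon.

        using System.Collections.Generic;
        using System.IO;
        using System.Xml;
        using System.Xml.Schema;

        public static class SubmissionValidator
        {
            // Returns the list of schema-validation errors for a submitted XML instance.
            public static IList<string> Validate(string submissionXml, string schemaPath)
            {
                var errors = new List<string>();

                var settings = new XmlReaderSettings { ValidationType = ValidationType.Schema };
                settings.Schemas.Add(null, schemaPath);   // target namespace taken from the XSD
                settings.ValidationEventHandler += (sender, e) => errors.Add(e.Message);

                using (var text = new StringReader(submissionXml))
                using (XmlReader reader = XmlReader.Create(text, settings))
                {
                    while (reader.Read()) { /* validation happens as the document is read */ }
                }

                return errors;
            }
        }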

    Read the article

  • Which functions in the C standard library commonly encourage bad practice?

    - by Ninefingers
    Hello all, This is inspired by this question and the comments on one particular answer in that I learnt that strncpy is not a very safe string handling function in C and that it pads zeros, until it reaches n, something I was unaware of. Specifically, to quote R.. strncpy does not null-terminate, and does null-pad the whole remainder of the destination buffer, which is a huge waste of time. You can work around the former by adding your own null padding, but not the latter. It was never intended for use as a "safe string handling" function, but for working with fixed-size fields in Unix directory tables and database files. snprintf(dest, n, "%s", src) is the only correct "safe strcpy" in standard C, but it's likely to be a lot slower. By the way, truncation in itself can be a major bug and in some cases might lead to privilege elevation or DoS, so throwing "safe" string functions that truncate their output at a problem is not a way to make it "safe" or "secure". Instead, you should ensure that the destination buffer is the right size and simply use strcpy (or better yet, memcpy if you already know the source string length). And from Jonathan Leffler Note that strncat() is even more confusing in its interface than strncpy() - what exactly is that length argument, again? It isn't what you'd expect based on what you supply strncpy() etc - so it is more error prone even than strncpy(). For copying strings around, I'm increasingly of the opinion that there is a strong argument that you only need memmove() because you always know all the sizes ahead of time and make sure there's enough space ahead of time. Use memmove() in preference to any of strcpy(), strcat(), strncpy(), strncat(), memcpy(). So, I'm clearly a little rusty on the C standard library. Therefore, I'd like to pose the question: What C standard library functions are used inappropriately/in ways that may cause/lead to security problems/code defects/inefficiencies? In the interests of objectivity, I have a number of criteria for an answer: Please, if you can, cite design reasons behind the function in question i.e. its intended purpose. Please highlight the misuse to which the code is currently put. Please state why that misuse may lead towards a problem. I know that should be obvious but it prevents soft answers. Please avoid: Debates over naming conventions of functions (except where this unequivocably causes confusion). "I prefer x over y" - preference is ok, we all have them but I'm interested in actual unexpected side effects and how to guard against them. As this is likely to be considered subjective and has no definite answer I'm flagging for community wiki straight away. I am also working as per C99.

    Read the article

  • Problem in linking an nasm code

    - by Stefano
    I'm using a computer with an Intel Core 2 CPU and 2GB of RAM. The SO is Ubuntu 9.04. When I try to compile this code: ;programma per la simulazione di un terminale su PC, ottenuto utilizzando l'8250 ;in condizione di loopback , cioè Tx=Rx section .code64 section .data TXDATA EQU 03F8H ;TRASMETTITORE RXDATA EQU 03F8H ;RICEVITORE BAUDLSB EQU 03F8H ;DIVISORE DI BAUD RATE IN LSB BAUDMSB EQU 03F9H ;DIVISORE DI BAUD RATE IN MSB INTENABLE EQU 03F9H ;REGISTRO DI ABILITAZIONE DELL'INTERRUZIONE INTIDENTIF EQU 03FAH ;REGISTRO DI IDENTIFICAZIONE DELL'INTERRUZIONE LINECTRL EQU 03FBH ;REGISTRO DI CONTROLLO DELLA LINEA MODEMCTRL EQU 03FCH ;REGISTRO DI CONTROLLO DEL MODEM LINESTATUS EQU 03FDH ;REGISTRO DI STATO DELLA LINEA MODEMSTATUS EQU 03FEH ;REGISTRO DI STATO DEL MODEM BAUDRATEDIV DW 0060H ;DIVISOR: LOW=60, HIGH=00 -BAUD =9600 COUNTERCHAR DB 0 ;CHARACTER COUNTER ;DW 256 DUP (?) section .text global _start _start: ;PROGRAMMAZIONE 8250 MOV DX,LINECTRL MOV AL,80H ;BIT 7=1 PER INDIRIZZARE IL BAUD RATE OUT DX,AL MOV DX,BAUDLSB MOV AX,BAUDRATEDIV ;DEFINISCO FATTORE DI DIVISIONE OUT DX,AL MOV DX,BAUDMSB MOV AL,AH OUT DX,AL ;MSB MOV DX,LINECTRL MOV AL,00000011B ;8 BIT DATO, 1 STOP, PARITA' NO OUT DX,AL MOV DX,MODEMCTRL MOV AL,00010011B ;BIT 4=0 PER NO LOOPBACK OUT DX,AL MOV DX,INTENABLE XOR AL,AL ;DISABILITO TUTTI GLI INTERRUPTS OUT DX,AL CICLO: MOV DX,LINESTATUS IN AL,DX ;LEGGO IL REGISTRO DI STATO DELLA LINEA TEST AL,00011110B ;VERIFICO GLI ERRORI (4 TIPI) JNE ERRORI TEST AL,01H ;VERIFICO Rx PRONTO JNE LEGGOCHAR TEST AL,20H ;VERIFICO Tx VUOTO JE CICLO ;SE SI ARRIVA A QUESTO PUNTO ALLORA L'8250 è PRONTO PER TRASMETTERE UN NUOVO CARATTERE MOV AH,1 INT 80H JE CICLO ;SE SI ARRIVA A QUESTO PUNTO SIGNIFICA CHE ESISTE UN CARATTERE DA TASTIERA MOV AH,0 INT 80H ;Al CONTIENE IL CARATTERE DELLA TASTIERA MOV DX,3F8H OUT DX,AL JMP CICLO LEGGOCHAR: MOV AL,[COUNTERCHAR] INC AL CMP AL,15 JE FINE MOV [COUNTERCHAR],AL MOV DX,TXDATA IN AL,DX ;AL CONTIENE IL CARATTERE RICEVUTO AND AL,7FH ;POICHè VI SONO 7 BIT DI DATO ;VISUALIZZAZIONE DEL CARATTERE MOV BX,0 MOV AH,14 INT 80H POP AX CMP AL,0DH ;CONTROLLO SE RETURN JNE CICLO ;CAMBIO RIGA DI VISUALIZZAZIONE MOV AL,0AH MOV BX,0 MOV AH,14 ;INT 10H INT 80H JMP CICLO ;GESTIONE ERRORI ERRORI: MOV DX,3F8H IN AL,DX MOV AL,'?' MOV BX,0 MOV AH,14 INT 80H JMP CICLO FINE: XOR AH,AH MOV AL,03 INT 80H When I compile this code "NASM -f bin UARTLOOP.asm", the compiler can create the UARTLOOP.o file without any error. When I try to link the .o file with "ld UARTLOOP.o" it tells: UARTLOOP.o: In function `_start': UARTLOOP.asm:(.text+0xd): relocation truncated to fit: R_X86_64_16 against `.data' Have u got some ideas to solve this problem? Thx =)

    Read the article

  • multiple stateful iframes per page will overwrite JSESSIONID?

    - by Nikita
    Hello, Looking for someone to either confirm or refute my theory that deploying two iframes pointing to two different stateful pages on the same domain can lead to JSESSIONIDs being overwritten. Here's what I mean: Setup suppose you have two pages that require HttpSession state (session affinity) to function correctly - deployed at http://www.foo.com/page1 and http://www.foo.com/page2 assume www.foo.com is a single host running a Tomcat (6.0.20, fwiw) that uses JSESSIONID for session id's. suppose these pages are turned into two iframe widgets to be embedded on 3rd party sites: http://www.site.com/page1" / (and /page2 respectively) suppose there a 3rd party site that wishes to place both widgets on the same page at http://www.bar.com/foowidgets.html Can the following race condition occur? a new visitor goes to http://www.bar.com/foowidgets.html browser starts loading URLs in foowidgets.html including the two iframe 'src' URLs because browsers open multiple concurrent connections against the same host (afaik up to 6 in chrome/ff case) the browser happens to simultaneously issue requests for http://www.foo.com/page1 and http://www.foo.com/page2 The tomcat @ foo.com receives both requests at about the same time, calls getSession() for the first time (on two different threads) and lazily creates two HttpSessions and, thus, two JSESSIONIDs, with values $Page1 and $Page2. The requests also stuff data into respective sessions (that data will be required to process subsequent requests) assume that the browser first receives response to the page1 request. Browser sets cookie JSESSIONID=$Page1 for HOST www.foo.com next response to the page2 request is received and the browser overwrites cookie JSESSIONID for HOST www.foo.com with $Page2 user clicks on something in 'page1' iframe on foowidgets.html; browser issues 2nd request to http://www.foo.com/page1?action=doSomethingStateful. That request carries JSESSIONID=$Page2 (and not $Page1 - because cookie value was overwritten) when foo.com receives this request it looks up the wrong HttpSession instance (because JSESSIONID key is $Page2 and NOT $Page1). Foobar! Can the above happen? I think so, but would appreciate a confirmation. If the above is clearly possible, what are some solutions given that we'd like to support multiple iframes per page? We don't have a firm need for the iframes to share the same HttpSession, though that would be nice. In the event that the solution will still stipulate a separate HttpSession per iframe, it is - of course - mandatory that iframe 1 does not end up referencing httpSession state for iframe 2 instead of own. off top of my head I can think of: map page1 and page2 to different domains (ops overhead) use URL rewriting and never cookies (messes up analytics) anything else? thanks a lot, -nikita

    Read the article

  • T-SQL selecting values that match ISNUMERIC and also are within a specified range (plus Linq-to-sql)

    - by Toby
    I am trying to select rows from a table where one of the (NVARCHAR) columns is within a numeric range. SELECT ID, Value FROM Data WHERE ISNUMERIC(Value) = 1 AND CONVERT(FLOAT, Value) < 66.6 Unfortunately as part of the SQL spec the AND clauses don't have to short circuit (and don't on MSSQL Server EE 2008). More info: http://stackoverflow.com/questions/789231/is-the-sql-where-clause-short-circuit-evaluated My next attempt was to try this to see if I could achieve delayed evaluation of the CONVERT SELECT ID, Value FROM Data WHERE (CASE WHEN ISNUMERIC(Value) = 1 THEN CONVERT(FLOAT, Value) < 66.6 ELSE 0 END) but I cannot seem to use a < (or any comparison) with the result of a CONVERT. It fails with the error Incorrect syntax near '<'. I can get away with SELECT ID, CONVERT(FLOAT, Value) AS Value FROM Data WHERE ISNUMERIC(Value) = 1 So the obvious solution is to wrap the whole select statement in another SELECT and WHERE and return the converted values from the inner select and filter in there where of the outer select. Unfortunately this is where my Linq-to-sql problem comes in. I am filtering not only by one range but potentialy by many, or just by the existance of the record (there are some date range selects and comparisons I've left out.) Essentially I would like to be able to generate something like this: SELECT ID, TypeID, Value FROM Data WHERE (TypeID = 4 AND ISNUMERIC(Value) AND CONVERT(Float, Value) < 66.6) OR (TypeID = 8 AND ISNUMERIC(Value) AND CONVERT(Float, Value) > 99) OR (TypeID = 9) (With some other clauses in each of those where options.) This clearly doesn't work if I filter out the non-ISNUMERIC values in an inner select. As I mentioned I am using Linq-to-sql (and PredicateBulider) to build up these queries but unfortunately Datas.Where(x => ISNUMERIC(x.Value) ? Convert.ToDouble(x.Value) < 66.6 : false) Gets converted to this which fails the initial problem. WHERE (ISNUMERIC([t0].[Value]) = 1) AND ((CONVERT(Float,[t0].[Value])) < @p0) My last resort will have to be to outer join against a double select on the same table for each of the comparisons but this isn't really an idea solution. I was wondering if anyone has run into similar issues before?
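
    One pragmatic workaround from the LINQ-to-SQL side is to let the server do only the cheap, translatable filtering and finish the numeric test in memory, where && genuinely short-circuits and decimal.TryParse stands in for ISNUMERIC; the table and property names below are assumptions based on the query above. (On the pure T-SQL side, the CASE form generally works once the comparison is moved outside the CASE: CASE WHEN ISNUMERIC(Value) = 1 THEN CONVERT(FLOAT, Value) END < 66.6.)

        using System.Collections.Generic;
        using System.Linq;

        static class NumericValueFilter
        {
            // dataTable is the LINQ-to-SQL queryable for the Data rows above
            // (e.g. db.Datas - the exact property name and the int ID type are assumptions).
            public static IEnumerable<KeyValuePair<int, decimal>> ValuesBelow(
                IQueryable<Data> dataTable, int typeId, decimal limit)
            {
                // The cheap, translatable predicate runs on the server...
                var candidates = dataTable
                    .Where(d => d.TypeID == typeId)
                    .Select(d => new { d.ID, d.Value })
                    .AsEnumerable();                    // ...everything below runs in memory

                foreach (var row in candidates)
                {
                    decimal v;
                    if (decimal.TryParse(row.Value, out v) && v < limit)
                        yield return new KeyValuePair<int, decimal>(row.ID, v);
                }
            }
        }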

    Read the article

  • How to arrange models, views, controllers in a Kohana 3 project

    - by Pekka
    I'm looking at how to set up a mid-sized web application with Kohana 3. I have implemented MVC patterns in the past but never worked against a "formalized" MVC framework so I'm still getting my head around the terminology - toying around with basic examples, building views and templates, and so on. I'm progressing fairly well but I want to set up a real-world web project (one of my own that I've been planning for quite some time now) as a learning object. I learn best by example, but example-based documentation is a bit sparse for Kohana 3 right now - they say so themselves on the site. While I'm not worried about getting into the framework soon enough, I'm a bit concerned about arranging a healthily structured code base from the start - i.e. how to split up controllers, how to name them, and how to separate the functionality into the appropriate models. My application could, in its core, be described as a business directory with a main businesses table. Businesses can be listed by category and by street name. Each business has a detail page. Business owners can log in and edit their business's entry. Businesses can post offers into an offers table. I know this is not very detailed, but I don't want to cram too much information into this question. I'll be more than happy to go into more detail if needed. Supposing I have all the basic functionality worked out and in place already - list all businesses, edit business, list businesses by street name, create offer logged in as business, and so on, and I'm just looking for how to fit the functionality into a MVC pattern and into a Kohana application structure that can be easily extended: Do you know real-life, publicly accessible examples of "database-heavy" applications like directories, online communities... with a log-in area built on Kohana 3 where I could take a peek how they do it? Are there conventions or best practices on how to structure an extendable login area for end users in a Kohana project that is not only able to handle a business directory page, but further products on separate pages as well? Do you know application structuring HOWTOs or best practices for Kohana 3 not mentioned in the user guide and the inofficial Wiki? Have you built something similar and could give me some recommendations?

    Read the article

  • Disable Autocommit in H2 with Hibernate/C3P0?

    - by HDave
    I have a JPA/Hibernate application and am trying to get it to run against H2 (as well as other databases). Currently I am using Atomikos for transaction and C3P0 for connection pooing. Despite my best efforts I am still seeing this in the log file (and DAO integration tests are failing): [20100613 23:06:34] DEBUG [main] SessionFactoryImpl.(242) | instantiating session factory with properties: .....edited for brevity.... hibernate.connection.autocommit=true, ....more stuff follows The connection URL to H2 has AUTOCOMMIT=OFF, but according to the H2 documentation: this will not work as expected when using a connection pool (the connection pool manager will re-enable autocommit when returning the connection to the pool, so autocommit will only be disabled the first time the connection is used So I figured (apparently correctly) that Hibernate is where I'll have to indicate I want autocommit off. I found the autocommit property documented here and I put it in my EntityManagerFactory config as follows: <bean id="myappTestLocalEmf" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"> <property name="persistenceUnitName" value="myapp-core" /> <property name="persistenceUnitPostProcessors"> <bean class="com.myapp.core.persist.util.JtaPersistenceUnitPostProcessor"> <property name="jtaDataSource" ref="myappPersistTestJdbcDataSource" /> </bean> </property> <property name="jpaVendorAdapter"> <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter"> <property name="showSql" value="true" /> <property name="database" value="$DS{hibernate.database}" /> <property name="databasePlatform" value="$DS{hibernate.dialect}" /> </bean> </property> <property name="jpaProperties"> <props> <prop key="hibernate.transaction.factory_class">com.atomikos.icatch.jta.hibernate3.AtomikosJTATransactionFactory</prop> <prop key="hibernate.transaction.manager_lookup_class">com.atomikos.icatch.jta.hibernate3.TransactionManagerLookup</prop> <prop key="hibernate.connection.autocommit">false</prop> <prop key="hibernate.format_sql">true"</prop> <prop key="hibernate.use_sql_comments">true</prop> </property> </bean>

    Read the article

  • Lambda Expression to be used in Select() query

    - by jameschinnock
    Hi, I am trying to build a lambda expression, containing two assignments (as shown further down), that I can then pass to a Queryable.Select() method. I am trying to pass a string variable into a method and then use that variable to build up the lambda expression so that I can use it in a LINQ Select query. My reasoning behind it is that I have a SQL Server datasource with many column names; I am creating a charting application that will allow the user to select, say by typing in the column name, the actual column of data they want to view in the y-axis of my chart, with the x-axis always being the DateTime. Therefore, they can essentially choose what data they chart against the DateTime value (it's a data warehouse type app). I have, for example, a class to store the retrieved data in, and hence use as the chart source: public class AnalysisChartSource { public DateTime Invoicedate { get; set; } public Decimal yValue { get; set; } } I have (purely experimentally) built an expression tree for the Where clause using the String value, and that works fine: public void GetData(String yAxis) { using (DataClasses1DataContext db = new DataClasses1DataContext()) { var data = this.FunctionOne().AsQueryable<AnalysisChartSource>(); //just to get some temp data in.... ParameterExpression pe = Expression.Parameter(typeof(AnalysisChartSource), "p"); Expression left = Expression.MakeMemberAccess(pe, typeof(AnalysisChartSource).GetProperty(yAxis)); Expression right = Expression.Constant((Decimal)16); Expression e2 = Expression.LessThan(left, right); Expression expNew = Expression.New(typeof(AnalysisChartSource)); LambdaExpression le = Expression.Lambda(left, pe); MethodCallExpression whereCall = Expression.Call( typeof(Queryable), "Where", new Type[] { data.ElementType }, data.Expression, Expression.Lambda<Func<AnalysisChartSource, bool>>(e2, new ParameterExpression[] { pe })); } } However, I have tried a similar approach for the Select statement, but just can't get it to work, as I need the Select() to populate both X and Y values of the AnalysisChartSource class, like this: .Select(c => new AnalysisChartSource { Invoicedate = c.Invoicedate, yValue = c.yValue}).AsEnumerable(); How on earth can I build such an expression tree - or, possibly more to the point, is there an easier way that I have missed entirely?
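    One way to build that Select selector from the column-name string is Expression.MemberInit, which binds both properties in a single projection. A sketch under the question's own assumptions (AnalysisChartSource as defined above, yAxis naming a Decimal property on it); the helper class and method names are hypothetical:

        using System;
        using System.Linq;
        using System.Linq.Expressions;

        public static class ChartProjection
        {
            // Builds: c => new AnalysisChartSource { Invoicedate = c.Invoicedate, yValue = c.<yAxis> }
            public static IQueryable<AnalysisChartSource> SelectForAxis(
                IQueryable<AnalysisChartSource> data, string yAxis)
            {
                ParameterExpression c = Expression.Parameter(typeof(AnalysisChartSource), "c");

                MemberInitExpression body = Expression.MemberInit(
                    Expression.New(typeof(AnalysisChartSource)),
                    Expression.Bind(
                        typeof(AnalysisChartSource).GetProperty("Invoicedate"),
                        Expression.Property(c, "Invoicedate")),
                    Expression.Bind(
                        typeof(AnalysisChartSource).GetProperty("yValue"),
                        Expression.Property(c, yAxis)));   // property name chosen at runtime

                var selector = Expression.Lambda<Func<AnalysisChartSource, AnalysisChartSource>>(body, c);
                return data.Select(selector);
            }
        }

    If the real source is the generated LINQ-to-SQL entity rather than AnalysisChartSource itself, the same pattern applies; only the parameter type and the property lookups change.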

    Read the article

  • Configuring Unity with a closed generic constructor parameter

    - by fearofawhackplanet
    I've been trying to read the article here but I still can't understand it. I have a constructor resembling the following: IOrderStore orders = new OrderStore(new Repository<Order>(new OrdersDataContext())); The constructor for OrderStore: public OrderStore(IRepository<Order> orderRepository) Constructor for Repository<T>: public Repository(DataContext dataContext) How do I set this up in the Unity config file? UPDATE: I've spent the last few hours banging my head against this, and although I'm not really any closer to getting it right I think at least I can be a little more specific about the problem. I've got my IRepository<T> working ok: <typeAlias alias="IRepository" type="MyAssembly.IRepository`1, MyAssembly" /> <typeAlias alias="Repository" type="MyAssembly.Repository`1, MyAssembly" /> <typeAlias alias="OrdersDataContext" type="MyAssembly.OrdersDataContext, MyAssembly" /> <types> <type type="OrdersDataContext"> <typeConfig> <constructor /> <!-- ensures parameterless constructor used --> </typeConfig> </type> <type type="IRepository" mapTo="Repository"> <typeConfig> <constructor> <param name="dataContext" parameterType="OrdersDataContext"> <dependency /> </param> </constructor> </typeConfig> </type> </types> So now I can get an IRepository like so: IRepository<Order> rep = _container.Resolve<IRepository<Order>>(); and that all works fine. The problem now is when trying to add the configuration for IOrderStore: <type type="IOrderStore" mapTo="OrderStore"> <typeConfig> <constructor> <param name="ordersRepository" parameterType="IRepository"> <dependency /> </param> </constructor> </typeConfig> </type> When I add this, Unity blows up when trying to load the config file. The error message is OrderStore does not have a constructor that takes the parameters (IRepository`1). What I think it is complaining about is that the OrderStore constructor takes a closed IRepository generic type, i.e. OrderStore(IRepository<Order>) and not OrderStore(IRepository<T>). I don't have any idea how to resolve this.
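    The error message suggests the container is trying to match the constructor with the open generic rather than the closed IRepository<Order>. One possible direction - a sketch only, reusing the question's own element structure, with assembly-qualified names guessed from the question - is to alias and map the closed generic explicitly, so the OrderStore constructor parameter can be matched:

        <typeAlias alias="IOrderRepository"
                   type="MyAssembly.IRepository`1[[MyAssembly.Order, MyAssembly]], MyAssembly" />
        <typeAlias alias="OrderRepository"
                   type="MyAssembly.Repository`1[[MyAssembly.Order, MyAssembly]], MyAssembly" />

        <types>
          <!-- closed generic repository, built with the OrdersDataContext -->
          <type type="IOrderRepository" mapTo="OrderRepository">
            <typeConfig>
              <constructor>
                <param name="dataContext" parameterType="OrdersDataContext">
                  <dependency />
                </param>
              </constructor>
            </typeConfig>
          </type>

          <!-- OrderStore takes the closed IRepository<Order> -->
          <type type="IOrderStore" mapTo="OrderStore">
            <typeConfig>
              <constructor>
                <param name="orderRepository" parameterType="IOrderRepository">
                  <dependency />
                </param>
              </constructor>
            </typeConfig>
          </type>
        </types>

    Note that the sketch also spells the parameter name as orderRepository to match the constructor signature in the question (the original config used ordersRepository), which may matter independently of the open/closed generic mismatch.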

    Read the article

  • XCode linking error when targeting armv7.

    - by Tom
    I've already spent countless hours puzzling over this, utilizing Google searches and other Stack Overflow questions to no avail. I have an iPhone/iPad universal application, which seems to compile fine when the target is armv6. However, when the device is iPad, I get this warning: warning: building for SDK 'Device - iPhone OS 3.2' requires an armv7 architecture. Oddly enough, the app still runs great on iPad in spite of this warning. However, I do want to do things the "right way", whatever that means in this case. When I switch the target architecture to armv7, I get linking errors: "___restore_vfp_d8_d15_regs", referenced from: *redacted* "___save_vfp_d8_d15_regs", referenced from: *redacted* ld: symbol(s) not found collect2: ld returned 1 exit status The "redacted" portions of the errors are references to the static library to which I'm trying to link. Here's what I've tried from the many suggestions online. Each of these was suggested more than once without any explanation, which leads me to believe nobody quite understands this problem: "Never use the drop-down menu in the upper left of the XCode window to choose the target. Instead, set this to Base SDK and then the Base SDK to iPhone OS 3.0 in the target configuration. Set the target device to your preferred target (iPad, iPhone OS 3.2 in my situation.)" This yields the error "Library not found for -lcrt1.3.1.o" "Make sure that GCC isn't linking against the wrong version of the standard library. (You'll have to make sure the LIBRARY_SEARCH_PATH doesn't have the wrong path in it.)" My LIBRARY_SEARCH_PATH is already empty, so this doesn't seem relevant. "Try compiling with GCC 4.0 rather than GCC 4.2." I get a syntax error inside a UIKit header file. The error is "Syntax error before 'AT_NAME' token." The line is "UIKIT_EXTERN @interface UILocalizedIndexedCollation : NSObject." Another project compiles just fine with the same target settings, which is really making me question my sanity. Could I be dealing with a corrupt XCode project? If anyone knows what's actually happening and has a reference or doesn't mind explaining it, I would be so very grateful. Cheers!

    Read the article

  • Doubt about adopting CI (Hudson) into an existing automated Build Process (phing, svn)

    - by maraspin
    OUR CURRENT BUILD PROCESS We're a small team of developers (2 to 4 people depending on the project) who currently use Phing to deploy code to a staging environment before going live. We keep our code in an SVN repo, where the trunk holds current active development and, at certain times, we make branches that we test and then (if successful) tag and export to the staging env. If everything goes well there too, we finally deploy them to the production servers. Actions are highly automated, but always triggered by human intervention. THE DOUBT We'd now like to introduce Continuous Integration (with Hudson) into the process; unfortunately we have a few doubts about activity syncing, since we're afraid that CI could somewhat interfere with our build process and cause certain problems. Considering that an automated CI cycle has a certain frequency of automatically executed actions, we in fact only see 2 possible cases for "integration", each with its own problems: Case A: each CI cycle produces a new branch with its own name; we use that name to manually (through Phing, as happens now) export the code from SVN to the staging env. The problem I see here is that (unless specific countermeasures are taken) the number of branches we have can grow out of control (let's suppose we commit often, so that we have a fresh new build/branch every N minutes). Case B: each CI cycle creates a new branch named 'current', for instance, which is tagged with a unique name only when we manually decide to export it to staging; the 'current' branch, in any case, is then deleted as soon as the next CI cycle starts up. The problem we see here is that a new cycle could kick in while someone is tagging/exporting the 'current' branch to staging, thus creating an inconsistent build (but maybe here I'm just being too pessimistic, since I confess I don't know whether SVN offers some built-in protection against this). With all this being said, I was wondering if anyone with similar experiences could be so kind as to give us some hints on the subject, since none of the approaches depicted above looks completely satisfying to us. Is there something important we've just completely left out of the overall picture? Thanks for your attention &, in advance, for your help!

    Read the article
