Search Results

Search found 1343 results on 54 pages for 'digit grouping'.


  • Understanding the 'High Performance' meaning in Extreme Transaction Processing

    - by kyap
    Despite my previous blog entries on SOA/BPM and Identity Management, the domain I am most passionate about is definitely Extreme Transaction Processing, commonly called XTP. I came across XTP back in 2007, while I was still an FMW Product Manager in EMEA. At that time Oracle acquired a company called Tangosol, which owned a unique product called Coherence that we renamed Oracle Coherence. Beyond this innovative renaming of the product, to be honest, I didn't know much about it, except that it was a "distributed in-memory cache for Extreme Transaction Processing"... which is still not very helpful.

    In general, when people don't fully understand a technology or a concept, they tend to find shortcuts, correct or not, to justify their lack of understanding... and of course I was part of this category of individuals. My shortcut was "Oracle Coherence Cache helps to improve Performance". An excellent marketing slogan... but still not very meaningful. By chance I was able to get away from that group quickly, in July 2007* at Thames Valley Park (UK), after I attended one of the most interesting workshops of my 10-year career at Oracle, delivered by Brian Oliver.

    The biggest mistake I made was to assume that the performance improvement with Coherence was about response time. That assumption can be considered legitimate, because after all, caches reduce latency on cached data access and hence reduce response time. But like all caches, you need to define caching and expiration policies, think about a cache-miss strategy, and most of the time partially rewrite your application to work with the cache. As a result, the expected benefit vanishes... so, not very useful then?

    The key mistake was my perception, or obsession, about how performance improvement should be driven, and I strongly believe this is still a common problem for most developers. We all know that the performance of a system is generally presented as Capacity (or Throughput), with two important dimensions, Speed (response time) and Volume (load):

        Capacity (TPS) = Volume (T) / Speed (S)

    To increase the Capacity, we can either reduce the Speed (in terms of response time) or increase the Volume. However, we tend to focus only on reducing the Speed dimension, perhaps because it is more concrete and tangible to measure, and nicer to present to management because there is a direct impact on the end-user experience. On the other hand, we assume the Volume can be addressed by the underlying hardware or software stack, so if we need more capacity (scale out), we just add more hardware or software. Unfortunately, reality proves that IT is never as ideal as we assume...

    The challenge with the Speed-improvement approach is that it is generally difficult and costly to make things that are already fast... faster. And adding Coherence will not necessarily help either. Even if we manage to do so, the Capacity cannot increase forever, because the Speed can be influenced by the Volume. Every system shows the same performance pattern: in any traditional system, increasing the Volume (transactions) will, at some point, also increase the Speed (response time). The reason is simple: most of the time the application logic was not designed to scale. As an example, if you have a while-loop in your application, it is natural to expect that parsing 200 entries will require double the execution time of parsing 100 entries.

    If you need to "speed up" the execution, you can only upgrade your hardware (scale up) with faster CPUs and/or a faster network to reduce latency. That is technically limited and economically inefficient. And this is exactly where XTP and Coherence kick in. The primary objective of XTP is designing applications that can scale out for increasing Volume, by applying coding techniques that keep the execution time as constant as possible, independently of the amount of runtime data being manipulated. It is not just about having an application run as fast as possible, but about having a much more predictable system, with constant response time and linear scaling, so we can easily increase throughput by adding more hardware in parallel. It is generally combined with the Low Latency Programming model, where we try to optimize network usage as much as possible, either from the programming angle (fewer network hops to complete a task) and/or from a hardware angle (faster network equipment).

    In this picture, Oracle Coherence can be considered a software-level XTP enabler, via the Distributed Cache, because it can guarantee:
    - constant data-object access time, independently of the number of objects and the Coherence cluster size
    - data-object distribution by affinity, for in-memory data grouping
    - in-place data processing, for parallel execution

    To summarize, Oracle Coherence is indeed useful to improve your application's performance, just not in the way we commonly think. It's not about the Speed itself, but about the overall Capacity under Extreme Load while keeping the Speed consistent. In the future I will keep adding new blog entries around this topic, with some sample code and experiences I have captured over the last few years. In the meantime, if you want to know more about Oracle Coherence, I strongly suggest you start by checking how our worldwide customers are using Oracle Coherence first, then play with the product through our tutorial. Have Fun!

    Read the article

  • Microsoft Sql Server 2008 R2 System Databases

    For a majority of software developers, little time is spent understanding the inner workings of the database management systems (DBMS) they use to store data for their applications. I personally place myself in this grouping. In my case, I have used various versions of Microsoft's SQL Server (2000, 2005, and 2008 R2) and only recently learned how valuable the system databases really are, when I was preparing to deliver a lecture on "SQL Server 2008 R2, System Databases".

    Microsoft SQL Server 2008 R2 System Databases
    So what are system databases in MS SQL Server, and why should I know them? Microsoft uses system databases to support the SQL Server DBMS, much like a developer uses config files or database tables to support an application. These system databases individually provide specific functionality that allows MS SQL Server to function.

        Name          Database File             Log File
        Master        master.mdf                mastlog.ldf
        Resource      mssqlsystemresource.mdf   mssqlsystemresource.ldf
        Model         model.mdf                 modellog.ldf
        MSDB          msdbdata.mdf              msdblog.ldf
        Distribution  distmdl.mdf               distmdl.ldf
        TempDB        tempdb.mdf                templog.ldf

    Master Database
    If you have used MS SQL Server then you should recognize the Master database, especially if you have used SQL Server Management Studio (SSMS) to connect to a user-created database. MS SQL Server requires the Master database in order for the DBMS to start, due to the information that it stores. Examples of data stored in the Master database: user logins, linked servers, configuration information, and information on user databases.

    Resource Database
    Honestly, I never knew this database even existed until I started to research SQL Server system databases. The reason for this is largely that the Resource database is hidden from users; in fact, the database files are stored within the Binn folder instead of the standard MS SQL Server database folder path. This database contains all system objects that can be accessed by all other databases. In short, this database contains all the system views and stored procedures that appear in every user database to expose system information. One of the many benefits of storing the system views and stored procedures in a single hidden database is that it simplifies upgrading a SQL Server instance; not to mention that maintenance is decreased, since only one code base has to be maintained for all of the system views and procedures.

    Model Database
    The Model database, as the name implies, is the model for all new databases created by users. This allows for predefining default database objects for all new databases within an MS SQL Server instance. For example, if every database created by a user needs to have an "Audit" table when it is created, then defining the "Audit" table in the Model database guarantees that the table will be present in every new database created after the Model is altered.

    MSDB Database
    The MSDB database is used by SQL Server Agent, SQL Server Database Mail, and SQL Server Service Broker, along with SQL Server itself. The SQL Server Agent uses this database to store job configurations and SQL job schedules, along with SQL alerts and operators. In addition, this database also stores all SQL job parameters along with each job's execution history. Finally, this database is also used to store database backup and maintenance plans, as well as details pertaining to SQL log shipping if it is being used.

    Distribution Database
    The Distribution database is only used during replication and stores metadata and history information pertaining to the act of replicating data. Furthermore, when transactional replication is used, this database also stores information regarding each transaction. It is important to note that replication is not turned on by default in MS SQL Server and that the Distribution database is hidden from SSMS.

    Tempdb Database
    The Tempdb database, as the name implies, is used to store temporary data and data objects; examples include temp tables and temp stored procedures. It is important to note that all data and data objects are cleared from this database when SQL Server restarts. This database is also used by SQL Server when it is performing internal operations; typically, SQL Server uses it for large sort and index operations. Finally, this database is used to store row versions if row versioning or snapshot-isolation transactions are being used by SQL Server.

    Additionally, I would love to hear from others about their experiences using system databases, tables, and objects in real-world environments.
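
    As a quick way to see most of these system databases on your own instance, the following C# sketch queries sys.databases over ADO.NET (the connection string is an assumption; point it at your own server). Note that the Resource database is not expected to appear here, since it is hidden from the catalog views:

        using System;
        using System.Data.SqlClient;

        class ListSystemDatabases
        {
            static void Main()
            {
                // Hypothetical connection string - adjust for your own instance.
                const string connectionString =
                    @"Server=.\SQLEXPRESS;Database=master;Integrated Security=true";

                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(
                    // database_id 1-4 are master, tempdb, model and msdb; the distribution
                    // database (if replication is configured) shows up like a user database.
                    "SELECT name, database_id FROM sys.databases WHERE database_id <= 4",
                    connection))
                {
                    connection.Open();
                    using (var reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            Console.WriteLine("{0,-10} (id {1})",
                                reader.GetString(0), reader.GetInt32(1));
                        }
                    }
                }
            }
        }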

    Read the article

  • How to create Captcha in ASP.NET

    - by Samir R. Bhogayta
    1. Create a page named "Captcha.aspx".
    2. No controls are required on this page.
    3. In Captcha.aspx.vb, write the code below:

        Imports System.Drawing
        Imports System.Drawing.Imaging
        Imports System.Drawing.Text

        Protected Sub Page_Load(sender As Object, e As System.EventArgs) Handles Me.Load
            'Create a Bitmap object and set its width and height.
            Dim objBMP As Bitmap = New Bitmap(180, 51)
            'Create a Graphics object and draw onto the bitmap.
            Dim objGraphics As Graphics = Graphics.FromImage(objBMP)
            objGraphics.Clear(Color.White)
            objGraphics.TextRenderingHint = TextRenderingHint.AntiAlias
            Dim objFont As Font = New Font("arial", 30, FontStyle.Bold)
            'Generate a random 6-character code.
            Dim randomStr As String = GeneratePassword()
            'Store the code in session so it can be validated later.
            Session.Add("randomStr", randomStr)
            Session.Add("randomStrCountry", randomStr)
            objGraphics.DrawString(randomStr, objFont, Brushes.Black, 2, 2)
            Response.ContentType = "image/GIF"
            objBMP.Save(Response.OutputStream, ImageFormat.Gif)
            objFont.Dispose()
            objGraphics.Dispose()
            objBMP.Dispose()
        End Sub

        Public Function GeneratePassword() As String
            'Some digits and letters ("i", "l", "o", "1", "0", "I", "O") are omitted
            'because they look too much alike.
            Dim allowedChars As String = "a,b,c,d,e,f,g,h,j,k,m,n,p,q,r,s,t,u,v,w,x,y,z,"
            allowedChars += "A,B,C,D,E,F,G,H,J,K,L,M,N,P,Q,R,S,T,U,V,W,X,Y,Z,"
            allowedChars += "2,3,4,5,6,7,8,9"
            Dim sep() As Char = {","c}
            Dim arr() As String = allowedChars.Split(sep)
            Dim passwordString As String = ""
            Dim temp As String
            Dim rand As Random = New Random()
            Dim i As Integer
            For i = 0 To 5   'six characters
                temp = arr(rand.Next(0, arr.Length))
                passwordString += temp
            Next
            Return passwordString
        End Function

    4. Use the page in your .aspx page like this, next to the TextBox where the user types the code:

        <img alt="" border="0" src="Captcha.aspx" style="cursor: move; height: 60px; width: 200px;" />
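
    The steps above only cover generating the image; to actually check what the user typed, compare the posted value against the session value saved in Page_Load. A minimal code-behind sketch (shown in C#, with hypothetical txtCaptcha, btnSubmit and lblResult controls) might look like this:

        protected void btnSubmit_Click(object sender, EventArgs e)
        {
            // "randomStr" is the session key written by Captcha.aspx above.
            string expected = Session["randomStr"] as string;
            string entered = txtCaptcha.Text.Trim();

            if (expected != null && string.Equals(entered, expected, StringComparison.Ordinal))
            {
                lblResult.Text = "Captcha verified.";
                // ...continue processing the form...
            }
            else
            {
                lblResult.Text = "The code does not match the image. Please try again.";
            }
        }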

    Read the article

  • Windows 8 for productivity?

    - by Charles Young
    At long last I’ve started using Windows 8.  I boot from a VHD on which I have installed Office, Visio, Visual Studio, SQL Server, etc.  For a week, now, I’ve been happily writing code and documents and using Visio and PowerPoint.  I am, very much, a ‘productivity’ user rather than a content consumer.   I spend my days flitting between countless windows and browser tabs displayed across dual monitors.  I need to access a lot of different functionality and information in as fluid a fashion as possible. With that in mind, and like so many others, I was worried about Windows 8.  The Metro interface is primarily about content consumption on touch-enabled screens, and not really geared for people like me sitting in front of an 8-core non-touch laptop and an additional Samsung monitor.  I still use a mouse, not my finger.  And I create more than I consume. Clearly, Windows 8 won’t be viable for people like me unless Metro keeps out of my hair when using productivity and development tools.  With this in mind, I had long expected Microsoft to provide some mechanism for switching Metro off.  There was a registry hack in last year’s Developer Preview, but this capability has been removed.   That’s brave.  So, how have things worked out so far? Well, I am really quite surprised.  When I played with the Developer Preview last year, it was clear that Metro was unfinished and didn’t play well enough with the desktop.  Obviously I expected things to improve, but the context switching from desktop to full-screen seemed a heavy burden to place on users.  That sense of abrupt change hasn’t entirely gone away (how could it), but after a few days, I can’t say that I find it burdensome or irritating.   I’ve got used very quickly to ‘gesturing’ with my mouse at the bottom or top right corners of the screen to move between applications, using the Windows key to toggle the Start screen and generally finding my way around.   I am surprised at how effective the Start screen is, given the rather basic grouping features it provides.  Of course, I had to take control of it and sort things the way I want.  If anything, though, the Start screen provides a better navigation and application launcher tool than the old Start menu. What I didn’t expect was the way that Metro enhances the productivity story.  As I write this, I’ve got my desktop open with a maximised Word window.  However, the desktop extends only across about 85% of the width of my screen.  On the left hand side, I have a column that displays the new Metro email client.  This is currently showing me a list of emails for my main work account.  I can flip easily between different accounts and read my email within that same column.  As I work on documents, I want to be able to monitor my inbox with a quick glance. The desktop, of course, has its own snap feature.  I could run the desktop full screen and bring up Outlook and Word side by side.  However, this doesn’t begin to approach the convenience of snapping the Metro email client.  Consider that when I snap a window on the desktop, it initially takes up 50% of the screen.  Outlook doesn’t really know anything about snap, and doesn’t adjust to make effective use of the limited screen estate.  Even at 50% screen width, it is difficult to use, so forget about trying to use it in a Metro fashion. In any case, I am left with the prospect of having to manually adjust everything to view my email effectively alongside Word.  Worse, there is nothing stopping another window from overlapping and obscuring my email.  
It becomes a struggle to keep sight of email as it arrives.  Of course, there is always ‘toast’ to notify me when things arrive, but if Outlook is obscured, this just feels intrusive. The beauty of the Metro snap feature is that my email reader now exists outside of my desktop.   The Metro app has been crafted to work well in the fixed width column as well as in full-screen.  It cannot be obscured by overlapping windows.  I still get notifications if I wish.  More importantly, it is clear that careful attention has been given to how things work when moving between applications when ‘snapped’.  If I decide, say to flick over to the Metro newsreader to catch up with current affairs, my desktop, rather than my email client, obligingly makes way for the reader.  With a simple gesture and click, or alternatively by pressing Windows-Tab, my desktop reappears. Another pleasant surprise is the way Windows 8 handles dual monitors.  It’s not just the fact that both screens now display the desktop task bar.  It’s that I can so easily move between Metro and the desktop on either screen.  I can only have Metro on one screen at a time which makes entire sense given the ‘full-screen’ nature of Metro apps.  Using dual monitors feels smoother and easier than previous versions of Windows. Overall then, I’m enjoying the Windows 8 improvements.  Strangely, for all the hype (“Windows reimagined”, etc.), my perception as a ‘productivity’ user is more one of evolution than revolution.  It all feels very familiar, but just better.

    Read the article

  • iPhone 3DES encryption key length issue

    - by Russell Hill
    Hi, I have been banging my head against a wall with this one. I need to code my iPhone application to encrypt a 4-digit "pin" using 3DES in ECB mode, for transmission to a web service which I believe is written in .NET.

        + (NSData *)TripleDESEncryptWithKey:(NSString *)key dataToEncrypt:(NSData *)encryptData
        {
            NSLog(@"kCCKeySize3DES=%d", kCCKeySize3DES);
            char keyBuffer[kCCKeySize3DES+1];        // room for terminator (unused)
            bzero(keyBuffer, sizeof(keyBuffer));     // fill with zeroes (for padding)
            [key getCString:keyBuffer maxLength:sizeof(keyBuffer) encoding:NSUTF8StringEncoding];

            size_t numBytesEncrypted = 0;
            size_t returnLength = ([encryptData length] + kCCBlockSize3DES) & ~(kCCBlockSize3DES - 1);
            // NSMutableData* returnBuffer = [NSMutableData dataWithLength:returnLength];
            char *returnBuffer = malloc(returnLength * sizeof(uint8_t)); // note: never freed below

            CCCryptorStatus ccStatus = CCCrypt(kCCEncrypt, kCCAlgorithm3DES, kCCOptionECBMode,
                                               keyBuffer, kCCKeySize3DES, nil,
                                               [encryptData bytes], [encryptData length],
                                               returnBuffer, returnLength, &numBytesEncrypted);

            if (ccStatus == kCCParamError) NSLog(@"PARAM ERROR");
            else if (ccStatus == kCCBufferTooSmall) NSLog(@"BUFFER TOO SMALL");
            else if (ccStatus == kCCMemoryFailure) NSLog(@"MEMORY FAILURE");
            else if (ccStatus == kCCAlignmentError) NSLog(@"ALIGNMENT");
            else if (ccStatus == kCCDecodeError) NSLog(@"DECODE ERROR");
            else if (ccStatus == kCCUnimplemented) NSLog(@"UNIMPLEMENTED");

            if (ccStatus == kCCSuccess) {
                NSLog(@"TripleDESEncryptWithKey encrypted: %@",
                      [NSData dataWithBytes:returnBuffer length:numBytesEncrypted]);
                return [NSData dataWithBytes:returnBuffer length:numBytesEncrypted];
            }
            else
                return nil;
        }

    I do get a value encrypted using the above code, however it does not match the value from the .NET web service. I believe the issue is that the encryption key I have been supplied by the web service developers is 48 characters long, while the iPhone SDK constant kCCKeySize3DES is 24. So I SUSPECT, but don't know, that the CommonCrypto API call is only using the first 24 characters of the supplied key. Is this correct? Is there ANY way I can get this to generate the correct encrypted pin? I have output the data bytes from the encryption PRIOR to base64-encoding them and attempted to match them against those generated from the .NET code (with the help of a .NET developer who sent the byte-array output to me). Neither the non-base64-encoded byte arrays nor the final base64-encoded strings match.
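
    For comparison, the .NET side of such an exchange typically looks something like the sketch below (an assumption about the web service, not its actual code). It also illustrates the key-length point: TripleDES expects a 24-byte key, so a 48-character key string only fits if it is hex-encoded and decoded to 24 bytes, rather than used as raw UTF-8 text.

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class TripleDesDemo
        {
            // Hypothetical 48-hex-character key (24 bytes once decoded).
            const string HexKey = "0123456789ABCDEFFEDCBA98765432101032547698BADCFE";

            static byte[] FromHex(string hex)
            {
                var bytes = new byte[hex.Length / 2];
                for (int i = 0; i < bytes.Length; i++)
                    bytes[i] = Convert.ToByte(hex.Substring(i * 2, 2), 16);
                return bytes;
            }

            static void Main()
            {
                byte[] key = FromHex(HexKey);                   // 24 bytes, as 3DES requires
                byte[] plain = Encoding.UTF8.GetBytes("1234");  // the 4-digit pin

                using (TripleDES tdes = TripleDES.Create())
                {
                    tdes.Key = key;
                    tdes.Mode = CipherMode.ECB;
                    tdes.Padding = PaddingMode.PKCS7;           // both sides must agree on padding too

                    using (ICryptoTransform enc = tdes.CreateEncryptor())
                    {
                        byte[] cipher = enc.TransformFinalBlock(plain, 0, plain.Length);
                        Console.WriteLine(Convert.ToBase64String(cipher));
                    }
                }
            }
        }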

    Read the article

  • OCR an RSA key fob (security token)

    - by user130582
    I put together a quick WinForm/embedded IE browser control which logs into our company's bank website each morning and scrapes/exports the desired deposit information (the bank is a smallish regional bank). Since we have a few dozen "pseudoaccounts" that draw from the same master account, this actually takes 10-15 minutes to retrieve.

    Anyway, the only problem is that our business bank account requires an RSA security token (http://www.rsa.com/node.aspx?id=1156) -- if you are not familiar, it is a small device which shows a random 6-digit number every 15(?) seconds, so I have to prompt for this value before starting. This is on top of the website's login-based security model, so even if you create a read-only account that can't do anything, you still have to put the RSA number in. We have 5 of these tokens for different people in the company. From our perspective this is nuisance security. I was joking about using a web camera to OCR the digits from the key fob so they didn't have to type it in -- mainly so that the scraping/export would be done before anyone arrives in the morning. Well, they asked if I could really do it.

    So now I ask you: how hard (how many hours) do you think it would take to OCR these digits reliably from a JPEG image produced by the camera? I already know I can get the JPEG easily. I think you get 3 tries to log in, so it really needs to hit a 99% accuracy rate. I could work on this in my off time, but they don't want me to put more than a few hours into it, so I want to leverage as much existing code as possible. This is a 7-segment display (like an alarm clock), so it's not exactly the kind of text an OCR package is used to seeing. Also, there is a countdown timer on the side of the display; typically when it is down to 1 bar, you wait until the next number appears and it starts over at 5 bars (like signal strength on your cell phone). So this would need to be OCR'd as well, but it is not text. Anyway, the more I think about it as I type this, the less convinced I am that I can truly get this right, so maybe I should just work on it in my spare time?
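
    Because the digits come from a 7-segment display, a full OCR engine may be overkill: each digit can be read by sampling the seven segment positions in a cropped, thresholded image and matching the on/off pattern. A rough C# sketch of that idea (using System.Drawing; the crop rectangle, threshold and sample points are assumptions you would calibrate against your own camera images):

        using System;
        using System.Collections.Generic;
        using System.Drawing;

        static class SevenSegmentReader
        {
            // Segment order: top, top-left, top-right, middle, bottom-left, bottom-right, bottom.
            static readonly Dictionary<string, char> Patterns = new Dictionary<string, char>
            {
                ["1110111"] = '0', ["0010010"] = '1', ["1011101"] = '2', ["1011011"] = '3',
                ["0111010"] = '4', ["1101011"] = '5', ["1101111"] = '6', ["1010010"] = '7',
                ["1111111"] = '8', ["1111011"] = '9',
            };

            // Sample points for each segment, as fractions of the digit's bounding box.
            static readonly (float x, float y)[] SamplePoints =
            {
                (0.5f, 0.08f), (0.15f, 0.28f), (0.85f, 0.28f), (0.5f, 0.5f),
                (0.15f, 0.72f), (0.85f, 0.72f), (0.5f, 0.92f),
            };

            // digitBox: bounding box of one digit inside an already-thresholded bitmap
            // (dark segments on a light background).
            public static char ReadDigit(Bitmap image, Rectangle digitBox, int darkThreshold = 96)
            {
                var pattern = new char[7];
                for (int i = 0; i < SamplePoints.Length; i++)
                {
                    int px = digitBox.Left + (int)(SamplePoints[i].x * digitBox.Width);
                    int py = digitBox.Top + (int)(SamplePoints[i].y * digitBox.Height);
                    Color c = image.GetPixel(px, py);
                    int brightness = (c.R + c.G + c.B) / 3;
                    pattern[i] = brightness < darkThreshold ? '1' : '0';  // dark pixel => segment lit
                }
                string key = new string(pattern);
                return Patterns.TryGetValue(key, out char digit) ? digit : '?';
            }
        }

    The countdown bars could be handled the same way, by sampling one point per bar. In practice most of the effort tends to go into cropping, deskewing and thresholding the camera image before the sampling step.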

    Read the article

  • BounceEase and silverlight 4 BarSeries

    - by Pharabus
    Hi, I am trying to get a bar series to "bounce" when drawing. I assumed the BounceEase TransitionEasingFunction would do this, but the bars just fade in. I have posted the XAML and code-behind below; does anyone know where I have gone wrong, or is it more complex than I thought? I am fairly new to Silverlight.

    XAML:

        <Grid x:Name="LayoutRoot" Background="White">
          <chartingToolkit:Chart x:Name="MyChart">
            <chartingToolkit:BarSeries Title="Sales" ItemsSource="{Binding}"
                                       IndependentValuePath="Name" DependentValuePath="Value"
                                       AnimationSequence="FirstToLast" TransitionDuration="00:00:3">
              <chartingToolkit:BarSeries.TransitionEasingFunction>
                <BounceEase EasingMode="EaseInOut" Bounciness="5" />
              </chartingToolkit:BarSeries.TransitionEasingFunction>
              <chartingToolkit:BarSeries.DataPointStyle>
                <Style TargetType="Control">
                  <Setter Property="Background" Value="Red"/>
                </Style>
              </chartingToolkit:BarSeries.DataPointStyle>
            </chartingToolkit:BarSeries>
            <chartingToolkit:Chart.Axes>
              <chartingToolkit:LinearAxis Title="Types owned" Orientation="X" Minimum="0" Maximum="300"
                                          Interval="10" ShowGridLines="True" FontStyle='Italic'/>
            </chartingToolkit:Chart.Axes>
          </chartingToolkit:Chart>
        </Grid>

    Code-behind:

        public class MyClass : DependencyObject
        {
            public string Name { get; set; }

            public Double Value
            {
                get { return (Double)GetValue(myValueProperty); }
                set { SetValue(myValueProperty, value); }
            }

            public static readonly DependencyProperty myValueProperty =
                DependencyProperty.Register("Value", typeof(Double), typeof(MyClass), null);
        }

        public MainPage()
        {
            InitializeComponent();
            // Get the data
            IList<MyClass> l = this.GetData();
            // Get a reference to the SL Chart
            MyChart.DataContext = l.OrderBy(e => e.Value);
            // Refresh the data every 5 seconds
            DispatcherTimer myDispatcherTimer = new DispatcherTimer();
            myDispatcherTimer.Interval = new TimeSpan(0, 0, 0, 5, 0); // 5 seconds
            myDispatcherTimer.Tick += new EventHandler(Each_Tick);
            myDispatcherTimer.Start();
        }

        public void Each_Tick(object o, EventArgs sender)
        {
            ((BarSeries)MyChart.Series[0]).DataContext = GetData();
        }

        private IList<MyClass> GetData()
        {
            Random random = new Random();
            return new List<MyClass>()
            {
                new MyClass() { Name = "Bob Zero",  Value = (random.NextDouble() * 100.0) },
                new MyClass() { Name = "Bob One",   Value = (random.NextDouble() * 100.0) },
                new MyClass() { Name = "Bob Two",   Value = (random.NextDouble() * 100.0) },
                new MyClass() { Name = "Bob Three", Value = (random.NextDouble() * 100.0) }
            };
        }

    Read the article

  • link_to passing parameter and display problem - tag feature - Ruby on Rails

    - by bgadoci
    I have gotten a great deal of help from KandadaBoggu on my last question and am very thankful for that. As we were getting buried in the comments, I wanted to break this part out. I am attempting to create a tag feature on the Rails blog I am developing. The relationship is Post has_many :tags and Tag belongs_to :post. Adding and deleting tags on posts works great. In my /view/posts/index.html.erb I have a section called tags where I am successfully querying the Tags table, grouping the rows and displaying the count next to the tag_name (as a side note, I mistakenly called the column containing the tag name 'tag_name' instead of just 'name' as I should have). In addition, each of these groups is displayed as a link that references the index method in the PostsController. That is where the problem is: when you navigate to /posts (without clicking a tag group link) you get an error because no parameter is being passed. I have the .empty? check in there, so I am not sure what is going wrong. Here are the error and code:

    Error:

        You have a nil object when you didn't expect it!
        You might have expected an instance of Array.
        The error occurred while evaluating nil.empty?

    /views/posts/index.html.erb:

        <% @tag_counts.each do |tag_name, tag_count| %>
          <tr>
            <td><%= link_to(tag_name, posts_path(:tag_name => tag_name)) %></td>
            <td>(<%= tag_count %>)</td>
          </tr>
        <% end %>

    PostsController:

        def index
          @tag_counts = Tag.count(:group => :tag_name, :order => 'updated_at DESC', :limit => 10)
          @posts = Post.all(:joins => :tags,
                            :conditions => (params[:tag_name].empty? ? {} : { :tags => { :tag_name => params[:tag_name] } }))
          respond_to do |format|
            format.html # index.html.erb
            format.xml  { render :xml => @posts }
            format.json { render :json => @posts }
            format.atom
          end
        end

    Read the article

  • spoj: runlength

    - by user285825
    For RLM problem of SPOJ: This is the problem: "Run-length encoding of a number replaces a run of digits (that is, a sequence of consecutive equivalent digits) with the number of digits followed by the digit itself. For example, 44455 would become 3425 (three fours, two fives). Note that run-length encoding does not necessarily shorten the length of the data: 11 becomes 21, and 42 becomes 1412. If a number has more than nine consecutive digits of the same type, the encoding is done greedily: each run grabs as many digits as it can, so 111111111111111 is encoded as 9161. Implement an integer arithmetic calculator that takes operands and gives results in run-length format. You should support addition, subtraction, multiplication, and division. You won't have to divide by zero or deal with negative numbers. Input/Output The input will consist of several test cases, one per line. For each test case, compute the run-length mathematics expression and output the original expression and the result, as shown in the examples. The (decimal) representation of all operands and results will fit in signed 64-bit integers." These are my testcases: input: 11 + 11 988726 - 978625 12 * 41 1124 / 1112 13 * 33 15 / 16 19222317121013161815142715181017 + 10 10 + 19222317121013161815142715181017 19222317121013161815142715181017 / 19222317121013161815142715181017 19222317121013161815142715181017 / 11 11 / 19222317121013161815142715181017 19222317121013161815142715181017 / 12 12 / 19222317121013161815142715181017 19222317121013161815142715181017 / 141621161816101118141217131817191014 141621161816101118141217131817191014 / 19222317121013161815142715181017 19222317121013161815142715181017 / 141621161816101118141217131817191013 141621161816101118141217131817191013 / 19222317121013161815142715181017 19222317121013161815142715181017 * 11 11 * 19222317121013161815142715181017 19222317121013161815142715181017 * 10 10 * 19222317121013161815142715181017 19222317121013161815142715181017 - 10 19222317121013161815142715181017 - 19222317121013161815142715181017 19222317121013161815142715181017 - 141621161816101118141217131817191014 19222317121013161815142715181017 - 141621161816101118141217131817191013 141621161816101118141217131817191013 + 141621161816101118141217131817191013 141621161816101118141217131817191013 + 141621161816101118141217131817191014 141621161816101118141217131817191014 + 141621161816101118141217131817191013 141621161816101118141217131817191014 + 10 10 + 141621161816101118141217131817191013 141621161816101118141217131817191013 + 11 11 + 141621161816101118141217131817191013 141621161816101118141217131817191013 * 12 12 * 141621161816101118141217131817191013 141621161816101118141217131817191014 - 141621161816101118141217131817191014 141621161816101118141217131817191013 - 141621161816101118141217131817191013 141621161816101118141217131817191013 - 10 141621161816101118141217131817191014 - 11 141621161816101118141217131817191014 - 141621161816101118141217131817191013 141621161816101118141217131817191014 / 141621161816101118141217131817191014 141621161816101118141217131817191014 / 141621161816101118141217131817191013 141621161816101118141217131817191013 / 141621161816101118141217131817191014 141621161816101118141217131817191013 / 141621161816101118141217131817191013 141621161816101118141217131817191014 * 11 11 * 141621161816101118141217131817191014 141621161816101118141217131817191014 / 11 11 / 141621161816101118141217131817191014 10 + 10 10 + 11 10 + 15 15 + 10 11 + 10 11 + 10 10 - 10 15 - 10 10 * 10 10 * 15 15 * 
10 10 / 111213 output: 11 + 11 = 12 988726 - 978625 = 919111 12 * 41 = 42 1124 / 1112 = 1112 13 * 33 = 39 15 / 16 = 10 19222317121013161815142715181017 + 10 = 19222317121013161815142715181017 10 + 19222317121013161815142715181017 = 19222317121013161815142715181017 19222317121013161815142715181017 / 19222317121013161815142715181017 = 11 19222317121013161815142715181017 / 11 = 19222317121013161815142715181017 11 / 19222317121013161815142715181017 = 10 19222317121013161815142715181017 / 12 = 141621161816101118141217131817191013 12 / 19222317121013161815142715181017 = 10 19222317121013161815142715181017 / 141621161816101118141217131817191014 = 11 141621161816101118141217131817191014 / 19222317121013161815142715181017 = 10 19222317121013161815142715181017 / 141621161816101118141217131817191013 = 12 141621161816101118141217131817191013 / 19222317121013161815142715181017 = 10 19222317121013161815142715181017 * 11 = 19222317121013161815142715181017 11 * 19222317121013161815142715181017 = 19222317121013161815142715181017 19222317121013161815142715181017 * 10 = 10 10 * 19222317121013161815142715181017 = 10 19222317121013161815142715181017 - 10 = 19222317121013161815142715181017 19222317121013161815142715181017 - 19222317121013161815142715181017 = 10 19222317121013161815142715181017 - 141621161816101118141217131817191014 = 141621161816101118141217131817191013 19222317121013161815142715181017 - 141621161816101118141217131817191013 = 141621161816101118141217131817191014 141621161816101118141217131817191013 + 141621161816101118141217131817191013 = 19222317121013161815142715181016 141621161816101118141217131817191013 + 141621161816101118141217131817191014 = 19222317121013161815142715181017 141621161816101118141217131817191014 + 141621161816101118141217131817191013 = 19222317121013161815142715181017 141621161816101118141217131817191014 + 10 = 141621161816101118141217131817191014 10 + 141621161816101118141217131817191013 = 141621161816101118141217131817191013 141621161816101118141217131817191013 + 11 = 141621161816101118141217131817191014 11 + 141621161816101118141217131817191013 = 141621161816101118141217131817191014 141621161816101118141217131817191013 * 12 = 19222317121013161815142715181016 12 * 141621161816101118141217131817191013 = 19222317121013161815142715181016 141621161816101118141217131817191014 - 141621161816101118141217131817191014 = 10 141621161816101118141217131817191013 - 141621161816101118141217131817191013 = 10 141621161816101118141217131817191013 - 10 = 141621161816101118141217131817191013 141621161816101118141217131817191014 - 11 = 141621161816101118141217131817191013 141621161816101118141217131817191014 - 141621161816101118141217131817191013 = 11 141621161816101118141217131817191014 / 141621161816101118141217131817191014 = 11 141621161816101118141217131817191014 / 141621161816101118141217131817191013 = 11 141621161816101118141217131817191013 / 141621161816101118141217131817191014 = 10 141621161816101118141217131817191013 / 141621161816101118141217131817191013 = 11 141621161816101118141217131817191014 * 11 = 141621161816101118141217131817191014 11 * 141621161816101118141217131817191014 = 141621161816101118141217131817191014 141621161816101118141217131817191014 / 11 = 141621161816101118141217131817191014 11 / 141621161816101118141217131817191014 = 10 10 + 10 = 10 10 + 11 = 11 10 + 15 = 15 15 + 10 = 15 11 + 10 = 11 11 + 10 = 11 10 - 10 = 10 15 - 10 = 15 10 * 10 = 10 10 * 15 = 10 15 * 10 = 10 10 / 111213 = 10 I am getting consistently wrong answer. 
I generated the above test cases trying to make them as representative as possible (boundary conditions, etc.). I am not sure how to test it further. Some guidance would be really appreciated.
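
    Since the tricky part of the problem is usually the conversion between ordinary integers and the run-length form (including the greedy "at most nine digits per run" rule), here is a small C# sketch of just that conversion, which can help generate and double-check test cases like the ones above (the arithmetic itself is not shown):

        using System;
        using System.Text;

        static class RunLength
        {
            // 44455 -> "3425"; 15 ones -> "9161" (greedy runs of at most nine digits).
            public static string Encode(long value)
            {
                string digits = value.ToString();
                var result = new StringBuilder();
                int i = 0;
                while (i < digits.Length)
                {
                    int runLength = 1;
                    while (i + runLength < digits.Length
                           && digits[i + runLength] == digits[i]
                           && runLength < 9)
                        runLength++;
                    result.Append(runLength).Append(digits[i]);
                    i += runLength;
                }
                return result.ToString();
            }

            // "3425" -> 44455. Assumes a well-formed encoding (pairs of count, digit).
            public static long Decode(string encoded)
            {
                var digits = new StringBuilder();
                for (int i = 0; i < encoded.Length; i += 2)
                {
                    int count = encoded[i] - '0';
                    digits.Append(encoded[i + 1], count);
                }
                return long.Parse(digits.ToString());
            }

            static void Main()
            {
                Console.WriteLine(Encode(44455));            // 3425
                Console.WriteLine(Encode(111111111111111));  // 9161
                Console.WriteLine(Decode("3425"));           // 44455
                Console.WriteLine(Decode("1412"));           // 42
            }
        }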

    Read the article

  • Geolocation SQL query not finding exact location

    - by Iridium52
    I have been testing my geolocation query for some time now and I haven't found any issues with it until now. I am trying to search for all cities within a given radius; often I search for the cities surrounding a city using that city's coords, but recently I tried searching around a city and found that the city itself was not returned.

    I have these cities as an excerpt in my database:

        city           latitude    longitude
        Saint-Mathieu  45.316708   -73.516253
        Saint-Édouard  45.233374   -73.516254
        Saint-Michel   45.233374   -73.566256
        Saint-Rémi     45.266708   -73.616257

    But when I run my query around the city of Saint-Rémi, with the following query...

        SELECT tblcity.city, tblcity.latitude, tblcity.longitude,
               truncate((degrees(acos(
                   sin(radians(tblcity.latitude)) * sin(radians(45.266708))
                   + cos(radians(tblcity.latitude)) * cos(radians(45.266708))
                   * cos(radians(tblcity.longitude - -73.616257)))) * 69.09 * 1.6), 1) as distance
        FROM tblcity
        HAVING distance < 10
        ORDER BY distance desc

    I get these results:

        city           latitude    longitude    distance
        Saint-Mathieu  45.316708   -73.516253   9.5
        Saint-Édouard  45.233374   -73.516254   8.6
        Saint-Michel   45.233374   -73.566256   5.3

    The town of Saint-Rémi is missing from the search. So I tried a modified query, hoping to get a better result:

        SELECT tblcity.city, tblcity.latitude, tblcity.longitude,
               truncate((6371 * acos(
                   cos(radians(45.266708)) * cos(radians(tblcity.latitude))
                   * cos(radians(tblcity.longitude) - radians(-73.616257))
                   + sin(radians(45.266708)) * sin(radians(tblcity.latitude)))), 1) AS distance
        FROM tblcity
        HAVING distance < 10
        ORDER BY distance desc

    But I get the same result... However, if I modify Saint-Rémi's coords slightly, by changing the last digit of the lat or long by 1, both queries will return Saint-Rémi. Also, if I center the query on any of the other cities above, the searched city is returned in the results.

    Can anyone shed some light on what may be causing my queries to not return the searched city of Saint-Rémi? I have added a sample of the table (with extra fields removed) below. I'm using MySQL 5.0.45, thanks in advance.

        CREATE TABLE `tblcity` (
          `IDCity` int(1) NOT NULL auto_increment,
          `City` varchar(155) NOT NULL default '',
          `Latitude` decimal(9,6) NOT NULL default '0.000000',
          `Longitude` decimal(9,6) NOT NULL default '0.000000',
          PRIMARY KEY (`IDCity`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=52743;

        INSERT INTO `tblcity` (`city`, `latitude`, `longitude`) VALUES
        ('Saint-Mathieu', 45.316708, -73.516253),
        ('Saint-Édouard', 45.233374, -73.516254),
        ('Saint-Michel', 45.233374, -73.566256),
        ('Saint-Rémi', 45.266708, -73.616257);
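
    As a side note for checking the numbers outside MySQL, the spherical law of cosines used above can be reproduced in a few lines of C#. One detail worth noting in such an implementation (an assumption about this case, not a confirmed diagnosis) is that when the two points are identical, floating-point rounding can push the acos argument slightly above 1, so it is common to clamp it:

        using System;

        static class GreatCircle
        {
            const double EarthRadiusKm = 6371.0;

            public static double DistanceKm(double lat1, double lon1, double lat2, double lon2)
            {
                double rLat1 = lat1 * Math.PI / 180.0;
                double rLat2 = lat2 * Math.PI / 180.0;
                double dLon = (lon2 - lon1) * Math.PI / 180.0;

                double cosAngle = Math.Sin(rLat1) * Math.Sin(rLat2)
                                + Math.Cos(rLat1) * Math.Cos(rLat2) * Math.Cos(dLon);

                // Clamp to [-1, 1] so identical points don't produce NaN from Acos.
                cosAngle = Math.Max(-1.0, Math.Min(1.0, cosAngle));

                return EarthRadiusKm * Math.Acos(cosAngle);
            }

            static void Main()
            {
                // Saint-Rémi against itself and against Saint-Michel (coords from the table above).
                Console.WriteLine(DistanceKm(45.266708, -73.616257, 45.266708, -73.616257)); // 0
                Console.WriteLine(DistanceKm(45.266708, -73.616257, 45.233374, -73.566256)); // roughly 5.4
            }
        }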

    Read the article

  • Search multiple datepicker on same grid

    - by DHF
    I'm using multiple datepicker on same grid and I face the problem to get a proper result. I used 3 datepicker in 1 grid. Only the first datepicker (Order Date)is able to output proper result while the other 2 datepicker (Start Date & End Date) are not able to generate proper result. There is no problem with the query, so could you find out what's going on here? Thanks in advance! php wrapper <?php ob_start(); require_once 'config.php'; // include the jqGrid Class require_once "php/jqGrid.php"; // include the PDO driver class require_once "php/jqGridPdo.php"; // include the datepicker require_once "php/jqCalendar.php"; // Connection to the server $conn = new PDO(DB_DSN,DB_USER,DB_PASSWORD); // Tell the db that we use utf-8 $conn->query("SET NAMES utf8"); // Create the jqGrid instance $grid = new jqGridRender($conn); // Write the SQL Query $grid->SelectCommand = "SELECT c.CompanyID, c.CompanyCode, c.CompanyName, c.Area, o.OrderCode, o.Date, m.maID ,m.System, m.Status, m.StartDate, m.EndDate, m.Type FROM company c, orders o, maintenance_agreement m WHERE c.CompanyID = o.CompanyID AND o.OrderID = m.OrderID "; // Set the table to where you update the data $grid->table = 'maintenance_agreement'; // set the ouput format to json $grid->dataType = 'json'; // Let the grid create the model $grid->setPrimaryKeyId('maID'); // Let the grid create the model $grid->setColModel(); // Set the url from where we obtain the data $grid->setUrl('grouping_ma_details.php'); // Set grid caption using the option caption $grid->setGridOptions(array( "sortable"=>true, "rownumbers"=>true, "caption"=>"Group by Maintenance Agreement", "rowNum"=>20, "height"=>'auto', "width"=>1300, "sortname"=>"maID", "hoverrows"=>true, "rowList"=>array(10,20,50), "footerrow"=>false, "userDataOnFooter"=>false, "grouping"=>true, "groupingView"=>array( "groupField" => array('CompanyName'), "groupColumnShow" => array(true), //show or hide area column "groupText" =>array('<b> Company Name: {0}</b>',), "groupDataSorted" => true, "groupSummary" => array(true) ) )); if(isset($_SESSION['login_admin'])) { $grid->addCol(array( "name"=>"Action", "formatter"=>"actions", "editable"=>false, "sortable"=>false, "resizable"=>false, "fixed"=>true, "width"=>60, "formatoptions"=>array("keys"=>true), "search"=>false ), "first"); } // Change some property of the field(s) $grid->setColProperty("CompanyID", array("label"=>"ID","hidden"=>true,"width"=>30,"editable"=>false,"editoptions"=>array("readonly"=>"readonly"))); $grid->setColProperty("CompanyName", array("label"=>"Company Name","hidden"=>true,"editable"=>false,"width"=>150,"align"=>"center","fixed"=>true)); $grid->setColProperty("CompanyCode", array("label"=>"Company Code","hidden"=>true,"width"=>50,"align"=>"center")); $grid->setColProperty("OrderCode", array("label"=>"Order Code","width"=>110,"editable"=>false,"align"=>"center","fixed"=>true)); $grid->setColProperty("maID", array("hidden"=>true)); $grid->setColProperty("System", array("width"=>150,"fixed"=>true,"align"=>"center")); $grid->setColProperty("Type", array("width"=>280,"fixed"=>true)); $grid->setColProperty("Status", array("width"=>70,"align"=>"center","edittype"=>"select","editoptions"=>array("value"=>"Yes:Yes;No:No"),"fixed"=>true)); $grid->setSelect('System', "SELECT DISTINCT System, System AS System FROM master_ma_system ORDER BY System", false, true, true, array(""=>"All")); $grid->setSelect('Type', "SELECT DISTINCT Type, Type AS Type FROM master_ma_type ORDER BY Type", false, true, true, array(""=>"All")); 
$grid->setColProperty("StartDate", array("label"=>"Start Date","width"=>120,"align"=>"center","fixed"=>true, "formatter"=>"date", "formatoptions"=>array("srcformat"=>"Y-m-d H:i:s","newformat"=>"d M Y") )); // this is only in this case since the orderdate is set as date time $grid->setUserTime("d M Y"); $grid->setUserDate("d M Y"); $grid->setDatepicker("StartDate",array("buttonOnly"=>false)); $grid->datearray = array('StartDate'); $grid->setColProperty("EndDate", array("label"=>"End Date","width"=>120,"align"=>"center","fixed"=>true, "formatter"=>"date", "formatoptions"=>array("srcformat"=>"Y-m-d H:i:s","newformat"=>"d M Y") )); // this is only in this case since the orderdate is set as date time $grid->setUserTime("d M Y"); $grid->setUserDate("d M Y"); $grid->setDatepicker("EndDate",array("buttonOnly"=>false)); $grid->datearray = array('EndDate'); $grid->setColProperty("Date", array("label"=>"Order Date","width"=>100,"editable"=>false,"align"=>"center","fixed"=>true, "formatter"=>"date", "formatoptions"=>array("srcformat"=>"Y-m-d H:i:s","newformat"=>"d M Y") )); // this is only in this case since the orderdate is set as date time $grid->setUserTime("d M Y"); $grid->setUserDate("d M Y"); $grid->setDatepicker("Date",array("buttonOnly"=>false)); $grid->datearray = array('Date'); // This command is executed after edit $maID = jqGridUtils::GetParam('maID'); $Status = jqGridUtils::GetParam('Status'); $StartDate = jqGridUtils::GetParam('StartDate'); $EndDate = jqGridUtils::GetParam('EndDate'); $Type = jqGridUtils::GetParam('Type'); // This command is executed immediatley after edit occur. $grid->setAfterCrudAction('edit', "UPDATE maintenance_agreement SET m.Status=?, m.StartDate=?, m.EndDate=?, m.Type=? WHERE m.maID=?", array($Status,$StartDate,$EndDate,$Type,$maID)); $selectorder = <<<ORDER function(rowid, selected) { if(rowid != null) { jQuery("#detail").jqGrid('setGridParam',{postData:{CompanyID:rowid}}); jQuery("#detail").trigger("reloadGrid"); // Enable CRUD buttons in navigator when a row is selected jQuery("#add_detail").removeClass("ui-state-disabled"); jQuery("#edit_detail").removeClass("ui-state-disabled"); jQuery("#del_detail").removeClass("ui-state-disabled"); } } ORDER; // We should clear the grid data on second grid on sorting, paging, etc. 
$cleargrid = <<<CLEAR function(rowid, selected) { // clear the grid data and footer data jQuery("#detail").jqGrid('clearGridData',true); // Disable CRUD buttons in navigator when a row is not selected jQuery("#add_detail").addClass("ui-state-disabled"); jQuery("#edit_detail").addClass("ui-state-disabled"); jQuery("#del_detail").addClass("ui-state-disabled"); } CLEAR; $grid->setGridEvent('onSelectRow', $selectorder); $grid->setGridEvent('onSortCol', $cleargrid); $grid->setGridEvent('onPaging', $cleargrid); $grid->setColProperty("Area", array("width"=>100,"hidden"=>false,"editable"=>false,"fixed"=>true)); $grid->setColProperty("HeadCount", array("label"=>"Head Count","align"=>"center", "width"=>100,"hidden"=>false,"fixed"=>true)); $grid->setSelect('Area', "SELECT DISTINCT AreaName, AreaName AS Area FROM master_area ORDER BY AreaName", false, true, true, array(""=>"All")); $grid->setSelect('CompanyName', "SELECT DISTINCT CompanyName, CompanyName AS CompanyName FROM company ORDER BY CompanyName", false, true, true, array(""=>"All")); $custom = <<<CUSTOM jQuery("#getselected").click(function(){ var selr = jQuery('#grid').jqGrid('getGridParam','selrow'); if(selr) { window.open('http://www.smartouch-cdms.com/order.php?CompanyID='+selr); } else alert("No selected row"); return false; }); CUSTOM; $grid->setJSCode($custom); // Enable toolbar searching $grid->toolbarfilter = true; $grid->setFilterOptions(array("stringResult"=>true,"searchOnEnter"=>false,"defaultSearch"=>"cn")); // Enable navigator $grid->navigator = true; // disable the delete operation programatically for that table $grid->del = false; // we need to write some custom code when we are in delete mode. // get the grid operation parameter to see if we are in delete mode // jqGrid sends the "oper" parameter to identify the needed action $deloper = $_POST['oper']; // det the company id $cid = $_POST['CompanyID']; // if the operation is del and the companyid is set if($deloper == 'del' && isset($cid) ) { // the two tables are linked via CompanyID, so let try to delete the records in both tables try { jqGridDB::beginTransaction($conn); $comp = jqGridDB::prepare($conn, "DELETE FROM company WHERE CompanyID= ?", array($cid)); $cont = jqGridDB::prepare($conn,"DELETE FROM contact WHERE CompanyID = ?", array($cid)); jqGridDB::execute($comp); jqGridDB::execute($cont); jqGridDB::commit($conn); } catch(Exception $e) { jqGridDB::rollBack($conn); echo $e->getMessage(); } } // Enable only deleting if(isset($_SESSION['login_admin'])) { $grid->setNavOptions('navigator', array("pdf"=>true, "excel"=>true,"add"=>false,"edit"=>true,"del"=>false,"view"=>true, "search"=>true)); } else $grid->setNavOptions('navigator', array("pdf"=>true, "excel"=>true,"add"=>false,"edit"=>false,"del"=>false,"view"=>true, "search"=>true)); // In order to enable the more complex search we should set multipleGroup option // Also we need show query roo $grid->setNavOptions('search', array( "multipleGroup"=>false, "showQuery"=>true )); // Set different filename $grid->exportfile = 'Company.xls'; // Close the dialog after editing $grid->setNavOptions('edit',array("closeAfterEdit"=>true,"editCaption"=>"Update Company","bSubmit"=>"Update","dataheight"=>"auto")); $grid->setNavOptions('add',array("closeAfterAdd"=>true,"addCaption"=>"Add New Company","bSubmit"=>"Update","dataheight"=>"auto")); $grid->setNavOptions('view',array("Caption"=>"View Company","dataheight"=>"auto","width"=>"1100")); ob_end_clean(); //solve TCPDF error // Enjoy $grid->renderGrid('#grid','#pager',true, null, null, 
true,true); $conn = null; ?> javascript code jQuery(document).ready(function ($) { jQuery('#grid').jqGrid({ "width": 1300, "hoverrows": true, "viewrecords": true, "jsonReader": { "repeatitems": false, "subgrid": { "repeatitems": false } }, "xmlReader": { "repeatitems": false, "subgrid": { "repeatitems": false } }, "gridview": true, "url": "session_ma_details.php", "editurl": "session_ma_details.php", "cellurl": "session_ma_details.php", "sortable": true, "rownumbers": true, "caption": "Group by Maintenance Agreement", "rowNum": 20, "height": "auto", "sortname": "maID", "rowList": [10, 20, 50], "footerrow": false, "userDataOnFooter": false, "grouping": true, "groupingView": { "groupField": ["CompanyName"], "groupColumnShow": [false], "groupText": ["<b> Company Name: {0}</b>"], "groupDataSorted": true, "groupSummary": [true] }, "onSelectRow": function (rowid, selected) { if (rowid != null) { jQuery("#detail").jqGrid('setGridParam', { postData: { CompanyID: rowid } }); jQuery("#detail").trigger("reloadGrid"); // Enable CRUD buttons in navigator when a row is selected jQuery("#add_detail").removeClass("ui-state-disabled"); jQuery("#edit_detail").removeClass("ui-state-disabled"); jQuery("#del_detail").removeClass("ui-state-disabled"); } }, "onSortCol": function (rowid, selected) { // clear the grid data and footer data jQuery("#detail").jqGrid('clearGridData', true); // Disable CRUD buttons in navigator when a row is not selected jQuery("#add_detail").addClass("ui-state-disabled"); jQuery("#edit_detail").addClass("ui-state-disabled"); jQuery("#del_detail").addClass("ui-state-disabled"); }, "onPaging": function (rowid, selected) { // clear the grid data and footer data jQuery("#detail").jqGrid('clearGridData', true); // Disable CRUD buttons in navigator when a row is not selected jQuery("#add_detail").addClass("ui-state-disabled"); jQuery("#edit_detail").addClass("ui-state-disabled"); jQuery("#del_detail").addClass("ui-state-disabled"); }, "datatype": "json", "colModel": [ { "name": "Action", "formatter": "actions", "editable": false, "sortable": false, "resizable": false, "fixed": true, "width": 60, "formatoptions": { "keys": true }, "search": false }, { "name": "CompanyID", "index": "CompanyID", "sorttype": "int", "label": "ID", "hidden": true, "width": 30, "editable": false, "editoptions": { "readonly": "readonly" } }, { "name": "CompanyCode", "index": "CompanyCode", "sorttype": "string", "label": "Company Code", "hidden": true, "width": 50, "align": "center", "editable": true }, { "name": "CompanyName", "index": "CompanyName", "sorttype": "string", "label": "Company Name", "hidden": true, "editable": false, "width": 150, "align": "center", "fixed": true, "edittype": "select", "editoptions": { "value": "Aquatex Industries:Aquatex Industries;Benithem Sdn Bhd:Benithem Sdn Bhd;Daily Bakery Sdn Bhd:Daily Bakery Sdn Bhd;Eurocor Asia Sdn Bhd:Eurocor Asia Sdn Bhd;Evergrown Technology:Evergrown Technology;Goldpar Precision:Goldpar Precision;MicroSun Technologies Asia:MicroSun Technologies Asia;NCI Industries Sdn Bhd:NCI Industries Sdn Bhd;PHHP Marketing:PHHP Marketing;Smart Touch Technology:Smart Touch Technology;THOSCO Treatech:THOSCO Treatech;YHL Trading (Johor) Sdn Bhd:YHL Trading (Johor) Sdn Bhd;Zenxin Agri-Organic Food:Zenxin Agri-Organic Food", "separator": ":", "delimiter": ";" }, "stype": "select", "searchoptions": { "value": ":All;Aquatex Industries:Aquatex Industries;Benithem Sdn Bhd:Benithem Sdn Bhd;Daily Bakery Sdn Bhd:Daily Bakery Sdn Bhd;Eurocor Asia Sdn Bhd:Eurocor Asia Sdn 
Bhd;Evergrown Technology:Evergrown Technology;Goldpar Precision:Goldpar Precision;MicroSun Technologies Asia:MicroSun Technologies Asia;NCI Industries Sdn Bhd:NCI Industries Sdn Bhd;PHHP Marketing:PHHP Marketing;Smart Touch Technology:Smart Touch Technology;THOSCO Treatech:THOSCO Treatech;YHL Trading (Johor) Sdn Bhd:YHL Trading (Johor) Sdn Bhd;Zenxin Agri-Organic Food:Zenxin Agri-Organic Food", "separator": ":", "delimiter": ";" } }, { "name": "Area", "index": "Area", "sorttype": "string", "width": 100, "hidden": true, "editable": false, "fixed": true, "edittype": "select", "editoptions": { "value": "Cemerlang:Cemerlang;Danga Bay:Danga Bay;Kulai:Kulai;Larkin:Larkin;Masai:Masai;Nusa Cemerlang:Nusa Cemerlang;Nusajaya:Nusajaya;Pasir Gudang:Pasir Gudang;Pekan Nenas:Pekan Nenas;Permas Jaya:Permas Jaya;Pontian:Pontian;Pulai:Pulai;Senai:Senai;Skudai:Skudai;Taman Gaya:Taman Gaya;Taman Johor Jaya:Taman Johor Jaya;Taman Molek:Taman Molek;Taman Pelangi:Taman Pelangi;Taman Sentosa:Taman Sentosa;Tebrau 4:Tebrau 4;Ulu Tiram:Ulu Tiram", "separator": ":", "delimiter": ";" }, "stype": "select", "searchoptions": { "value": ":All;Cemerlang:Cemerlang;Danga Bay:Danga Bay;Kulai:Kulai;Larkin:Larkin;Masai:Masai;Nusa Cemerlang:Nusa Cemerlang;Nusajaya:Nusajaya;Pasir Gudang:Pasir Gudang;Pekan Nenas:Pekan Nenas;Permas Jaya:Permas Jaya;Pontian:Pontian;Pulai:Pulai;Senai:Senai;Skudai:Skudai;Taman Gaya:Taman Gaya;Taman Johor Jaya:Taman Johor Jaya;Taman Molek:Taman Molek;Taman Pelangi:Taman Pelangi;Taman Sentosa:Taman Sentosa;Tebrau 4:Tebrau 4;Ulu Tiram:Ulu Tiram", "separator": ":", "delimiter": ";" } }, { "name": "OrderCode", "index": "OrderCode", "sorttype": "string", "label": "Order No.", "width": 110, "editable": false, "align": "center", "fixed": true }, { "name": "Date", "index": "Date", "sorttype": "date", "label": "Order Date", "width": 100, "editable": false, "align": "center", "fixed": true, "formatter": "date", "formatoptions": { "srcformat": "Y-m-d H:i:s", "newformat": "d M Y" }, "editoptions": { "dataInit": function(el) { setTimeout(function() { if (jQuery.ui) { if (jQuery.ui.datepicker) { jQuery(el).datepicker({ "disabled": false, "dateFormat": "dd M yy" }); jQuery('.ui-datepicker').css({ 'font-size': '75%' }); } } }, 100); } }, "searchoptions": { "dataInit": function(el) { setTimeout(function() { if (jQuery.ui) { if (jQuery.ui.datepicker) { jQuery(el).datepicker({ "disabled": false, "dateFormat": "dd M yy" }); jQuery('.ui-datepicker').css({ 'font-size': '75%' }); } } }, 100); } } }, { "name": "maID", "index": "maID", "sorttype": "int", "key": true, "hidden": true, "editable": true }, { "name": "System", "index": "System", "sorttype": "string", "width": 150, "fixed": true, "align": "center", "edittype": "select", "editoptions": { "value": "Payroll:Payroll;TMS:TMS;TMS & Payroll:TMS & Payroll", "separator": ":", "delimiter": ";" }, "stype": "select", "searchoptions": { "value": ":All;Payroll:Payroll;TMS:TMS;TMS & Payroll:TMS & Payroll", "separator": ":", "delimiter": ";" }, "editable": true }, { "name": "Status", "index": "Status", "sorttype": "string", "width": 70, "align": "center", "edittype": "select", "editoptions": { "value": "Yes:Yes;No:No" }, "fixed": true, "editable": true }, { "name": "StartDate", "index": "StartDate", "sorttype": "date", "label": "Start Date", "width": 120, "align": "center", "fixed": true, "formatter": "date", "formatoptions": { "srcformat": "Y-m-d H:i:s", "newformat": "d M Y" }, "editoptions": { "dataInit": function(el) { setTimeout(function() { if (jQuery.ui) { if 
(jQuery.ui.datepicker) { jQuery(el).datepicker({ "disabled": false, "dateFormat": "dd M yy" }); jQuery('.ui-datepicker').css({ 'font-size': '75%' }); } } }, 100); } }, "searchoptions": { "dataInit": function(el) { setTimeout(function() { if (jQuery.ui) { if (jQuery.ui.datepicker) { jQuery(el).datepicker({ "disabled": false, "dateFormat": "dd M yy" }); jQuery('.ui-datepicker').css({ 'font-size': '75%' }); } } }, 100); } }, "editable": true }, { "name": "EndDate", "index": "EndDate", "sorttype": "date", "label": "End Date", "width": 120, "align": "center", "fixed": true, "formatter": "date", "formatoptions": { "srcformat": "Y-m-d H:i:s", "newformat": "d M Y" }, "editoptions": { "dataInit": function(el) { setTimeout(function() { if (jQuery.ui) { if (jQuery.ui.datepicker) { jQuery(el).datepicker({ "disabled": false, "dateFormat": "dd M yy" }); jQuery('.ui-datepicker').css({ 'font-size': '75%' }); } } }, 100); } }, "searchoptions": { "dataInit": function(el) { setTimeout(function() { if (jQuery.ui) { if (jQuery.ui.datepicker) { jQuery(el).datepicker({ "disabled": false, "dateFormat": "dd M yy" }); jQuery('.ui-datepicker').css({ 'font-size': '75%' }); } } }, 100); } }, "editable": true }, { "name": "Type", "index": "Type", "sorttype": "string", "width": 530, "fixed": true, "edittype": "select", "editoptions": { "value": "Comprehensive MA:Comprehensive MA;FOC service, 20% spare part discount:FOC service, 20% spare part discount;Standard Package, FOC 1 time service, 20% spare part discount:Standard Package, FOC 1 time service, 20% spare part discount;Standard Package, FOC 2 time service, 20% spare part discount:Standard Package, FOC 2 time service, 20% spare part discount;Standard Package, FOC 3 time service, 20% spare part discount:Standard Package, FOC 3 time service, 20% spare part discount;Standard Package, FOC 4 time service, 20% spare part discount:Standard Package, FOC 4 time service, 20% spare part discount;Standard Package, FOC 6 time service, 20% spare part discount:Standard Package, FOC 6 time service, 20% spare part discount;Standard Package, no free:Standard Package, no free", "separator": ":", "delimiter": ";" }, "stype": "select", "searchoptions": { "value": ":All;Comprehensive MA:Comprehensive MA;FOC service, 20% spare part discount:FOC service, 20% spare part discount;Standard Package, FOC 1 time service, 20% spare part discount:Standard Package, FOC 1 time service, 20% spare part discount;Standard Package, FOC 2 time service, 20% spare part discount:Standard Package, FOC 2 time service, 20% spare part discount;Standard Package, FOC 3 time service, 20% spare part discount:Standard Package, FOC 3 time service, 20% spare part discount;Standard Package, FOC 4 time service, 20% spare part discount:Standard Package, FOC 4 time service, 20% spare part discount;Standard Package, FOC 6 time service, 20% spare part discount:Standard Package, FOC 6 time service, 20% spare part discount;Standard Package, no free:Standard Package, no free", "separator": ":", "delimiter": ";" }, "editable": true } ], "postData": { "oper": "grid" }, "prmNames": { "page": "page", "rows": "rows", "sort": "sidx", "order": "sord", "search": "_search", "nd": "nd", "id": "maID", "filter": "filters", "searchField": "searchField", "searchOper": "searchOper", "searchString": "searchString", "oper": "oper", "query": "grid", "addoper": "add", "editoper": "edit", "deloper": "del", "excel": "excel", "subgrid": "subgrid", "totalrows": "totalrows", "autocomplete": "autocmpl" }, "loadError": function(xhr, status, err) { try { 
jQuery.jgrid.info_dialog(jQuery.jgrid.errors.errcap, '<div class="ui-state-error">' + xhr.responseText + '</div>', jQuery.jgrid.edit.bClose, { buttonalign: 'right' } ); } catch(e) { alert(xhr.responseText); } }, "pager": "#pager" }); jQuery('#grid').jqGrid('navGrid', '#pager', { "edit": true, "add": false, "del": false, "search": true, "refresh": true, "view": true, "excel": true, "pdf": true, "csv": false, "columns": false }, { "drag": true, "resize": true, "closeOnEscape": true, "dataheight": "auto", "errorTextFormat": function (r) { return r.responseText; }, "closeAfterEdit": true, "editCaption": "Update Company", "bSubmit": "Update" }, { "drag": true, "resize": true, "closeOnEscape": true, "dataheight": "auto", "errorTextFormat": function (r) { return r.responseText; }, "closeAfterAdd": true, "addCaption": "Add New Company", "bSubmit": "Update" }, { "errorTextFormat": function (r) { return r.responseText; } }, { "drag": true, "closeAfterSearch": true, "multipleSearch": true }, { "drag": true, "resize": true, "closeOnEscape": true, "dataheight": "auto", "Caption": "View Company", "width": "1100" } ); jQuery('#grid').jqGrid('navButtonAdd', '#pager', { id: 'pager_excel', caption: '', title: 'Export To Excel', onClickButton: function (e) { try { jQuery("#grid").jqGrid('excelExport', { tag: 'excel', url: 'session_ma_details.php' }); } catch (e) { window.location = 'session_ma_details.php?oper=excel'; } }, buttonicon: 'ui-icon-newwin' }); jQuery('#grid').jqGrid('navButtonAdd', '#pager', { id: 'pager_pdf', caption: '', title: 'Export To Pdf', onClickButton: function (e) { try { jQuery("#grid").jqGrid('excelExport', { tag: 'pdf', url: 'session_ma_details.php' }); } catch (e) { window.location = 'session_ma_details.php?oper=pdf'; } }, buttonicon: 'ui-icon-print' }); jQuery('#grid').jqGrid('filterToolbar', { "stringResult": true, "searchOnEnter": false, "defaultSearch": "cn" }); jQuery("#getselected").click(function () { var selr = jQuery('#grid').jqGrid('getGridParam', 'selrow'); if (selr) { window.open('http://www.smartouch-cdms.com/order.php?CompanyID=' + selr); } else alert("No selected row"); return false; }); });

    Read the article

  • How to create a compiler in vb.net

    - by Cyclone
    Before answering this question, understand that I am not asking how to create my own programming language, I am asking how, using vb.net code, I can create a compiler for a language like vb.net itself. Essentially, the user inputs code, they get a .exe. By NO MEANS do I want to write my own language, as it seems other compiler related questions on here have asked. I also do not want to use the vb.net compiler itself, nor do I wish to duplicate the IDE. The exact purpose of what I wish to do is rather hard to explain, but all I need is a nudge in the right direction for writing a compiler (from scratch if possible) which can simply take input and create a .exe. I have opened .exe files as plain text before (my own programs) to see if I could derive some meaning from what I assumed would be human readable text, yet I was obviously sorely disappointed to see the random ascii, though it is understandable why this is all I found. I know that a .exe file is simply lines of code, being parsed by the computer it is on, but my question here really boils down to this: What code makes up a .exe? How could I go about making one in a plain text editor if I wanted to? (No, I do not want to do that, but if I understand the process my goals will be much easier to achieve.) What makes an executable file an executable file? Where does the logic of the code fit in? This is intended to be a programming question as opposed to a computer question, which is why I did not post it on SuperUser. I know plenty of information about the System.IO namespace, so I know how to create a file and write to it, I simply do not know what exactly I would be placing inside this file to get it to work as an executable file. I am sorry if this question is "confusing", "dumb", or "obvious", but I have not been able to find any information regarding the actual contents of an executable file anywhere. One of my google searches Something that looked promising EDIT: The second link here, while it looked good, was an utter fail. I am not going to waste hours of my time hitting keys and recording the results. "Use the "Alt" and the 3-digit combinations to create symbols that do not appear on the keyboard but that you need in the program." (step 4) How the heck do I know what symbols I need??? Thank you very much for your help, and my apologies if this question is a nooby or "bad" one. To sum this up simply: I want to create a program in vb.net that can compile code in a particular language to a single executable file. What methods exist that can allow me to do this, and if there are none, how can I go about writing my own from scratch?
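
    A rough sketch of the "back half" of that job, assuming the .NET Framework's Reflection.Emit API: once your own front end has parsed the input language, AssemblyBuilder writes the PE headers and CIL into a real .exe for you, so you never hand-craft the binary format yourself. Everything below (names, the hard-coded Hello World body) is illustrative only, not a full compiler.

        Imports System
        Imports System.Reflection
        Imports System.Reflection.Emit

        Module EmitSketch
            Sub Main()
                ' Define a dynamic assembly that can be saved to disk (.NET Framework only).
                Dim asmName As New AssemblyName("Generated")
                Dim asmBuilder As AssemblyBuilder = AppDomain.CurrentDomain.DefineDynamicAssembly(asmName, AssemblyBuilderAccess.Save)
                Dim modBuilder As ModuleBuilder = asmBuilder.DefineDynamicModule("Generated", "Generated.exe")

                ' One public type with one static Main method is enough for a console .exe.
                Dim typeBuilder As TypeBuilder = modBuilder.DefineType("Program", TypeAttributes.Public)
                Dim mainBuilder As MethodBuilder = typeBuilder.DefineMethod("Main", MethodAttributes.Public Or MethodAttributes.Static)

                ' This is where a real compiler would emit IL produced from the user's source.
                Dim il As ILGenerator = mainBuilder.GetILGenerator()
                il.Emit(OpCodes.Ldstr, "Hello from a generated executable")
                il.Emit(OpCodes.Call, GetType(Console).GetMethod("WriteLine", New Type() {GetType(String)}))
                il.Emit(OpCodes.Ret)

                typeBuilder.CreateType()
                asmBuilder.SetEntryPoint(mainBuilder)
                asmBuilder.Save("Generated.exe")   ' writes a runnable PE file to the current directory
            End Sub
        End Module

    If invoking the existing compiler were acceptable after all, System.CodeDom.Compiler (a VBCodeProvider with GenerateExecutable = True) does the whole source-to-.exe job in a few lines; the sketch above is simply the closer fit to "from scratch".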

    Read the article

  • help with making a password checker in java

    - by Cheesegraterr
    Hello, I am trying to make a program in Java that checks for three specific inputs. It has to be: 1. At least 7 characters. 2. Contain both upper and lower case alphabetic characters. 3. Contain at least 1 digit. So far I have been able to make it check if there are 7 characters, but I am having trouble with the last two. What should I put in my loop as an if statement to check for digits and for upper and lower case characters? Any help would be greatly appreciated. Here is what I have so far. import java.awt.*; import java.io.*; import java.util.StringTokenizer; public class passCheck { private static String getStrSys () { String myInput = null; //Store the String that is read in from the command line BufferedReader mySystem; //Buffer to store the input mySystem = new BufferedReader (new InputStreamReader (System.in)); //creates a connection to system input try { myInput = mySystem.readLine (); //reads in data from the console myInput = myInput.trim (); } catch (IOException e) //check { System.out.println ("IOException: " + e); return ""; } return myInput; //return the string to the main program } //**************************************** //main instructions go here //**************************************** static public void main (String[] args) { String pass; //the words the user inputs String temp = ""; //holds temp info int stringLength; //length of string boolean goodPass = false; System.out.print ("Please enter a password: "); //ask for words pass = getStrSys (); //get words from system temp = pass.toLowerCase (); stringLength = pass.length (); //find length of everything while (goodPass == false) { if (stringLength < 7) { System.out.println ("Your password must consist of at least 7 characters"); System.out.print ("Please enter a password: "); //ask for words pass = getStrSys (); stringLength = pass.length (); goodPass = false; } else if (something to check for digits) { } }
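
    For the two missing checks, a minimal sketch (a standalone class, not the poster's exact program) is below: walk the password once and use the Character helpers. Note that checking case after temp = pass.toLowerCase() will never find an upper-case letter, so the checks should run against the original string.

        // Hypothetical standalone example of the three rules; class and method names are illustrative.
        public class PasswordCheck {

            public static boolean isGoodPassword(String pass) {
                if (pass.length() < 7) {               // rule 1: at least 7 characters
                    return false;
                }
                boolean hasUpper = false, hasLower = false, hasDigit = false;
                for (int i = 0; i < pass.length(); i++) {
                    char c = pass.charAt(i);
                    if (Character.isUpperCase(c)) hasUpper = true;   // rule 2: upper case present
                    if (Character.isLowerCase(c)) hasLower = true;   // rule 2: lower case present
                    if (Character.isDigit(c))     hasDigit = true;   // rule 3: at least one digit
                }
                return hasUpper && hasLower && hasDigit;
            }

            public static void main(String[] args) {
                System.out.println(isGoodPassword("Passw0rd"));   // true
                System.out.println(isGoodPassword("password"));   // false: no upper case, no digit
            }
        }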

    Read the article

  • Why does a subTable break a4j:commandLink's reRender?

    - by Tom
    Here is a minimal rich:dataTable example with an a4j:commandLink inside. When clicked, it sends an AJAX request to my bean and reRenders the dataTable. <rich:dataTable id="dataTable" value="#{carManager.all}" var="item"> <rich:column> <f:facet name="header">name</f:facet> <h:outputText value="#{item.name}" /> </rich:column> <rich:column> <f:facet name="header">action</f:facet> <a4j:commandLink reRender="dataTable" value="Delete" action="#{carForm.delete}"> <f:setPropertyActionListener value="#{item.id}" target="#{carForm.id}" /> <f:param name="from" value="list" /> </a4j:commandLink> </rich:column> </rich:dataTable> The example above works fine so far. But when I add a rich:subTable (grouping the cars by garage for example) to the table, reRendering fails... <rich:dataTable id="dataTable" value="#{garageManager.all}" var="garage"> <f:facet name="header"> <rich:columnGroup> <rich:column>name</rich:column> <rich:column>action</rich:column> </rich:columnGroup> </f:facet> <rich:column colspan="2"> <h:outputText value="#{garage.name}" /> </rich:column> <rich:subTable value="#{garage.cars}" var="car"> <rich:column><h:outputText value="#{car.name}" /></rich:column> <rich:column> <a4j:commandLink reRender="dataTable" value="Delete" action="#{carForm.delete}"> <f:setPropertyActionListener value="#{item.id}" target="#{carForm.id}" /> <f:param name="from" value="list" /> </a4j:commandLink> </rich:column> </rich:subTable> </rich:dataTable> Now the rich:dataTable is not rerendered but the item gets deleted since the item does not show up after a manual page refresh. Why does subTable break support for reRender-ing here? Thanks, Tom
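
    Not a verified fix, but a workaround that is often suggested for RichFaces 3.x: instead of pointing reRender at the table itself, wrap the table in an a4j:outputPanel and reRender that panel, since links addressed from inside an iterating rich:subTable do not always resolve the table's client id cleanly. Also note that inside the subTable the row variable is car, so the #{item.id} in the second snippet probably needs to become #{car.id}. The ids below are illustrative only.

        <a4j:outputPanel id="carsPanel">
            <rich:dataTable id="dataTable" value="#{garageManager.all}" var="garage">
                <rich:column colspan="2">
                    <h:outputText value="#{garage.name}" />
                </rich:column>
                <rich:subTable value="#{garage.cars}" var="car">
                    <rich:column>
                        <h:outputText value="#{car.name}" />
                    </rich:column>
                    <rich:column>
                        <!-- reRender the wrapping panel, not the table, and use the subTable's var -->
                        <a4j:commandLink reRender="carsPanel" value="Delete" action="#{carForm.delete}">
                            <f:setPropertyActionListener value="#{car.id}" target="#{carForm.id}" />
                        </a4j:commandLink>
                    </rich:column>
                </rich:subTable>
            </rich:dataTable>
        </a4j:outputPanel>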

    Read the article

  • Parse int to string with stringstream

    - by SoulBeaver
    Well! I feel really stupid for this question, and I wholly don't mind if I get downvoted for this, but I guess I wouldn't be posting this if I had not at least made an earnest attempt at looking for the solution. I'm currently working on Euler Problem 4, finding the largest palindrome made from the product of two three-digit numbers [100..999]. As you might guess, I'm at the part where I have to work with the integer I made. I looked up a few sites and saw a few standard approaches for converting an Int to a String, one of which included stringstream. So my code looked like this: // tempTotal is my int value I want converted. void toString( int tempTotal, string &str ) { ostringstream ss; // C++ Standard compliant method. ss << tempTotal; str = ss.str(); // Overwrite referenced value of given string. } and the function calling it was: else { toString( tempTotal, store ); cout << loop1 << " x " << loop2 << "= " << store << endl; } So far, so good. I can't really see an error in what I've written, but the output gives me the address to something. It stays constant, so I don't really know what the program is doing there. Secondly, I tried .ToString(), string.valueOf( tempTotal ), (string)tempTotal, or simply store = tempTotal. All refused to work. When I simply tried doing an implicit cast with store = tempTotal, it didn't give me a value at all. When I tried checking output it literally printed nothing. I don't know if anything was copied into my string that simply isn't a printable character, or if the compiler just ignored it. I really don't know. So even though I feel this is a really, really lame question, I just have to ask: How do I convert that stupid integer to a string with the stringstream? The other tries are more or less irrelevant for me, I just really want to know why my stringstream solution isn't working.
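
    For reference, a self-contained version of the stringstream conversion that does compile and print the digits rather than an address; it returns the string instead of writing through a reference parameter. (If a C++11 compiler is available, std::to_string(tempTotal) does the same job in one call.)

        #include <iostream>
        #include <sstream>
        #include <string>

        // Convert an int to its decimal text form using an output string stream.
        std::string toString(int tempTotal) {
            std::ostringstream ss;
            ss << tempTotal;     // the stream formats the int as text
            return ss.str();     // copy the buffer out as a std::string
        }

        int main() {
            std::string store = toString(12321);
            std::cout << "store = " << store << std::endl;   // prints: store = 12321
            return 0;
        }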

    Read the article

  • Reading Source Code Aloud

    - by Jon Purdy
    After seeing this question, I got to thinking about the various challenges that blind programmers face, and how some of them are applicable even to sighted programmers. Particularly, the problem of reading source code aloud gives me pause. I have been programming for most of my life, and I frequently tutor fellow students in programming, most often in C++ or Java. It is uniquely aggravating to try to verbally convey the essential syntax of a C++ expression. The speaker must give either an idiomatic translation into English, or a full specification of the code in verbal longhand, using explicit yet slow terms such as "opening parenthesis", "bitwise and", et cetera. Neither of these solutions is optimal. On the one hand, an idiomatic translation is only useful to a programmer who can de-translate back into the relevant programming code—which is not usually the case when tutoring a student. In turn, education (or simply getting someone up to speed on a project) is the most common situation in which source is read aloud, and there is a very small margin for error. On the other hand, a literal specification is aggravatingly slow. It takes far far longer to say "pound, include, left angle bracket, iostream, right angle bracket, newline" than it does to simply type #include <iostream>. Indeed, most experienced C++ programmers would read this merely as "include iostream", but again, inexperienced programmers abound and literal specifications are sometimes necessary. So I've had an idea for a potential solution to this problem. In C++, there is a finite set of keywords—63—and operators—54, discounting named operators and treating compound assignment operators and prefix versus postfix auto-increment and decrement as distinct. There are just a few types of literal, a similar number of grouping symbols, and the semicolon. Unless I'm utterly mistaken, that's about it. So would it not then be feasible to simply ascribe a concise, unique pronunciation to each of these distinct concepts (including one for whitespace, where it is required) and go from there? Programming languages are far more regular than natural languages, so the pronunciation could be standardised. Speakers of any language would be able to verbally convey C++ code, and due to the regularity and fixity of the language, speech-to-text software could be optimised to accept C++ speech with a high degree of accuracy. So my question is twofold: first, is my solution feasible; and second, does anyone else have other potential solutions? I intend to take suggestions from here and use them to produce a formal paper with an example implementation of my solution.

    Read the article

  • SQLiteException and SQLite error near "(": syntax error with Subsonic ActiveRecord

    - by nvuono
    I ran into an interesting error with the following LiNQ query using LiNQPad and when using Subsonic 3.0.x w/ActiveRecord within my project and wanted to share the error and resolution for anyone else who runs into it. The linq statement below is meant to group entries in the tblSystemsValues collection into their appropriate system and then extract the system with the highest ID. from ksf in KeySafetyFunction where ksf.Unit == 2 && ksf.Condition_ID == 1 join sys in tblSystems on ksf.ID equals sys.KeySafetyFunction join xval in (from t in tblSystemsValues group t by t.tblSystems_ID into groupedT select new { sysId = groupedT.Key, MaxID = groupedT.Max(g=>g.ID), MaxText = groupedT.First(gt2 => gt2.ID == groupedT.Max(g=>g.ID)).TextValue, MaxChecked = groupedT.First(gt2 => gt2.ID == groupedT.Max(g=>g.ID)).Checked }) on sys.ID equals xval.sysId select new {KSFDesc=ksf.Description, sys.Description, xval.MaxText, xval.MaxChecked} On its own, the subquery for grouping into groupedT works perfectly and the query to match up KeySafetyFunctions with their System in tblSystems also works perfectly on its own. However, when trying to run the completed query in linqpad or within my project I kept running into a SQLiteException SQLite Error Near "(" First I tried splitting the queries up within my project because I knew that I could just run a foreach loop over the results if necessary. However, I continued to receive the same exception! I eventually separated the query into three separate parts before I realized that it was the lazy execution of the queries that was killing me. It then became clear that adding the .ToList() specifier after the myProtectedSystem query below was the key to avoiding the lazy execution after combining and optimizing the query and being able to get my results despite the problems I encountered with the SQLite driver. // determine the max Text/Checked values for each system in tblSystemsValue var myProtectedValue = from t in tblSystemsValue.All() group t by t.tblSystems_ID into groupedT select new { sysId = groupedT.Key, MaxID = groupedT.Max(g => g.ID), MaxText = groupedT.First(gt2 => gt2.ID ==groupedT.Max(g => g.ID)).TextValue, MaxChecked = groupedT.First(gt2 => gt2.ID ==groupedT.Max(g => g.ID)).Checked}; // get the system description information and filter by Unit/Condition ID var myProtectedSystem = (from ksf in KeySafetyFunction.All() where ksf.Unit == 2 && ksf.Condition_ID == 1 join sys in tblSystem.All() on ksf.ID equals sys.KeySafetyFunction select new {KSFDesc = ksf.Description, sys.Description, sys.ID}).ToList(); // finally join everything together AFTER forcing execution with .ToList() var joined = from protectedSys in myProtectedSystem join protectedVal in myProtectedValue on protectedSys.ID equals protectedVal.sysId select new {protectedSys.KSFDesc, protectedSys.Description, protectedVal.MaxChecked, protectedVal.MaxText}; // print the gratifying debug results foreach(var protectedItem in joined) { System.Diagnostics.Debug.WriteLine(protectedItem.Description + ", " + protectedItem.KSFDesc + ", " + protectedItem.MaxText + ", " + protectedItem.MaxChecked); }

    Read the article

  • jQuery Validation plugin: checkbox groups and error message issues

    - by boomturn
    I've put together a form using the jQuery Validation plugin, and all inputs are fine with working validation and error messages – except for checkboxes. I have two checkbox problems. The first is that the Validation plugin API doesn't seem to handle checkboxes in grouped contexts (I'm using fieldsets for grouping). Found several approaches to the issue here, including reference to a post by Rebecca Murphey for a more general case using a custom method and class. Adapting that to this situation: jQuery.validator.addMethod('required_group', function(val, el) { var fieldParent = $(el).closest('fieldset'); return fieldParent.find('.required_group:checked').length; }); jQuery.validator.addClassRules('required_group', { 'required_group': true }); jQuery.validator.messages.required_group = 'Please check at least one box.'; This sort of works, but produces error messages on every checkbox, and only removes them as each box is clicked. This is not an acceptable situation for the user, who can only get rid of them by clicking false positives. Ideally, I guess what's needed is something to prevent or eliminate extra messages before they are displayed and use errorPlacement to display a single error message in the parent fieldset, that would then be removed with a click on any checkbox. Less ideally, maybe they would all display but an event handler could turn off the full set of redundant messages with a click, which is what this approach offered by tvanfosson appears to do. (Another customized approach here, but I couldn't get it to work.) I guess I should also note this form requires the checkboxes to have different names. My second problem is that one of the fieldsets with checkboxes in the form also contains a nested fieldset of checkboxes under one of the outer checkboxes. So in addition to the first-level one-box-checked requirement, if the particular checkbox containing the second-level checkboxes is checked, then at least one of the second-level boxes must be checked. Not sure about the right approach; I'm guessing what needs to happen (following the above scheme) is that the trigger checkbox would use toggleClass to add/remove 'required_group' class to all the checkboxes in the subfield, which would then (hopefully) behave the same as the parent field: $("#triggerCheckbox").click(function () { $(this).find(":checkbox").toggleClass("required_group"); }); Any suggestions or ideas welcome. I'm well beyond my limited jQuery skills on this one and would be happy to hear that I missed simple, elegant and/or obvious ways to do this!
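
    A sketch of one way to collapse the per-checkbox messages into a single one, assuming the plugin's documented groups and errorPlacement options; the form id and the checkbox names ("optA optB optC") are placeholders for the real ones. The groups entry makes the named checkboxes share one error label, and errorPlacement drops that label at the end of the surrounding fieldset, where a click on any group member will clear it.

        // Hypothetical selector and field names; the two options shown are the point.
        jQuery('#myForm').validate({
            groups: {
                // all checkboxes of one fieldset share a single error message
                requiredGroup: 'optA optB optC'
            },
            errorPlacement: function (error, element) {
                if (element.is(':checkbox')) {
                    error.appendTo(element.closest('fieldset'));   // one label per group
                } else {
                    error.insertAfter(element);                    // plugin default elsewhere
                }
            }
        });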

    Read the article

  • Best Practices / Patterns for Enterprise Protection/Remediation of SSNs (Social Security Numbers)

    - by Erik Neu
    I am interested in hearing about enterprise solutions for SSN handling. (I looked pretty hard for any pre-existing post on SO, including reviewing the terrific SO automated "Related Questions" list, and did not find anything, so hopefully this is not a repeat.) First, I think it is important to enumerate the reasons systems/databases use SSNs: (note—these are reasons for de facto current state—I understand that many of them are not good reasons) Required for Interaction with External Entities. This is the most valid case—where external entities your system interfaces with require an SSN. This would typically be government, tax and financial. SSN is used to ensure system-wide uniqueness. SSN has become the default foreign key used internally within the enterprise, to perform cross-system joins. SSN is used for user authentication (e.g., log-on). The enterprise solution that seems optimum to me is to create a single SSN repository that is accessed by all applications needing to look up SSN info. This repository substitutes a globally unique, random 9-digit number (ASN) for the true SSN. I see many benefits to this approach. First of all, it is obviously highly backwards-compatible—all your systems "just" have to go through a major, synchronized, one-time data-cleansing exercise, where they replace the real SSN with the alternate ASN. Also, it is centralized, so it minimizes the scope for inspection and compliance. (Obviously, as a negative, it also creates a single point of failure.) This approach would solve issues 2 and 3, without ever requiring lookups to get the real SSN. For issue #1, authorized systems could provide an ASN, and be returned the real SSN. This would of course be done over secure connections, and the requesting systems would never persist the full SSN. Also, if the requesting system only needs the last 4 digits of the SSN, then that is all that would ever be passed. Issue #4 could be handled the same way as issue #1, though obviously the best thing would be to move away from having users supply an SSN for log-on. There are a couple of papers on this: UC Berkeley: http://bit.ly/bdZPjQ Oracle Vault: bit.ly/cikbi1
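
    Purely as an illustration of the repository idea (names invented, encryption and auditing left to the vault product), the core of such a service can be as small as one mapping table plus a narrow lookup:

        -- Hypothetical schema for the central ASN <-> SSN repository described above.
        CREATE TABLE ssn_vault (
            asn CHAR(9) NOT NULL PRIMARY KEY,  -- random alternate number handed out to internal systems
            ssn CHAR(9) NOT NULL UNIQUE        -- real SSN, ideally stored encrypted at rest
        );

        -- Authorized callers present an ASN and receive only what they are entitled to,
        -- for example the last four digits (RIGHT() as in SQL Server / MySQL).
        SELECT RIGHT(ssn, 4) AS ssn_last4
        FROM ssn_vault
        WHERE asn = '123456789';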

    Read the article

  • SQL Select - adding field to Select is changing the results

    - by nycdan
    I'm stumped by this SQL problem that I suspect will be easy pickings for someone out there. I have a table that contains rows representing several daily lists of ranked items. The relevant fields are as follows: ID, ListID, ItemID, ItemName, ItemRank, Date. I have a query that returns the items that were on a list yesterday but not today (Items Off List) as follows: Select ItemID, ListID, ItemName, convert(varchar(10),MAX(date),101) as date, COUNT(ItemName) as days_on_list From Table Group By ItemID, ListID, ItemName Having Max(date) = DATEADD("d",-1,convert(varchar(10),getdate(),101)) and ListID = 1 Order By ListID, ItemName, COUNT(ItemName) Basically I'm looking for records where the max date is yesterday. It works fine and shows the number of days each item was previously on the list (although not necessarily consecutively, but that's fine for now). The problem is when I try to add ranking to see what yesterday's rank was. I tried the following: Select ItemID, ListID, ItemName, ranking, convert(varchar(10),MAX(date),101) as date, COUNT(ItemName) as days_on_list From Table Group By ItemID, ListID, ItemName, ranking Having Max(date) = DATEADD("d",-1,convert(varchar(10),getdate(),101)) and ListID = 1 Order By ListID, ItemName, ranking, COUNT(ItemName) This returns a great deal more records than the previous query so something isn't right with it. I want the same number of records, but with the ranking included. I can get the rank by doing a self-join with a subquery and getting records where the ItemID occurs yesterday but not today - but then I don't know how to get the Count any more. Appreciation in advance for any help with this. ======== SOLVED ============== Select ItemID, ListID, ItemName, ranking, convert(varchar(10),MAX(date),101) as date, COUNT(ItemName) as days_on_list from Table T Where date = DATEADD("d",-1,convert(varchar(10),getdate(),101)) and ListID = 1 and T.ItemID Not In (select T.ItemID from Table T join Table T2 on T.ItemID = T2.ItemID and T.ListID = T2.ListID where T.date = DATEADD("d",-1,convert(varchar(10),getdate(),101)) and T2.date = convert (varchar(10),getdate(),101) and T.ListID = 1) Group by ItemID, ListID, ItemName, ranking Basically, what I did was create a subquery that finds all items that appear in both days, and finds items that appeared yesterday but are not in the set of items that appeared both days. Then I was able to do the aggregate function and grouping correctly. I would NOT be surprised if this is more convoluted than necessary but I understand it and can modify it as needed and performance doesn't seem to be an issue. Thanks everyone for the assist.

    Read the article

  • Can parser combinators be made efficient?

    - by Jon Harrop
    Around 6 years ago, I benchmarked my own parser combinators in OCaml and found that they were ~5× slower than the parser generators on offer at the time. I recently revisited this subject and benchmarked Haskell's Parsec vs a simple hand-rolled precedence climbing parser written in F# and was surprised to find the F# to be 25× faster than the Haskell. Here's the Haskell code I used to read a large mathematical expression from file, parse and evaluate it: import Control.Applicative import Text.Parsec hiding ((<|>)) expr = chainl1 term ((+) <$ char '+' <|> (-) <$ char '-') term = chainl1 fact ((*) <$ char '*' <|> div <$ char '/') fact = read <$> many1 digit <|> char '(' *> expr <* char ')' eval :: String -> Int eval = either (error . show) id . parse expr "" . filter (/= ' ') main :: IO () main = do file <- readFile "expr" putStr $ show $ eval file putStr "\n" and here's my self-contained precedence climbing parser in F#: let rec (|Expr|) (P(f, xs)) = Expr(loop (' ', f, xs)) and shift oop f op (P(g, xs)) = let h, xs = loop (op, g, xs) loop (oop, f h, xs) and loop = function | ' ' as oop, f, ('+' | '-' as op)::P(g, xs) | (' ' | '+' | '-' as oop), f, ('*' | '/' as op)::P(g, xs) | oop, f, ('^' as op)::P(g, xs) -> let h, xs = loop (op, g, xs) let op = match op with | '+' -> (+) | '-' -> (-) | '*' -> (*) | '/' -> (/) | '^' -> pown loop (oop, op f h, xs) | _, f, xs -> f, xs and (|P|) = function | '-'::P(f, xs) -> let f, xs = loop ('~', f, xs) P(-f, xs) | '('::Expr(f, ')'::xs) -> P(f, xs) | c::xs when '0' <= c && c <= '9' -> P(int(string c), xs) My impression is that even state-of-the-art parser combinators waste a lot of time back tracking. Is that correct? If so, is it possible to write parser combinators that generate state machines to obtain competitive performance or is it necessary to use code generation?

    Read the article

  • LinqToXML why does my object go out of scope? Also should I be doing a group by?

    - by Kettenbach
    Hello All, I have an IEnumerable<someClass>. I need to transform it into XML. There is a property called 'ZoneId'. I need to write some XML based on this property, then I need some descendant elements that provide data relevant to the ZoneId. I know I need some type of grouping. Here's what I have attempted thus far without much success. **inventory is an IEnumerable<someClass>. So I query inventory for unique zones. This works ok. var zones = inventory.Select(c => new { ZoneID = c.ZoneId , ZoneName = c.ZoneName , Direction = c.Direction }).Distinct(); Now I want to create xml based on zones and place. ***place is a property of 'someClass'. var xml = new XElement("MSG_StationInventoryList" , new XElement("StationInventory" , zones.Select(station => new XElement("station-id", station.ZoneID) , new XElement("station-name", station.ZoneName)))); This does not compile as "station" is out of scope when I try to add the "station-name" element. However if I remove the paren after 'ZoneId', station is in scope and I retrieve the station-name. Only problem is the element is then a descendant of 'station-id'. This is not the desired output. They should be siblings. What am I doing wrong? Lastly after the "station-name" element, I will need another complex type which is a collection. Call it "places". It will have child elements called "place". Its data will come from the IEnumerable and I will only want "places" that have the "ZoneId" for the current zone. Can anyone point me in the right direction? Is it a mistake to select distinct zones from the original IEnumerable? This object has all the data I need within it. I just need to make it hierarchical. Thanks for any pointers all. Cheers, Chris in San Diego
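
    The scope problem is just parenthesis placement: the closing paren after ZoneID ends the Select lambda, so the second new XElement is passed to the outer element instead. A sketch of the intended shape, keeping both children inside the lambda and wrapping each zone in its own element so they stay siblings, is below; it assumes inventory is still in scope and that the per-zone places come from filtering it (the Place property name is a guess).

        // Replaces the poster's "var xml = ..." statement; element names beyond the two
        // given in the question are illustrative.
        var xml = new XElement("MSG_StationInventoryList",
            new XElement("StationInventory",
                zones.Select(station =>
                    new XElement("station",
                        new XElement("station-id", station.ZoneID),
                        new XElement("station-name", station.ZoneName),
                        new XElement("places",
                            inventory
                                .Where(i => i.ZoneId == station.ZoneID)
                                .Select(i => new XElement("place", i.Place)))))));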

    Read the article

  • Complex SQL query with group by and two rows in one

    - by Ricket
    Okay, I need help. I'm usually pretty good at SQL queries but this one baffles me. By the way, this is not a homework assignment, it's a real situation in an Access database and I've written the requirements below myself. Here is my table layout. It's in Access 2007 if that matters; I'm writing the query using SQL. Id (primary key) PersonID (foreign key) EventDate NumberOfCredits SuperCredits (boolean) There are events that people go to. They can earn normal credits, or super credits, or both at one event. The SuperCredits column is true if the row represents a number of super credits earned at the event, or false if it represents normal credits. So for example, if there is an event which person 174 attends, and they earn 3 normal credits and 1 super credit at the event, the following two rows would be added to the table: ID PersonID EventDate NumberOfCredits SuperCredits 1 174 1/1/2010 3 false 2 174 1/1/2010 1 true It is also possible that the person could have done two separate things at the event, so there might be more than two rows for one event, and it might look like this: ID PersonID EventDate NumberOfCredits SuperCredits 1 174 1/1/2010 1 false 2 174 1/1/2010 2 false 3 174 1/1/2010 1 true Now we want to print out a report. Here will be the columns of the report: PersonID LastEventDate NumberOfNormalCredits NumberOfSuperCredits The report will have one row per person. The row will show the latest event that the person attended, and the normal and super credits that the person earned at that event. What I am asking of you is to write, or help me write, the SQL query to SELECT the data and GROUP BY and SUM() and whatnot. Or, let me know if this is for some reason not possible, and how to organize my data to make it possible. This is extremely confusing and I understand if you do not take the time to puzzle through it. I've tried to simplify it as much as possible, but definitely ask any questions if you give it a shot and need clarification. I'll be trying to figure it out but I'm having a real hard time with it, this is grouping beyond my experience...
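
    One way to express that report in Access SQL, sketched under the assumption that the table is called Credits: a subquery picks each person's latest EventDate, and conditional sums with IIF split that event's rows into normal and super credits.

        -- Table name Credits is assumed; column names follow the layout in the question.
        SELECT t.PersonID,
               t.EventDate AS LastEventDate,
               SUM(IIF(t.SuperCredits = False, t.NumberOfCredits, 0)) AS NumberOfNormalCredits,
               SUM(IIF(t.SuperCredits = True,  t.NumberOfCredits, 0)) AS NumberOfSuperCredits
        FROM Credits AS t
        INNER JOIN (SELECT PersonID, MAX(EventDate) AS LastDate
                    FROM Credits
                    GROUP BY PersonID) AS latest
                ON t.PersonID = latest.PersonID
               AND t.EventDate = latest.LastDate
        GROUP BY t.PersonID, t.EventDate;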

    Read the article

  • Using MySQL to generate daily sales reports with filled gaps, grouped by currency

    - by Shane O'Grady
    I'm trying to create what I think is a relatively basic report for an online store, using MySQL 5.1.45 The store can receive payment in multiple currencies. I have created some sample tables with data and am trying to generate a straightforward tabular result set grouped by date and currency so that I can graph these figures. I want to see each currency that is available per date, with a 0 in the result if there were no sales in that currency for that day. If I can get that to work I want to do the same but also grouped by product id. In the sample data I have provided there are only 3 currencies and 2 product ids, but in practice there can be any number of each. I can correctly group by date, but then when I add a grouping by currency my query does not return what I want. I based my work off this article. My reporting query, grouped only by date: SELECT calendar.datefield AS date, IFNULL(SUM(orders.order_value),0) AS total_value FROM orders RIGHT JOIN calendar ON (DATE(orders.order_date) = calendar.datefield) WHERE (calendar.datefield BETWEEN (SELECT MIN(DATE(order_date)) FROM orders) AND (SELECT MAX(DATE(order_date)) FROM orders)) GROUP BY date Now grouped by date and currency: SELECT calendar.datefield AS date, orders.currency_id, IFNULL(SUM(orders.order_value),0) AS total_value FROM orders RIGHT JOIN calendar ON (DATE(orders.order_date) = calendar.datefield) WHERE (calendar.datefield BETWEEN (SELECT MIN(DATE(order_date)) FROM orders) AND (SELECT MAX(DATE(order_date)) FROM orders)) GROUP BY date, orders.currency_id The results I am getting (grouped by date and currency): +------------+-------------+-------------+ | date | currency_id | total_value | +------------+-------------+-------------+ | 2009-08-15 | 3 | 81.94 | | 2009-08-15 | 45 | 25.00 | | 2009-08-15 | 49 | 122.60 | | 2009-08-16 | NULL | 0.00 | | 2009-08-17 | 45 | 25.00 | | 2009-08-17 | 49 | 122.60 | | 2009-08-18 | 3 | 81.94 | | 2009-08-18 | 49 | 245.20 | +------------+-------------+-------------+ The results I want: +------------+-------------+-------------+ | date | currency_id | total_value | +------------+-------------+-------------+ | 2009-08-15 | 3 | 81.94 | | 2009-08-15 | 45 | 25.00 | | 2009-08-15 | 49 | 122.60 | | 2009-08-16 | 3 | 0.00 | | 2009-08-16 | 45 | 0.00 | | 2009-08-16 | 49 | 0.00 | | 2009-08-17 | 3 | 0.00 | | 2009-08-17 | 45 | 25.00 | | 2009-08-17 | 49 | 122.60 | | 2009-08-18 | 3 | 81.94 | | 2009-08-18 | 45 | 0.00 | | 2009-08-18 | 49 | 245.20 | +------------+-------------+-------------+ The schema and data I am using in my tests: CREATE TABLE orders ( id INT PRIMARY KEY AUTO_INCREMENT, order_date DATETIME, order_id INT, product_id INT, currency_id INT, order_value DECIMAL(9,2), customer_id INT ); INSERT INTO orders (order_date, order_id, product_id, currency_id, order_value, customer_id) VALUES ('2009-08-15 10:20:20', '123', '1', '45', '12.50', '322'), ('2009-08-15 12:30:20', '124', '1', '49', '122.60', '400'), ('2009-08-15 13:41:20', '125', '1', '3', '40.97', '324'), ('2009-08-15 10:20:20', '126', '2', '45', '12.50', '345'), ('2009-08-15 13:41:20', '131', '2', '3', '40.97', '756'), ('2009-08-17 10:20:20', '3234', '1', '45', '12.50', '1322'), ('2009-08-17 10:20:20', '4642', '2', '45', '12.50', '1345'), ('2009-08-17 12:30:20', '23', '2', '49', '122.60', '3142'), ('2009-08-18 12:30:20', '2131', '1', '49', '122.60', '4700'), ('2009-08-18 13:41:20', '4568', '1', '3', '40.97', '3274'), ('2009-08-18 12:30:20', '956', '2', '49', '122.60', '3542'), ('2009-08-18 13:41:20', '443', '2', '3', '40.97', '7556'); CREATE 
TABLE currency ( id INT PRIMARY KEY, name VARCHAR(255) ); INSERT INTO currency (id, name) VALUES (3, 'Euro'), (45, 'US Dollar'), (49, 'CA Dollar'); CREATE TABLE calendar (datefield DATE); DELIMITER | CREATE PROCEDURE fill_calendar(start_date DATE, end_date DATE) BEGIN DECLARE crt_date DATE; SET crt_date=start_date; WHILE crt_date < end_date DO INSERT INTO calendar VALUES(crt_date); SET crt_date = ADDDATE(crt_date, INTERVAL 1 DAY); END WHILE; END | DELIMITER ; CALL fill_calendar('2008-01-01', '2011-12-31');
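
    A sketch of the usual fix, using the sample tables above: build the full date-by-currency grid first with a CROSS JOIN between calendar and currency, then LEFT JOIN the orders onto that grid so combinations with no sales still appear with a 0 total. Grouping by product as well would just mean cross-joining a product list into the same grid.

        SELECT calendar.datefield AS date,
               currency.id        AS currency_id,
               IFNULL(SUM(orders.order_value), 0) AS total_value
        FROM calendar
        CROSS JOIN currency
        LEFT JOIN orders
               ON DATE(orders.order_date) = calendar.datefield
              AND orders.currency_id = currency.id
        WHERE calendar.datefield BETWEEN (SELECT MIN(DATE(order_date)) FROM orders)
                                     AND (SELECT MAX(DATE(order_date)) FROM orders)
        GROUP BY calendar.datefield, currency.id
        ORDER BY date, currency_id;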

    Read the article

< Previous Page | 45 46 47 48 49 50 51 52 53 54  | Next Page >