Search Results

Search found 757 results on 31 pages for 'grouping'.


  • Columnstore Case Study #2: Columnstore faster than SSAS Cube at DevCon Security

    - by aspiringgeek
    Preamble
    This is the second in a series of posts documenting big wins encountered using columnstore indexes in SQL Server 2012 & 2014. Many of these can be found in my big deck along with details such as internals, best practices, caveats, etc. The purpose of sharing the case studies in this context is to provide an easy-to-consume quick-reference alternative. See also Columnstore Case Study #1: MSIT SONAR Aggregations.
    Why Columnstore?
    As stated previously, if we’re looking for a subset of columns from one or a few rows, given the right indexes, SQL Server can do a superlative job of providing an answer. If we’re asking a question which by design needs to hit lots of rows—DW, reporting, aggregations, grouping, scans, etc.—SQL Server has never had a good mechanism—until columnstore. Columnstore indexes were introduced in SQL Server 2012. However, they're still largely unknown. Some adoption blockers existed; yet columnstore was nonetheless a game changer for many apps. In SQL Server 2014, potential blockers have been largely removed & they're going to profoundly change the way we interact with our data. The purpose of this series is to share the performance benefits of columnstore & to document why columnstore is a compelling reason to upgrade to SQL Server 2014.
    The Customer
    DevCon Security provides home & business security services & has been in business for 135 years. I met DevCon personnel while speaking to the Utah County SQL User Group on 20 February 2012. (Thanks to TJ Belt (b|@tjaybelt) & Ben Miller (b|@DBADuck) for the invitation which serendipitously coincided with the height of ski season.)
    The App: DevCon Security Reporting: Optimized & Ad Hoc Queries
    DevCon users interrogate a SQL Server 2012 Analysis Services cube via SSRS. In addition, the SQL Server 2012 relational back end is the target of ad hoc queries; this DW back end is refreshed nightly during a brief maintenance window via conventional table partition switching.
    SSRS, SSAS, & MDX
    Conventional relational structures were unable to provide adequate performance for user interaction for the SSRS reports. An SSAS solution was implemented, requiring personnel to ramp up technically, including learning enough MDX to satisfy requirements.
    Ad Hoc Queries
    Even though the fact table is relatively small—only 22 million rows & 33GB—the table was a typical DW table in terms of its width: 137 columns, any of which could be the target of ad hoc interrogation. As is common in DW reporting scenarios such as this, it is often nearly impossible to optimize for such queries using conventional indexing. DevCon DBAs & developers attended PASS 2012 & were introduced to the marvels of columnstore in a session presented by Klaus Aschenbrenner (b|@Aschenbrenner).
    The Details
    Classic vs. columnstore before-&-after metrics are impressive.

        Scenario        Conventional Structures          Columnstore      Improvement
        SSRS via SSAS   10 - 12 seconds                  1 second         >10x
        Ad Hoc          5 - 7 minutes (300 - 420 secs)   1 - 2 seconds    >100x

    Here are two charts characterizing this data graphically. The first is a linear representation of Report Duration (in seconds) for Conventional Structures vs. Columnstore Indexes. As is so often the case when we chart such significant deltas, the linear scale doesn’t expose some of the dramatically improved values corresponding to the columnstore metrics. Just to make it fair, here’s the same data represented logarithmically; yet even here the values corresponding to 1 - 2 seconds aren’t visible.
    The Wins
    Performance: Even prior to columnstore implementation, at 10 - 12 seconds, canned report performance against the SSAS cube was tolerable. Yet the 1 second performance afterward is clearly better. As significant as that is, imagine the user experience re: ad hoc interrogation. The difference between several minutes vs. one or two seconds is a game changer, literally changing the way users interact with their data—no mental context switching, no wondering when the results will appear, no preoccupation with the spinning mind-numbing hurry-up-&-wait indicators. As we’ve commonly found elsewhere, columnstore indexes here provided performance improvements of one, two, or more orders of magnitude.
    Simplified Infrastructure: Because in this case a nonclustered columnstore index on a conventional DW table was faster than an Analysis Services cube, the entire SSAS infrastructure was rendered superfluous & was retired.
    PASS Rocks: Once again, the value of attending PASS is proven out. The trip to Charlotte combined with eager & enquiring minds led directly to this success story. Find out more about the next PASS Summit here, hosted this year in Seattle on November 4 - 7, 2014.
    DevCon BI Team Lead Nathan Allan provided this unsolicited feedback: “What we found was pretty awesome. It has been a game changer for us in terms of the flexibility we can offer people that would like to get to the data in different ways.”
    Summary
    For DW, reports, & other BI workloads, columnstore often provides significant performance enhancements relative to conventional indexing. I have documented here, in the second of a series of reports on columnstore implementations, results from DevCon Security, a live customer production app for which performance increased by factors of 10x to 100x for all report queries, including canned reports, while results for ad hoc queries went from 5 - 7 minutes to 1 - 2 seconds. As a result of columnstore performance, the customer retired their SSAS infrastructure. I invite you to consider leveraging columnstore in your own environment. Let me know if you have any questions.
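    For readers who want to experiment with the same pattern, the DDL involved is small. The sketch below is illustrative only (the table and column names are invented, not DevCon’s schema), and it assumes SQL Server 2012 or later, where a nonclustered columnstore index makes its base table read-only until the index is disabled or new data is switched in by partition, which fits the nightly partition-switch refresh described above.

        -- Hypothetical DW fact table; object names are for illustration only.
        CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_FactSecurityEvent
            ON dbo.FactSecurityEvent
               (EventDateKey, CustomerKey, SiteKey, EventCount, ResponseSeconds);
        GO

        -- A typical ad hoc aggregation that the columnstore can answer with a batch-mode scan.
        SELECT SiteKey,
               COUNT_BIG(*)         AS Events,
               SUM(ResponseSeconds) AS TotalResponseSeconds
        FROM   dbo.FactSecurityEvent
        WHERE  EventDateKey >= 20140101
        GROUP  BY SiteKey;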

    Read the article

  • Oracle GoldenGate 12c - Leading Enterprise Replication

    - by Doug Reid
    Oracle GoldenGate 12c was released on October 17th and includes several new cutting-edge features that firmly establish GoldenGate's leadership position in the data replication space. In fact, this release more than doubles the performance of data delivery, supports Oracle's new multitenant database feature, is more secure, has more options for high availability, and has made great strides to simplify the configuration and deployment of the product. Read through the press release if you haven't already, and do not miss the quote from CERN's Eva Dafonte Perez regarding Oracle GoldenGate 12c: "…performs five times faster compared to previous GoldenGate versions and simplifies the management of a multi-tier environment".
    There are a variety of new and improved features in Oracle GoldenGate 12c. Here are the highlights:
    Optimized for Oracle Database 12c - GoldenGate 12c is custom tailored to the unique capabilities of Oracle Database 12c, and out of the box GoldenGate 12c supports multitenant (pluggable database (PDB)) and non-consolidated deployments of Oracle Database 12c. The naming convention used by Database 12c is now in three parts (PDB-name, schema-name, and object name). We have made changes to the GoldenGate capture process to support the new naming convention and streamlined the whole process so a single GoldenGate capture process is used at the container level rather than at each individual PDB. By having the capture process at the container level, resource usage and the number of processes are reduced. To view a conceptual architecture diagram click here.
    Integrated Delivery for the Oracle Database - Leveraging a lightweight streaming API built exclusively for Oracle GoldenGate 12c, this process distributes load, auto-tunes the degree of parallelism, scales better, and delivers blinding rates of changed data delivery to the Oracle database. One of the goals for Oracle GoldenGate 12c was to reduce IT costs by simplifying the configuration and reducing the time needed to manage complex infrastructures. In previous versions of Oracle GoldenGate, customers would split transaction loads by grouping tables into multiple different delivery processes (click here to view the previous method). Each delivery process executed independently and without any interaction or knowledge of other delivery processes. This setup was complicated to configure and time-consuming, as the developer needed in-depth knowledge of the source and target schemas and the transaction profile. With GoldenGate 12c and Integrated Delivery we have made it easier to configure and faster to deploy. To view a conceptual architecture diagram of Integrated Delivery click here.
    Coordinated Delivery for Non-Oracle Databases - Coordinated Delivery orchestrates high-speed apply processes and simplifies the configuration of GoldenGate for non-Oracle targets. In Oracle GoldenGate 12c a single delivery process is used with multiple threads (click here), and key events, such as primary key updates, event markers, DDL, etc., are coordinated between the various threads to ensure that transactions are applied in the same sequence as they were captured, all while delivering improved performance.
    Replication Between On-Premises and Cloud-Based Systems - The trend for businesses to utilize both on-premises and cloud-based systems is rising, and businesses need to replicate data back and forth.
    GoldenGate 12c can be configured in a variety of ways to provide real-time replication when unrestricted or restricted (limited ports or HTTP tunneling) networks sit between on-premises and cloud-based systems.
    Expanded Heterogeneity - It wouldn't be a GoldenGate release without new and improved platform support. Release 1 includes support for MySQL 5.6 and Sybase 15.7. In the next GoldenGate release, support will be expanded for MS SQL Server, DB2, and Teradata.
    Tighter Security - Oracle GoldenGate 12c is integrated with the Oracle Wallet to shield usernames and passwords using strong encryption and aliases. Customers accustomed to using the Oracle Wallet with other Oracle products will instantly be familiar with how to use this great new feature.
    Expanded Oracle Application and Technology Support - GoldenGate can be used along with Oracle Coherence to enable real-time changed data feeds to the Coherence cache using TopLink and the Oracle GoldenGate JMS adapter. Plus, Oracle Advanced Customer Services (ACS) now offers low-downtime E-Business Suite platform and database migrations using GoldenGate as the enabling technology.
    Stay tuned for more blogs on the new features and the upcoming launch webcast where we will go into these new features in more detail. In the meantime, make sure to read through our white paper "Oracle GoldenGate 12c Release 1 New Features Overview".

    Read the article

  • What Keeps You from Changing Your Public IP Address and Wreaking Havoc on the Internet?

    - by Jason Fitzpatrick
    What exactly is preventing you (or anyone else) from changing your IP address and causing all sorts of headaches for ISPs and other Internet users? Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.
    The Question
    SuperUser reader Whitemage is curious about what’s preventing him from wantonly changing his IP address and causing trouble:
    An interesting question was asked of me and I did not know what to answer. So I’ll ask here. Let’s say I subscribed to an ISP and I’m using cable internet access. The ISP gives me a public IP address of 60.61.62.63. What keeps me from changing this IP address to, let’s say, 60.61.62.75, and messing with another consumer’s internet access? For the sake of this argument, let’s say that this other IP address is also owned by the same ISP. Also, let’s assume that it’s possible for me to go into the cable modem settings and manually change the IP address. Under a business contract where you are allocated static addresses, you are also assigned a default gateway, a network address and a broadcast address. So that’s 3 addresses the ISP “loses” to you. That seems very wasteful for dynamically assigned IP addresses, which the majority of customers are. Could they simply be using static ARPs? ACLs? Other simple mechanisms?
    Two things to investigate here: why can’t we just go around changing our addresses, and is the assignment process as wasteful as it seems?
    The Answer
    SuperUser contributor Moses offers some insight:
    Cable modems aren’t like your home router (i.e. they don’t have a web interface with simple point-and-click buttons that any kid can “hack” into). Cable modems are “looked up” and located by their MAC address by the ISP, and are typically accessed by technicians using proprietary software that only they have access to, that only runs on their servers, and therefore can’t really be stolen. Cable modems also authenticate and cross-check settings with the ISP’s servers. The server has to tell the modem whether its settings (and location on the cable network) are valid, and simply sets it to what the ISP has it set for (bandwidth, DHCP allocations, etc.). For instance, when you tell your ISP “I would like a static IP, please.”, they allocate one to the modem through their servers, and the modem allows you to use that IP. Same with bandwidth changes, for instance. To do what you are suggesting, you would likely have to break into the servers at the ISP and change what it has set up for your modem.
    Could they simply be using static ARPs? ACLs? Other simple mechanisms?
    Every ISP is different, both in practice and in how close they are with the larger network that is providing service to them. Depending on those factors, they could be using a combination of ACLs and static ARP. It also depends on the technology in the cable network itself. The ISP I worked for used some form of ACL, but that knowledge was a little beyond my paygrade. I only got to work with the technician’s interface and do routine maintenance and service changes.
    What keeps me from changing this IP address to, let’s say, 60.61.62.75 and messing with another consumer’s internet access?
    Given the above, what keeps you from changing your IP to one that your ISP hasn’t specifically given to you is a server that is instructing your modem what it can and can’t do.
    Even if you somehow broke into the modem, if 60.61.62.75 is already allocated to another customer, then the server will simply tell your modem that it can’t have it.
    David Schwartz offers some additional insight with a link to a white paper for the really curious:
    Most modern ISPs (last 13 years or so) will not accept traffic from a customer connection with a source IP address they would not route to that customer were it the destination IP address. This is called “reverse path forwarding”. See BCP 38.
    Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.

    Read the article

  • How Can I Safely Destroy Sensitive Data CDs/DVDs?

    - by Jason Fitzpatrick
    You have a pile of DVDs with sensitive information on them and you need to safely and effectively dispose of them so no data recovery is possible. What’s the safest and most efficient way to get the job done? Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.
    The Question
    SuperUser reader HaLaBi wants to know how he can safely destroy CDs and DVDs with personal data on them:
    I have old CDs/DVDs which have some backups; these backups have some work and personal files. I always had problems when I needed to physically destroy them to make sure no one will reuse them. Breaking them is dangerous, pieces could fly fast and may cause harm. Scratching them badly is what I always do, but it takes a long time and I managed to read some of the data on the scratched CDs/DVDs. What’s the way to physically destroy a CD/DVD safely?
    How should he approach the problem?
    The Answer
    SuperUser contributor Journeyman Geek offers a practical solution coupled with a slightly mad-scientist solution:
    The proper way is to get yourself a shredder that also handles CDs – look online for CD shredders. This is the right option if you end up doing this routinely. I don’t do this very often – for small scale destruction I favour a pair of tin snips – they have enough force to cut through a CD, yet are blunt enough to cause small cracks along the shear line. Kitchen shears with one serrated side work well too. You want to damage the data layer along with shearing along the plastic, and these work magnificently. Do it in a bag, because this generates sparkly bits.
    There’s also the fun, and probably dangerous, way – find yourself an old microwave, and microwave them. I would suggest doing this in a well ventilated area of course, and not using your mother’s good microwave. There are a lot of videos of this on YouTube – such as this one (done in a kitchen… using his mom’s microwave). This results in a very much destroyed CD in every respect. If I was an evil hacker mastermind, this is what I’d do. The other options are better for the rest of us.
    Another contributor, Keltari, notes that the only safe (and DoD approved) way to dispose of data is total destruction:
    The answer by Journeyman Geek is good enough for almost everything. But oddly, that common phrase “Good enough for government work” does not apply – depending on which part of the government. It is technically possible to recover data from shredded/broken/etc. CDs and DVDs. If you have a microscope handy, put the disc in it and you can see the pits. The disc can be reassembled and the data can be reconstructed — minus the data that was physically destroyed. So why not just pulverize the disc into dust? Or burn it to a crisp? While technically that would completely eliminate the data, it leaves no record of the disc having existed. And in some places, like the DoD and other secure facilities, the data needs to be destroyed, but the disc needs to exist. If there is a security audit, the disc can be pulled to show it has been destroyed. So how can a disc exist, yet be destroyed? Well, the most common method is grinding the disc down to destroy the data, yet keep the label surface of the disc intact. Basically, it’s no different than using sandpaper on the writable side, till the data is gone.
    Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.
    

    Read the article

  • What Counts For a DBA: Fitness

    - by Louis Davidson
    If you know me, you can probably guess that physical exercise is not really my thing. There was a time in my past when it was a larger part of my life, but even then never in the same sort of passionate way as for a number of our SQL friends. For me, I find that mental exercise satisfies what I believe to be the same inner need that drives people to run farther than I like to drive on most Saturday mornings, and it is certainly just as addictive.
    Mental fitness shares many common traits with physical fitness, especially the need to attain it through repetitive training. I only wish that mental training burned off a bacon cheeseburger in the same manner as does jogging around a dewy park on Saturday morning. In physical training, there are at least two goals, the first of which is to be physically able to do a task. The second is to train the brain to perform the task without thinking too hard about it. No matter how long it has been since you last rode a bike, you will almost certainly be able to hop on and start riding without thinking about the process of pedaling or balancing. If you’ve never ridden a bike, you could be a physics professor/Olympic athlete and still crash the first few times you try, even though you are as strong as an ox and your knowledge of the physics of bicycle riding makes the concept child’s play.
    For programming tasks, the process is very similar. As a DBA, you will come to know intuitively how to back up, optimize, and secure database systems. As a data programmer, you will work to instinctively use the clauses of Transact-SQL DML so that, when you need to group data three ways (and not four), you will know to use the GROUP BY clause with GROUPING SETS without resorting to a search engine. You have the skill. Making it natural then requires repetition; experience is the primary requirement, not simply learning about a topic.
    The hardest part of being really good at something is this difference between knowledge and skill. I have recently taken several informative training classes with Kimball University on data warehousing and ETL. Now I have a lot more knowledge about designing data warehouses than before. I have also done a good bit of data warehouse designing of late and have started to improve to some level of proficiency with the theory. Yet, for all of this head knowledge, it is still a struggle to take what I have learned and apply it to the designs I am working on. Data warehousing is still a task that is not yet deeply ingrained in my brain muscle memory.
    On the other hand, relational database design is something that, no matter how much or how little I may get to do it, I am comfortable doing. I have done it as a profession now for well over a decade, I teach classes on it, and I also have done (and continue to do) a lot of mental training beyond the work day. Sometimes the training is just basic education, such as reading blogs and attending sessions at PASS events. My best training comes from spending time working on other people’s design issues in forums (though not nearly as much as I would like to lately). Working through other people’s problems is a great way to exercise your brain on problems with which you’re not immediately familiar.
    The final bit of exercise I find useful for cultivating mental fitness for a data professional is also probably the nerdiest thing that I will ever suggest you do. Akin to running in place, the idea is to work through designs in your head.
I have designed more than one database system that would revolutionize grocery store operations, sales at my local Target store, the ordering process at Amazon, and ways to improve Disney World operations to get me through a line faster (some of which they are starting to implement without any of my help.) Never are the designs truly fleshed out, but enough to work through structures and processes.  On “paper”, I have designed database systems to catalog things as trivial as my Lego creations, rental car companies and my audio and video collections. Once I get the database designed mentally, sometimes I will create the database, add some data (often using Red-Gate’s Data Generator), and write a few queries to see if a concept was realistic, but I will rarely fully flesh out the database since I have no desire to do any user interface programming anymore.  The mental training allows me to keep in practice for when the time comes to do the work I love the most for real…even if I have been spending most of my work time lately building data warehouses.  If you are really strong of mind and body, perhaps you can mix a mental run with a physical run; though don’t run off of a cliff while contemplating how you might design a database to catalog the trees on a mountain…that would be contradictory to the purpose of both types of exercise.
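    Since the post mentions reaching for the GROUP BY clause with GROUPING SETS when you need to group data three ways, here is a minimal T-SQL sketch of that clause. The table and columns are hypothetical, purely for illustration; the single statement below returns the same rows as three separate GROUP BY queries unioned together.

        -- Hypothetical sales table: group three ways (and not four) in one statement.
        SELECT Region,
               Product,
               SUM(Amount) AS TotalAmount
        FROM   dbo.Sales
        GROUP  BY GROUPING SETS
               (
                   (Region),           -- 1: totals per region
                   (Product),          -- 2: totals per product
                   (Region, Product)   -- 3: totals per region and product
               );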

    Read the article

  • Understanding the 'High Performance' meaning in Extreme Transaction Processing

    - by kyap
    Despite my previous blog entries on SOA/BPM and Identity Management, the domain I'm the most passionate about is definitely Extreme Transaction Processing, commonly called XTP. I came across XTP back in 2007 while I was still an FMW Product Manager in EMEA. At that time Oracle acquired a company called Tangosol, which owned a unique product called Coherence that we renamed to Oracle Coherence. Beside this innovative renaming of the product, to be honest, I didn't know much about it, except that it was a "distributed in-memory cache for Extreme Transaction Processing"... not very helpful still.
    In general, when people don't fully understand a technology or a concept, they tend to find some shortcuts, either correct or not, to justify their lack of understanding... and of course I was part of this category of individuals. And the shortcut was "Oracle Coherence Cache helps to improve Performance". Excellent marketing slogan... but not very meaningful still. By chance I was able to get away quickly from that group in July 2007* at Thames Valley Park (UK), after I attended one of the most interesting workshops of my 10-year career in Oracle, delivered by Brian Oliver.
    The biggest mistake I made was to assume that performance improvement with Coherence was related to the response time. Which could be considered legitimate at that time, because after all, caches help to reduce latency on cached data access, hence reduce the response-time. But like all caches, you need to define caching and expiration policies, think about the cache-miss strategy, and most of the time you have to partially re-write your application in order to work with the cache. As a result, the expected benefit vanishes... so, not very useful then?
    The key mistake I made was my perception, or obsession, about how performance improvement should be driven, but I strongly believe this is still a common problem for most developers. In fact we all know that the performance of a system is generally presented by the Capacity (or Throughput), with the 2 important dimensions Speed (response-time) and Volume (load):
    Capacity (TPS) = Volume (T) / Speed (S)
    To increase the Capacity, we can either reduce the Speed (in terms of response-time) or increase the Volume. However we tend to only focus on reducing the Speed dimension, perhaps because it is more concrete and tangible to measure, and nicer to present to our management because there's a direct impact on the end-user experience. On the other hand, we assume the Volume can be addressed by the underlying hardware or software stack, so if we need more capacity (scale out), we just add more hardware or software. Unfortunately, the reality proves that IT is never as ideal as we assume...
    The challenge with the Speed-improvement approach is that it is generally difficult and costly to make things that are already fast... faster. And adding Coherence will not necessarily help either. Even if we manage to do so, the Capacity cannot increase forever because... the Speed can be influenced by the Volume. For all systems, we always have a performance illustration as follows: in all traditional systems, the increase of Volume (Transactions) will also increase the Speed (Response-Time) at some point. The reason is simple: most of the time the application logic was not designed to scale. As an example, if you have a while-loop in your application, it is natural to conceive that parsing 200 entries will require double the execution-time compared to 100 entries.
    If you need to "Speed-up" the execution, you can only upgrade your hardware (scale-up) with a faster CPU and/or network to reduce network latency. It is technically limited and economically inefficient. And this is exactly where XTP and Coherence kick in.
    The primary objective of XTP is about designing applications which can scale out to increase the Volume, by applying coding techniques to keep the execution-time as constant as possible, independently of the amount of runtime data being manipulated. It is actually not just about having an application running as fast as possible, but about having a much more predictable system, with constant response-time and linear scaling, so we can easily increase throughput by adding more hardware in parallel. It is in general combined with the Low Latency Programming model, where we try to optimize the network usage as much as possible, either from the programmatic angle (fewer network hops to complete a task) and/or from a hardware angle (faster network equipment).
    In this picture, Oracle Coherence can be considered a software-level XTP enabler, via the Distributed Cache, because it can guarantee:
    - Constant data object access time, independently of the number of objects and the Coherence cluster size
    - Data object distribution by affinity for in-memory data grouping
    - In-place data processing for parallel execution
    To summarize, Oracle Coherence is indeed useful to improve your application performance, just not in the way we commonly think. It's not about the Speed itself, but about the overall Capacity with Extreme Load while keeping consistent Speed. In the future I will keep adding new blog entries around this topic, with some sample code and experience sharing that I have captured over the last few years. In the meanwhile, if you want to know more about Oracle Coherence, I strongly suggest you start by checking how our worldwide customers are using Oracle Coherence first, then you can start playing with the product through our tutorial. Have fun!

    Read the article

  • Microsoft Sql Server 2008 R2 System Databases

    For a majority of software developers, little time is spent understanding the inner workings of the database management systems (DBMS) they use to store data for their applications. I personally place myself in this grouping. In my case, I have used various versions of Microsoft’s SQL Server (2000, 2005, and 2008 R2) and just recently learned how valuable they really are when I was preparing to deliver a lecture on "SQL Server 2008 R2, System Databases".
    Microsoft Sql Server 2008 R2 System Databases
    So what are system databases in MS SQL Server, and why should I know them? Microsoft uses system databases to support the SQL Server DBMS, much like a developer uses config files or database tables to support an application. These system databases individually provide specific functionality that allows MS SQL Server to function.

        Name          Database File             Log File
        Master        master.mdf                mastlog.ldf
        Resource      mssqlsystemresource.mdf   mssqlsystemresource.ldf
        Model         model.mdf                 modellog.ldf
        MSDB          msdbdata.mdf              msdblog.ldf
        Distribution  distmdl.mdf               distmdl.ldf
        TempDB        tempdb.mdf                templog.ldf

    Master Database
    If you have used MS SQL Server then you should recognize the Master database, especially if you have used SQL Server Management Studio (SSMS) to connect to a user-created database. MS SQL Server requires the Master database in order for the DBMS to start, due to the information that it stores. Examples of data stored in the Master database include user logins, linked servers, configuration information, and information on user databases.
    Resource Database
    Honestly, I never knew this database even existed until I started to research SQL Server system databases. The reason for this is due largely to the fact that the Resource database is hidden from users. In fact, the database files are stored within the Binn folder instead of the standard MS SQL Server database folder path. This database contains all system objects that can be accessed by all other databases. In short, this database contains all system views and stored procedures that appear in all other user databases regarding system information. One of the many benefits of storing system views and stored procedures in a single hidden database is the fact that it improves the process of upgrading a SQL Server database; not to mention that maintenance is decreased since only one code base has to be maintained for all of the system views and procedures.
    Model Database
    The Model database, as the name implies, is the model for all new databases created by users. This allows for predefining default database objects for all new databases within a MS SQL Server instance. For example, if every database created by a user needs to have an “Audit” table when it is created, then defining the “Audit” table in the model will guarantee that the table will be located in every new database created after the model is altered.
    MSDB Database
    The MSDB database is used by SQL Server Agent, SQL Server Database Mail, and SQL Server Service Broker, along with SQL Server. The SQL Server Agent uses this database to store job configurations and SQL job schedules, along with SQL Alerts and Operators. In addition, this database also stores all SQL job parameters along with each job’s execution history. Finally, this database is also used to store database backup and maintenance plans, as well as details pertaining to SQL log shipping if it is being used.
    Distribution Database
    The Distribution database is only used during replication and stores metadata and history information pertaining to the act of replicating data.
    Furthermore, when transactional replication is used this database also stores information regarding each transaction. It is important to note that replication is not turned on by default in MS SQL Server and that the Distribution database is hidden from SSMS.
    Tempdb Database
    The Tempdb, as the name implies, is used to store temporary data and data objects. Examples of this include temp tables and temp stored procedures. It is important to note that all data and data objects in this database are cleared when SQL Server restarts. This database is also used by SQL Server when it is performing some internal operations. Typically, SQL Server uses this database for the purpose of large sort and index operations. Finally, this database is used to store row versions if row versioning or snapshot isolation transactions are being used by SQL Server.
    Additionally, I would love to hear from others about their experiences using system databases, tables, and objects in real-world environments.
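    As a quick, hypothetical illustration of the Model database behavior described above (the object and database names below are invented), adding a table to model means every database created afterwards starts out with a copy of it:

        -- Define a default object in the model database.
        USE model;
        GO
        CREATE TABLE dbo.Audit
        (
            AuditID   INT IDENTITY(1, 1) PRIMARY KEY,
            EventTime DATETIME      NOT NULL DEFAULT GETDATE(),
            EventText NVARCHAR(400) NOT NULL
        );
        GO

        -- Any database created after this point inherits the table.
        CREATE DATABASE SampleDb;
        GO
        SELECT name FROM SampleDb.sys.tables;   -- returns 'Audit'
        GO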

    Read the article

  • Windows 8 for productivity?

    - by Charles Young
    At long last I’ve started using Windows 8.  I boot from a VHD on which I have installed Office, Visio, Visual Studio, SQL Server, etc.  For a week, now, I’ve been happily writing code and documents and using Visio and PowerPoint.  I am, very much, a ‘productivity’ user rather than a content consumer.   I spend my days flitting between countless windows and browser tabs displayed across dual monitors.  I need to access a lot of different functionality and information in as fluid a fashion as possible. With that in mind, and like so many others, I was worried about Windows 8.  The Metro interface is primarily about content consumption on touch-enabled screens, and not really geared for people like me sitting in front of an 8-core non-touch laptop and an additional Samsung monitor.  I still use a mouse, not my finger.  And I create more than I consume. Clearly, Windows 8 won’t be viable for people like me unless Metro keeps out of my hair when using productivity and development tools.  With this in mind, I had long expected Microsoft to provide some mechanism for switching Metro off.  There was a registry hack in last year’s Developer Preview, but this capability has been removed.   That’s brave.  So, how have things worked out so far? Well, I am really quite surprised.  When I played with the Developer Preview last year, it was clear that Metro was unfinished and didn’t play well enough with the desktop.  Obviously I expected things to improve, but the context switching from desktop to full-screen seemed a heavy burden to place on users.  That sense of abrupt change hasn’t entirely gone away (how could it), but after a few days, I can’t say that I find it burdensome or irritating.   I’ve got used very quickly to ‘gesturing’ with my mouse at the bottom or top right corners of the screen to move between applications, using the Windows key to toggle the Start screen and generally finding my way around.   I am surprised at how effective the Start screen is, given the rather basic grouping features it provides.  Of course, I had to take control of it and sort things the way I want.  If anything, though, the Start screen provides a better navigation and application launcher tool than the old Start menu. What I didn’t expect was the way that Metro enhances the productivity story.  As I write this, I’ve got my desktop open with a maximised Word window.  However, the desktop extends only across about 85% of the width of my screen.  On the left hand side, I have a column that displays the new Metro email client.  This is currently showing me a list of emails for my main work account.  I can flip easily between different accounts and read my email within that same column.  As I work on documents, I want to be able to monitor my inbox with a quick glance. The desktop, of course, has its own snap feature.  I could run the desktop full screen and bring up Outlook and Word side by side.  However, this doesn’t begin to approach the convenience of snapping the Metro email client.  Consider that when I snap a window on the desktop, it initially takes up 50% of the screen.  Outlook doesn’t really know anything about snap, and doesn’t adjust to make effective use of the limited screen estate.  Even at 50% screen width, it is difficult to use, so forget about trying to use it in a Metro fashion. In any case, I am left with the prospect of having to manually adjust everything to view my email effectively alongside Word.  Worse, there is nothing stopping another window from overlapping and obscuring my email.  
It becomes a struggle to keep sight of email as it arrives.  Of course, there is always ‘toast’ to notify me when things arrive, but if Outlook is obscured, this just feels intrusive. The beauty of the Metro snap feature is that my email reader now exists outside of my desktop.   The Metro app has been crafted to work well in the fixed width column as well as in full-screen.  It cannot be obscured by overlapping windows.  I still get notifications if I wish.  More importantly, it is clear that careful attention has been given to how things work when moving between applications when ‘snapped’.  If I decide, say to flick over to the Metro newsreader to catch up with current affairs, my desktop, rather than my email client, obligingly makes way for the reader.  With a simple gesture and click, or alternatively by pressing Windows-Tab, my desktop reappears. Another pleasant surprise is the way Windows 8 handles dual monitors.  It’s not just the fact that both screens now display the desktop task bar.  It’s that I can so easily move between Metro and the desktop on either screen.  I can only have Metro on one screen at a time which makes entire sense given the ‘full-screen’ nature of Metro apps.  Using dual monitors feels smoother and easier than previous versions of Windows. Overall then, I’m enjoying the Windows 8 improvements.  Strangely, for all the hype (“Windows reimagined”, etc.), my perception as a ‘productivity’ user is more one of evolution than revolution.  It all feels very familiar, but just better.

    Read the article

  • link_to passing parameter and display problem - tag feature - Ruby on Rails

    - by bgadoci
    I have gotten a great deal of help from KandadaBoggu on my last question and am very thankful for that. As we were getting buried in the comments I wanted to break this part out. I am attempting to create a tag feature on the Rails blog I am developing. The relationship is Post has_many :tags and Tag belongs_to :post. Adding and deleting tags on posts is working great. In my /view/posts/index.html.erb I have a section called tags where I am successfully querying the Tags table, grouping them and displaying the count next to the tag_name (as a side note, I mistakenly called the column containing the tag name 'tag_name' instead of just 'name' as I should have). In addition, each of these groups is displayed as a link that references the index method in the PostsController. That is where the problem is. When you navigate to /posts you get an error because there is no parameter being passed (without clicking the tag group link). I have the .empty? in there so I'm not sure what is going wrong here. Here is the error and code:
    Error
    You have a nil object when you didn't expect it! You might have expected an instance of Array. The error occurred while evaluating nil.empty?
    /views/posts/index.html.erb
        <% @tag_counts.each do |tag_name, tag_count| %>
          <tr>
            <td><%= link_to(tag_name, posts_path(:tag_name => tag_name)) %></td>
            <td>(<%= tag_count %>)</td>
          </tr>
        <% end %>
    PostsController
        def index
          @tag_counts = Tag.count(:group => :tag_name, :order => 'updated_at DESC', :limit => 10)
          @posts = Post.all(:joins => :tags, :conditions => (params[:tag_name].empty? ? {} : { :tags => { :tag_name => params[:tag_name] } }))
          respond_to do |format|
            format.html # index.html.erb
            format.xml  { render :xml => @posts }
            format.json { render :json => @posts }
            format.atom
          end
        end

    Read the article

  • Search multiple datepicker on same grid

    - by DHF
    I'm using multiple datepicker on same grid and I face the problem to get a proper result. I used 3 datepicker in 1 grid. Only the first datepicker (Order Date)is able to output proper result while the other 2 datepicker (Start Date & End Date) are not able to generate proper result. There is no problem with the query, so could you find out what's going on here? Thanks in advance! php wrapper <?php ob_start(); require_once 'config.php'; // include the jqGrid Class require_once "php/jqGrid.php"; // include the PDO driver class require_once "php/jqGridPdo.php"; // include the datepicker require_once "php/jqCalendar.php"; // Connection to the server $conn = new PDO(DB_DSN,DB_USER,DB_PASSWORD); // Tell the db that we use utf-8 $conn->query("SET NAMES utf8"); // Create the jqGrid instance $grid = new jqGridRender($conn); // Write the SQL Query $grid->SelectCommand = "SELECT c.CompanyID, c.CompanyCode, c.CompanyName, c.Area, o.OrderCode, o.Date, m.maID ,m.System, m.Status, m.StartDate, m.EndDate, m.Type FROM company c, orders o, maintenance_agreement m WHERE c.CompanyID = o.CompanyID AND o.OrderID = m.OrderID "; // Set the table to where you update the data $grid->table = 'maintenance_agreement'; // set the ouput format to json $grid->dataType = 'json'; // Let the grid create the model $grid->setPrimaryKeyId('maID'); // Let the grid create the model $grid->setColModel(); // Set the url from where we obtain the data $grid->setUrl('grouping_ma_details.php'); // Set grid caption using the option caption $grid->setGridOptions(array( "sortable"=>true, "rownumbers"=>true, "caption"=>"Group by Maintenance Agreement", "rowNum"=>20, "height"=>'auto', "width"=>1300, "sortname"=>"maID", "hoverrows"=>true, "rowList"=>array(10,20,50), "footerrow"=>false, "userDataOnFooter"=>false, "grouping"=>true, "groupingView"=>array( "groupField" => array('CompanyName'), "groupColumnShow" => array(true), //show or hide area column "groupText" =>array('<b> Company Name: {0}</b>',), "groupDataSorted" => true, "groupSummary" => array(true) ) )); if(isset($_SESSION['login_admin'])) { $grid->addCol(array( "name"=>"Action", "formatter"=>"actions", "editable"=>false, "sortable"=>false, "resizable"=>false, "fixed"=>true, "width"=>60, "formatoptions"=>array("keys"=>true), "search"=>false ), "first"); } // Change some property of the field(s) $grid->setColProperty("CompanyID", array("label"=>"ID","hidden"=>true,"width"=>30,"editable"=>false,"editoptions"=>array("readonly"=>"readonly"))); $grid->setColProperty("CompanyName", array("label"=>"Company Name","hidden"=>true,"editable"=>false,"width"=>150,"align"=>"center","fixed"=>true)); $grid->setColProperty("CompanyCode", array("label"=>"Company Code","hidden"=>true,"width"=>50,"align"=>"center")); $grid->setColProperty("OrderCode", array("label"=>"Order Code","width"=>110,"editable"=>false,"align"=>"center","fixed"=>true)); $grid->setColProperty("maID", array("hidden"=>true)); $grid->setColProperty("System", array("width"=>150,"fixed"=>true,"align"=>"center")); $grid->setColProperty("Type", array("width"=>280,"fixed"=>true)); $grid->setColProperty("Status", array("width"=>70,"align"=>"center","edittype"=>"select","editoptions"=>array("value"=>"Yes:Yes;No:No"),"fixed"=>true)); $grid->setSelect('System', "SELECT DISTINCT System, System AS System FROM master_ma_system ORDER BY System", false, true, true, array(""=>"All")); $grid->setSelect('Type', "SELECT DISTINCT Type, Type AS Type FROM master_ma_type ORDER BY Type", false, true, true, array(""=>"All")); 
$grid->setColProperty("StartDate", array("label"=>"Start Date","width"=>120,"align"=>"center","fixed"=>true, "formatter"=>"date", "formatoptions"=>array("srcformat"=>"Y-m-d H:i:s","newformat"=>"d M Y") )); // this is only in this case since the orderdate is set as date time $grid->setUserTime("d M Y"); $grid->setUserDate("d M Y"); $grid->setDatepicker("StartDate",array("buttonOnly"=>false)); $grid->datearray = array('StartDate'); $grid->setColProperty("EndDate", array("label"=>"End Date","width"=>120,"align"=>"center","fixed"=>true, "formatter"=>"date", "formatoptions"=>array("srcformat"=>"Y-m-d H:i:s","newformat"=>"d M Y") )); // this is only in this case since the orderdate is set as date time $grid->setUserTime("d M Y"); $grid->setUserDate("d M Y"); $grid->setDatepicker("EndDate",array("buttonOnly"=>false)); $grid->datearray = array('EndDate'); $grid->setColProperty("Date", array("label"=>"Order Date","width"=>100,"editable"=>false,"align"=>"center","fixed"=>true, "formatter"=>"date", "formatoptions"=>array("srcformat"=>"Y-m-d H:i:s","newformat"=>"d M Y") )); // this is only in this case since the orderdate is set as date time $grid->setUserTime("d M Y"); $grid->setUserDate("d M Y"); $grid->setDatepicker("Date",array("buttonOnly"=>false)); $grid->datearray = array('Date'); // This command is executed after edit $maID = jqGridUtils::GetParam('maID'); $Status = jqGridUtils::GetParam('Status'); $StartDate = jqGridUtils::GetParam('StartDate'); $EndDate = jqGridUtils::GetParam('EndDate'); $Type = jqGridUtils::GetParam('Type'); // This command is executed immediatley after edit occur. $grid->setAfterCrudAction('edit', "UPDATE maintenance_agreement SET m.Status=?, m.StartDate=?, m.EndDate=?, m.Type=? WHERE m.maID=?", array($Status,$StartDate,$EndDate,$Type,$maID)); $selectorder = <<<ORDER function(rowid, selected) { if(rowid != null) { jQuery("#detail").jqGrid('setGridParam',{postData:{CompanyID:rowid}}); jQuery("#detail").trigger("reloadGrid"); // Enable CRUD buttons in navigator when a row is selected jQuery("#add_detail").removeClass("ui-state-disabled"); jQuery("#edit_detail").removeClass("ui-state-disabled"); jQuery("#del_detail").removeClass("ui-state-disabled"); } } ORDER; // We should clear the grid data on second grid on sorting, paging, etc. 
$cleargrid = <<<CLEAR function(rowid, selected) { // clear the grid data and footer data jQuery("#detail").jqGrid('clearGridData',true); // Disable CRUD buttons in navigator when a row is not selected jQuery("#add_detail").addClass("ui-state-disabled"); jQuery("#edit_detail").addClass("ui-state-disabled"); jQuery("#del_detail").addClass("ui-state-disabled"); } CLEAR; $grid->setGridEvent('onSelectRow', $selectorder); $grid->setGridEvent('onSortCol', $cleargrid); $grid->setGridEvent('onPaging', $cleargrid); $grid->setColProperty("Area", array("width"=>100,"hidden"=>false,"editable"=>false,"fixed"=>true)); $grid->setColProperty("HeadCount", array("label"=>"Head Count","align"=>"center", "width"=>100,"hidden"=>false,"fixed"=>true)); $grid->setSelect('Area', "SELECT DISTINCT AreaName, AreaName AS Area FROM master_area ORDER BY AreaName", false, true, true, array(""=>"All")); $grid->setSelect('CompanyName', "SELECT DISTINCT CompanyName, CompanyName AS CompanyName FROM company ORDER BY CompanyName", false, true, true, array(""=>"All")); $custom = <<<CUSTOM jQuery("#getselected").click(function(){ var selr = jQuery('#grid').jqGrid('getGridParam','selrow'); if(selr) { window.open('http://www.smartouch-cdms.com/order.php?CompanyID='+selr); } else alert("No selected row"); return false; }); CUSTOM; $grid->setJSCode($custom); // Enable toolbar searching $grid->toolbarfilter = true; $grid->setFilterOptions(array("stringResult"=>true,"searchOnEnter"=>false,"defaultSearch"=>"cn")); // Enable navigator $grid->navigator = true; // disable the delete operation programatically for that table $grid->del = false; // we need to write some custom code when we are in delete mode. // get the grid operation parameter to see if we are in delete mode // jqGrid sends the "oper" parameter to identify the needed action $deloper = $_POST['oper']; // det the company id $cid = $_POST['CompanyID']; // if the operation is del and the companyid is set if($deloper == 'del' && isset($cid) ) { // the two tables are linked via CompanyID, so let try to delete the records in both tables try { jqGridDB::beginTransaction($conn); $comp = jqGridDB::prepare($conn, "DELETE FROM company WHERE CompanyID= ?", array($cid)); $cont = jqGridDB::prepare($conn,"DELETE FROM contact WHERE CompanyID = ?", array($cid)); jqGridDB::execute($comp); jqGridDB::execute($cont); jqGridDB::commit($conn); } catch(Exception $e) { jqGridDB::rollBack($conn); echo $e->getMessage(); } } // Enable only deleting if(isset($_SESSION['login_admin'])) { $grid->setNavOptions('navigator', array("pdf"=>true, "excel"=>true,"add"=>false,"edit"=>true,"del"=>false,"view"=>true, "search"=>true)); } else $grid->setNavOptions('navigator', array("pdf"=>true, "excel"=>true,"add"=>false,"edit"=>false,"del"=>false,"view"=>true, "search"=>true)); // In order to enable the more complex search we should set multipleGroup option // Also we need show query roo $grid->setNavOptions('search', array( "multipleGroup"=>false, "showQuery"=>true )); // Set different filename $grid->exportfile = 'Company.xls'; // Close the dialog after editing $grid->setNavOptions('edit',array("closeAfterEdit"=>true,"editCaption"=>"Update Company","bSubmit"=>"Update","dataheight"=>"auto")); $grid->setNavOptions('add',array("closeAfterAdd"=>true,"addCaption"=>"Add New Company","bSubmit"=>"Update","dataheight"=>"auto")); $grid->setNavOptions('view',array("Caption"=>"View Company","dataheight"=>"auto","width"=>"1100")); ob_end_clean(); //solve TCPDF error // Enjoy $grid->renderGrid('#grid','#pager',true, null, null, 
true,true); $conn = null; ?> javascript code jQuery(document).ready(function ($) { jQuery('#grid').jqGrid({ "width": 1300, "hoverrows": true, "viewrecords": true, "jsonReader": { "repeatitems": false, "subgrid": { "repeatitems": false } }, "xmlReader": { "repeatitems": false, "subgrid": { "repeatitems": false } }, "gridview": true, "url": "session_ma_details.php", "editurl": "session_ma_details.php", "cellurl": "session_ma_details.php", "sortable": true, "rownumbers": true, "caption": "Group by Maintenance Agreement", "rowNum": 20, "height": "auto", "sortname": "maID", "rowList": [10, 20, 50], "footerrow": false, "userDataOnFooter": false, "grouping": true, "groupingView": { "groupField": ["CompanyName"], "groupColumnShow": [false], "groupText": ["<b> Company Name: {0}</b>"], "groupDataSorted": true, "groupSummary": [true] }, "onSelectRow": function (rowid, selected) { if (rowid != null) { jQuery("#detail").jqGrid('setGridParam', { postData: { CompanyID: rowid } }); jQuery("#detail").trigger("reloadGrid"); // Enable CRUD buttons in navigator when a row is selected jQuery("#add_detail").removeClass("ui-state-disabled"); jQuery("#edit_detail").removeClass("ui-state-disabled"); jQuery("#del_detail").removeClass("ui-state-disabled"); } }, "onSortCol": function (rowid, selected) { // clear the grid data and footer data jQuery("#detail").jqGrid('clearGridData', true); // Disable CRUD buttons in navigator when a row is not selected jQuery("#add_detail").addClass("ui-state-disabled"); jQuery("#edit_detail").addClass("ui-state-disabled"); jQuery("#del_detail").addClass("ui-state-disabled"); }, "onPaging": function (rowid, selected) { // clear the grid data and footer data jQuery("#detail").jqGrid('clearGridData', true); // Disable CRUD buttons in navigator when a row is not selected jQuery("#add_detail").addClass("ui-state-disabled"); jQuery("#edit_detail").addClass("ui-state-disabled"); jQuery("#del_detail").addClass("ui-state-disabled"); }, "datatype": "json", "colModel": [ { "name": "Action", "formatter": "actions", "editable": false, "sortable": false, "resizable": false, "fixed": true, "width": 60, "formatoptions": { "keys": true }, "search": false }, { "name": "CompanyID", "index": "CompanyID", "sorttype": "int", "label": "ID", "hidden": true, "width": 30, "editable": false, "editoptions": { "readonly": "readonly" } }, { "name": "CompanyCode", "index": "CompanyCode", "sorttype": "string", "label": "Company Code", "hidden": true, "width": 50, "align": "center", "editable": true }, { "name": "CompanyName", "index": "CompanyName", "sorttype": "string", "label": "Company Name", "hidden": true, "editable": false, "width": 150, "align": "center", "fixed": true, "edittype": "select", "editoptions": { "value": "Aquatex Industries:Aquatex Industries;Benithem Sdn Bhd:Benithem Sdn Bhd;Daily Bakery Sdn Bhd:Daily Bakery Sdn Bhd;Eurocor Asia Sdn Bhd:Eurocor Asia Sdn Bhd;Evergrown Technology:Evergrown Technology;Goldpar Precision:Goldpar Precision;MicroSun Technologies Asia:MicroSun Technologies Asia;NCI Industries Sdn Bhd:NCI Industries Sdn Bhd;PHHP Marketing:PHHP Marketing;Smart Touch Technology:Smart Touch Technology;THOSCO Treatech:THOSCO Treatech;YHL Trading (Johor) Sdn Bhd:YHL Trading (Johor) Sdn Bhd;Zenxin Agri-Organic Food:Zenxin Agri-Organic Food", "separator": ":", "delimiter": ";" }, "stype": "select", "searchoptions": { "value": ":All;Aquatex Industries:Aquatex Industries;Benithem Sdn Bhd:Benithem Sdn Bhd;Daily Bakery Sdn Bhd:Daily Bakery Sdn Bhd;Eurocor Asia Sdn Bhd:Eurocor Asia Sdn 
Bhd;Evergrown Technology:Evergrown Technology;Goldpar Precision:Goldpar Precision;MicroSun Technologies Asia:MicroSun Technologies Asia;NCI Industries Sdn Bhd:NCI Industries Sdn Bhd;PHHP Marketing:PHHP Marketing;Smart Touch Technology:Smart Touch Technology;THOSCO Treatech:THOSCO Treatech;YHL Trading (Johor) Sdn Bhd:YHL Trading (Johor) Sdn Bhd;Zenxin Agri-Organic Food:Zenxin Agri-Organic Food", "separator": ":", "delimiter": ";" } }, { "name": "Area", "index": "Area", "sorttype": "string", "width": 100, "hidden": true, "editable": false, "fixed": true, "edittype": "select", "editoptions": { "value": "Cemerlang:Cemerlang;Danga Bay:Danga Bay;Kulai:Kulai;Larkin:Larkin;Masai:Masai;Nusa Cemerlang:Nusa Cemerlang;Nusajaya:Nusajaya;Pasir Gudang:Pasir Gudang;Pekan Nenas:Pekan Nenas;Permas Jaya:Permas Jaya;Pontian:Pontian;Pulai:Pulai;Senai:Senai;Skudai:Skudai;Taman Gaya:Taman Gaya;Taman Johor Jaya:Taman Johor Jaya;Taman Molek:Taman Molek;Taman Pelangi:Taman Pelangi;Taman Sentosa:Taman Sentosa;Tebrau 4:Tebrau 4;Ulu Tiram:Ulu Tiram", "separator": ":", "delimiter": ";" }, "stype": "select", "searchoptions": { "value": ":All;Cemerlang:Cemerlang;Danga Bay:Danga Bay;Kulai:Kulai;Larkin:Larkin;Masai:Masai;Nusa Cemerlang:Nusa Cemerlang;Nusajaya:Nusajaya;Pasir Gudang:Pasir Gudang;Pekan Nenas:Pekan Nenas;Permas Jaya:Permas Jaya;Pontian:Pontian;Pulai:Pulai;Senai:Senai;Skudai:Skudai;Taman Gaya:Taman Gaya;Taman Johor Jaya:Taman Johor Jaya;Taman Molek:Taman Molek;Taman Pelangi:Taman Pelangi;Taman Sentosa:Taman Sentosa;Tebrau 4:Tebrau 4;Ulu Tiram:Ulu Tiram", "separator": ":", "delimiter": ";" } }, { "name": "OrderCode", "index": "OrderCode", "sorttype": "string", "label": "Order No.", "width": 110, "editable": false, "align": "center", "fixed": true }, { "name": "Date", "index": "Date", "sorttype": "date", "label": "Order Date", "width": 100, "editable": false, "align": "center", "fixed": true, "formatter": "date", "formatoptions": { "srcformat": "Y-m-d H:i:s", "newformat": "d M Y" }, "editoptions": { "dataInit": function(el) { setTimeout(function() { if (jQuery.ui) { if (jQuery.ui.datepicker) { jQuery(el).datepicker({ "disabled": false, "dateFormat": "dd M yy" }); jQuery('.ui-datepicker').css({ 'font-size': '75%' }); } } }, 100); } }, "searchoptions": { "dataInit": function(el) { setTimeout(function() { if (jQuery.ui) { if (jQuery.ui.datepicker) { jQuery(el).datepicker({ "disabled": false, "dateFormat": "dd M yy" }); jQuery('.ui-datepicker').css({ 'font-size': '75%' }); } } }, 100); } } }, { "name": "maID", "index": "maID", "sorttype": "int", "key": true, "hidden": true, "editable": true }, { "name": "System", "index": "System", "sorttype": "string", "width": 150, "fixed": true, "align": "center", "edittype": "select", "editoptions": { "value": "Payroll:Payroll;TMS:TMS;TMS & Payroll:TMS & Payroll", "separator": ":", "delimiter": ";" }, "stype": "select", "searchoptions": { "value": ":All;Payroll:Payroll;TMS:TMS;TMS & Payroll:TMS & Payroll", "separator": ":", "delimiter": ";" }, "editable": true }, { "name": "Status", "index": "Status", "sorttype": "string", "width": 70, "align": "center", "edittype": "select", "editoptions": { "value": "Yes:Yes;No:No" }, "fixed": true, "editable": true }, { "name": "StartDate", "index": "StartDate", "sorttype": "date", "label": "Start Date", "width": 120, "align": "center", "fixed": true, "formatter": "date", "formatoptions": { "srcformat": "Y-m-d H:i:s", "newformat": "d M Y" }, "editoptions": { "dataInit": function(el) { setTimeout(function() { if (jQuery.ui) { if 
(jQuery.ui.datepicker) { jQuery(el).datepicker({ "disabled": false, "dateFormat": "dd M yy" }); jQuery('.ui-datepicker').css({ 'font-size': '75%' }); } } }, 100); } }, "searchoptions": { "dataInit": function(el) { setTimeout(function() { if (jQuery.ui) { if (jQuery.ui.datepicker) { jQuery(el).datepicker({ "disabled": false, "dateFormat": "dd M yy" }); jQuery('.ui-datepicker').css({ 'font-size': '75%' }); } } }, 100); } }, "editable": true }, { "name": "EndDate", "index": "EndDate", "sorttype": "date", "label": "End Date", "width": 120, "align": "center", "fixed": true, "formatter": "date", "formatoptions": { "srcformat": "Y-m-d H:i:s", "newformat": "d M Y" }, "editoptions": { "dataInit": function(el) { setTimeout(function() { if (jQuery.ui) { if (jQuery.ui.datepicker) { jQuery(el).datepicker({ "disabled": false, "dateFormat": "dd M yy" }); jQuery('.ui-datepicker').css({ 'font-size': '75%' }); } } }, 100); } }, "searchoptions": { "dataInit": function(el) { setTimeout(function() { if (jQuery.ui) { if (jQuery.ui.datepicker) { jQuery(el).datepicker({ "disabled": false, "dateFormat": "dd M yy" }); jQuery('.ui-datepicker').css({ 'font-size': '75%' }); } } }, 100); } }, "editable": true }, { "name": "Type", "index": "Type", "sorttype": "string", "width": 530, "fixed": true, "edittype": "select", "editoptions": { "value": "Comprehensive MA:Comprehensive MA;FOC service, 20% spare part discount:FOC service, 20% spare part discount;Standard Package, FOC 1 time service, 20% spare part discount:Standard Package, FOC 1 time service, 20% spare part discount;Standard Package, FOC 2 time service, 20% spare part discount:Standard Package, FOC 2 time service, 20% spare part discount;Standard Package, FOC 3 time service, 20% spare part discount:Standard Package, FOC 3 time service, 20% spare part discount;Standard Package, FOC 4 time service, 20% spare part discount:Standard Package, FOC 4 time service, 20% spare part discount;Standard Package, FOC 6 time service, 20% spare part discount:Standard Package, FOC 6 time service, 20% spare part discount;Standard Package, no free:Standard Package, no free", "separator": ":", "delimiter": ";" }, "stype": "select", "searchoptions": { "value": ":All;Comprehensive MA:Comprehensive MA;FOC service, 20% spare part discount:FOC service, 20% spare part discount;Standard Package, FOC 1 time service, 20% spare part discount:Standard Package, FOC 1 time service, 20% spare part discount;Standard Package, FOC 2 time service, 20% spare part discount:Standard Package, FOC 2 time service, 20% spare part discount;Standard Package, FOC 3 time service, 20% spare part discount:Standard Package, FOC 3 time service, 20% spare part discount;Standard Package, FOC 4 time service, 20% spare part discount:Standard Package, FOC 4 time service, 20% spare part discount;Standard Package, FOC 6 time service, 20% spare part discount:Standard Package, FOC 6 time service, 20% spare part discount;Standard Package, no free:Standard Package, no free", "separator": ":", "delimiter": ";" }, "editable": true } ], "postData": { "oper": "grid" }, "prmNames": { "page": "page", "rows": "rows", "sort": "sidx", "order": "sord", "search": "_search", "nd": "nd", "id": "maID", "filter": "filters", "searchField": "searchField", "searchOper": "searchOper", "searchString": "searchString", "oper": "oper", "query": "grid", "addoper": "add", "editoper": "edit", "deloper": "del", "excel": "excel", "subgrid": "subgrid", "totalrows": "totalrows", "autocomplete": "autocmpl" }, "loadError": function(xhr, status, err) { try { 
jQuery.jgrid.info_dialog(jQuery.jgrid.errors.errcap, '<div class="ui-state-error">' + xhr.responseText + '</div>', jQuery.jgrid.edit.bClose, { buttonalign: 'right' } ); } catch(e) { alert(xhr.responseText); } }, "pager": "#pager" }); jQuery('#grid').jqGrid('navGrid', '#pager', { "edit": true, "add": false, "del": false, "search": true, "refresh": true, "view": true, "excel": true, "pdf": true, "csv": false, "columns": false }, { "drag": true, "resize": true, "closeOnEscape": true, "dataheight": "auto", "errorTextFormat": function (r) { return r.responseText; }, "closeAfterEdit": true, "editCaption": "Update Company", "bSubmit": "Update" }, { "drag": true, "resize": true, "closeOnEscape": true, "dataheight": "auto", "errorTextFormat": function (r) { return r.responseText; }, "closeAfterAdd": true, "addCaption": "Add New Company", "bSubmit": "Update" }, { "errorTextFormat": function (r) { return r.responseText; } }, { "drag": true, "closeAfterSearch": true, "multipleSearch": true }, { "drag": true, "resize": true, "closeOnEscape": true, "dataheight": "auto", "Caption": "View Company", "width": "1100" } ); jQuery('#grid').jqGrid('navButtonAdd', '#pager', { id: 'pager_excel', caption: '', title: 'Export To Excel', onClickButton: function (e) { try { jQuery("#grid").jqGrid('excelExport', { tag: 'excel', url: 'session_ma_details.php' }); } catch (e) { window.location = 'session_ma_details.php?oper=excel'; } }, buttonicon: 'ui-icon-newwin' }); jQuery('#grid').jqGrid('navButtonAdd', '#pager', { id: 'pager_pdf', caption: '', title: 'Export To Pdf', onClickButton: function (e) { try { jQuery("#grid").jqGrid('excelExport', { tag: 'pdf', url: 'session_ma_details.php' }); } catch (e) { window.location = 'session_ma_details.php?oper=pdf'; } }, buttonicon: 'ui-icon-print' }); jQuery('#grid').jqGrid('filterToolbar', { "stringResult": true, "searchOnEnter": false, "defaultSearch": "cn" }); jQuery("#getselected").click(function () { var selr = jQuery('#grid').jqGrid('getGridParam', 'selrow'); if (selr) { window.open('http://www.smartouch-cdms.com/order.php?CompanyID=' + selr); } else alert("No selected row"); return false; }); });

    Read the article

  • Why does a subTable break a4j:commandLink's reRender?

    - by Tom
    Here is a minimal rich:dataTable example with an a4j:commandLink inside. When clicked, it sends an AJAX request to my bean and reRenders the dataTable. <rich:dataTable id="dataTable" value="#{carManager.all}" var="item"> <rich:column> <f:facet name="header">name</f:facet> <h:outputText value="#{item.name}" /> </rich:column> <rich:column> <f:facet name="header">action</f:facet> <a4j:commandLink reRender="dataTable" value="Delete" action="#{carForm.delete}"> <f:setPropertyActionListener value="#{item.id}" target="#{carForm.id}" /> <f:param name="from" value="list" /> </a4j:commandLink> </rich:column> </rich:dataTable> The example above works fine so far. But when I add a rich:subTable (grouping the cars by garage, for example) to the table, reRendering fails... <rich:dataTable id="dataTable" value="#{garageManager.all}" var="garage"> <f:facet name="header"> <rich:columnGroup> <rich:column>name</rich:column> <rich:column>action</rich:column> </rich:columnGroup> </f:facet> <rich:column colspan="2"> <h:outputText value="#{garage.name}" /> </rich:column> <rich:subTable value="#{garage.cars}" var="car"> <rich:column><h:outputText value="#{car.name}" /></rich:column> <rich:column> <a4j:commandLink reRender="dataTable" value="Delete" action="#{carForm.delete}"> <f:setPropertyActionListener value="#{item.id}" target="#{carForm.id}" /> <f:param name="from" value="list" /> </a4j:commandLink> </rich:column> </rich:subTable> </rich:dataTable> Now the rich:dataTable is not rerendered, but the item does get deleted, since it no longer shows up after a manual page refresh. Why does subTable break reRender support here? Thanks, Tom
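    A common workaround with RichFaces 3.x (untested against this exact code, so treat it as a sketch) is to point reRender at a wrapping a4j:outputPanel rather than at the table itself, since components generated inside an iterating component such as rich:subTable are not always resolvable by client id. Note also that inside the subTable the iteration variable is car, so the setPropertyActionListener probably wants #{car.id} rather than #{item.id}. The panel id below is made up for the example:

        <a4j:outputPanel id="tablePanel">
          <rich:dataTable id="dataTable" value="#{garageManager.all}" var="garage">
            <rich:column colspan="2">
              <h:outputText value="#{garage.name}" />
            </rich:column>
            <rich:subTable value="#{garage.cars}" var="car">
              <rich:column>
                <h:outputText value="#{car.name}" />
              </rich:column>
              <rich:column>
                <a4j:commandLink reRender="tablePanel" value="Delete" action="#{carForm.delete}">
                  <f:setPropertyActionListener value="#{car.id}" target="#{carForm.id}" />
                </a4j:commandLink>
              </rich:column>
            </rich:subTable>
          </rich:dataTable>
        </a4j:outputPanel>

    Rerendering the wrapper swaps out the whole table, which sidesteps the id-resolution issue inside the subTable.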

    Read the article

  • Reading Source Code Aloud

    - by Jon Purdy
    After seeing this question, I got to thinking about the various challenges that blind programmers face, and how some of them are applicable even to sighted programmers. Particularly, the problem of reading source code aloud gives me pause. I have been programming for most of my life, and I frequently tutor fellow students in programming, most often in C++ or Java. It is uniquely aggravating to try to verbally convey the essential syntax of a C++ expression. The speaker must give either an idiomatic translation into English, or a full specification of the code in verbal longhand, using explicit yet slow terms such as "opening parenthesis", "bitwise and", et cetera. Neither of these solutions is optimal. On the one hand, an idiomatic translation is only useful to a programmer who can de-translate back into the relevant programming code—which is not usually the case when tutoring a student. In turn, education (or simply getting someone up to speed on a project) is the most common situation in which source is read aloud, and there is a very small margin for error. On the other hand, a literal specification is aggravatingly slow. It takes far far longer to say "pound, include, left angle bracket, iostream, right angle bracket, newline" than it does to simply type #include <iostream>. Indeed, most experienced C++ programmers would read this merely as "include iostream", but again, inexperienced programmers abound and literal specifications are sometimes necessary. So I've had an idea for a potential solution to this problem. In C++, there is a finite set of keywords—63—and operators—54, discounting named operators and treating compound assignment operators and prefix versus postfix auto-increment and decrement as distinct. There are just a few types of literal, a similar number of grouping symbols, and the semicolon. Unless I'm utterly mistaken, that's about it. So would it not then be feasible to simply ascribe a concise, unique pronunciation to each of these distinct concepts (including one for whitespace, where it is required) and go from there? Programming languages are far more regular than natural languages, so the pronunciation could be standardised. Speakers of any language would be able to verbally convey C++ code, and due to the regularity and fixity of the language, speech-to-text software could be optimised to accept C++ speech with a high degree of accuracy. So my question is twofold: first, is my solution feasible; and second, does anyone else have other potential solutions? I intend to take suggestions from here and use them to produce a formal paper with an example implementation of my solution.

    Read the article

  • SQLiteException and SQLite error near "(": syntax error with Subsonic ActiveRecord

    - by nvuono
    I ran into an interesting error with the following LiNQ query using LiNQPad and when using Subsonic 3.0.x w/ActiveRecord within my project and wanted to share the error and resolution for anyone else who runs into it. The linq statement below is meant to group entries in the tblSystemsValues collection into their appropriate system and then extract the system with the highest ID. from ksf in KeySafetyFunction where ksf.Unit == 2 && ksf.Condition_ID == 1 join sys in tblSystems on ksf.ID equals sys.KeySafetyFunction join xval in (from t in tblSystemsValues group t by t.tblSystems_ID into groupedT select new { sysId = groupedT.Key, MaxID = groupedT.Max(g=>g.ID), MaxText = groupedT.First(gt2 => gt2.ID == groupedT.Max(g=>g.ID)).TextValue, MaxChecked = groupedT.First(gt2 => gt2.ID == groupedT.Max(g=>g.ID)).Checked }) on sys.ID equals xval.sysId select new {KSFDesc=ksf.Description, sys.Description, xval.MaxText, xval.MaxChecked} On its own, the subquery for grouping into groupedT works perfectly and the query to match up KeySafetyFunctions with their System in tblSystems also works perfectly on its own. However, when trying to run the completed query in linqpad or within my project I kept running into a SQLiteException SQLite Error Near "(" First I tried splitting the queries up within my project because I knew that I could just run a foreach loop over the results if necessary. However, I continued to receive the same exception! I eventually separated the query into three separate parts before I realized that it was the lazy execution of the queries that was killing me. It then became clear that adding the .ToList() specifier after the myProtectedSystem query below was the key to avoiding the lazy execution after combining and optimizing the query and being able to get my results despite the problems I encountered with the SQLite driver. // determine the max Text/Checked values for each system in tblSystemsValue var myProtectedValue = from t in tblSystemsValue.All() group t by t.tblSystems_ID into groupedT select new { sysId = groupedT.Key, MaxID = groupedT.Max(g => g.ID), MaxText = groupedT.First(gt2 => gt2.ID ==groupedT.Max(g => g.ID)).TextValue, MaxChecked = groupedT.First(gt2 => gt2.ID ==groupedT.Max(g => g.ID)).Checked}; // get the system description information and filter by Unit/Condition ID var myProtectedSystem = (from ksf in KeySafetyFunction.All() where ksf.Unit == 2 && ksf.Condition_ID == 1 join sys in tblSystem.All() on ksf.ID equals sys.KeySafetyFunction select new {KSFDesc = ksf.Description, sys.Description, sys.ID}).ToList(); // finally join everything together AFTER forcing execution with .ToList() var joined = from protectedSys in myProtectedSystem join protectedVal in myProtectedValue on protectedSys.ID equals protectedVal.sysId select new {protectedSys.KSFDesc, protectedSys.Description, protectedVal.MaxChecked, protectedVal.MaxText}; // print the gratifying debug results foreach(var protectedItem in joined) { System.Diagnostics.Debug.WriteLine(protectedItem.Description + ", " + protectedItem.KSFDesc + ", " + protectedItem.MaxText + ", " + protectedItem.MaxChecked); }

    Read the article

  • jQuery Validation plugin: checkbox groups and error message issues

    - by boomturn
    I've put together a form using the jQuery Validation plugin, and all inputs are fine with working validation and error messages – except for checkboxes. I have two checkbox problems. The first is that the Validation plugin API doesn't seem to handle checkboxes in grouped contexts (I'm using fieldsets for grouping). Found several approaches to the issue here, including reference to a post by Rebecca Murphey for a more general case using a custom method and class. Adapting that to this situation: jQuery.validator.addMethod('required_group', function(val, el) { var fieldParent = $(el).closest('fieldset'); return fieldParent.find('.required_group:checked').length; }); jQuery.validator.addClassRules('required_group', { 'required_group': true }); jQuery.validator.messages.required_group = 'Please check at least one box.'; This sort of works, but produces error messages on every checkbox, and only removes them as each box is clicked. This is not an acceptable situation for the user, who can only get rid of them by clicking false positives. Ideally, I guess what's needed is something to prevent or eliminate extra messages before they are displayed and use errorPlacement to display a single error message in the parent fieldset, that would then be removed with a click on any checkbox. Less ideally, maybe they would all display but an event handler could turn off the full set of redundant messages with a click, which is what this approach offered by tvanfosson appears to do. (Another customized approach here, but I couldn't get it to work.) I guess I should also note this form requires the checkboxes to have different names. My second problem is that one of the fieldsets with checkboxes in the form also contains a nested fieldset of checkboxes under one of the outer checkboxes. So in addition to the first-level one-box-checked requirement, if the particular checkbox containing the second-level checkboxes is checked, then at least one of the second-level boxes must be checked. Not sure about the right approach; I'm guessing what needs to happen (following the above scheme) is that the trigger checkbox would use toggleClass to add/remove 'required_group' class to all the checkboxes in the subfield, which would then (hopefully) behave the same as the parent field: $("#triggerCheckbox").click(function () { $(this).find(":checkbox").toggleClass("required_group"); }); Any suggestions or ideas welcome. I'm well beyond my limited jQuery skills on this one and would be happy to hear that I missed simple, elegant and/or obvious ways to do this!
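    A minimal sketch of the single-message idea, building on the required_group class rule quoted above: the plugin's groups option can merge several differently-named inputs into one virtual field so only one message is shown, and errorPlacement can drop that message at the end of the enclosing fieldset. The form id and checkbox name attributes below are hypothetical placeholders for the real markup:

        jQuery('#myForm').validate({
            // one shared message for a set of differently-named checkboxes
            groups: {
                topLevelBoxes: 'optionA optionB optionC'   // hypothetical name attributes
            },
            errorPlacement: function (error, element) {
                if (element.is(':checkbox')) {
                    // a single message per fieldset instead of one per box
                    error.appendTo(element.closest('fieldset'));
                } else {
                    error.insertAfter(element);
                }
            }
        });

    For the nested fieldset, toggling the required_group class on its checkboxes from the trigger checkbox's click handler (as sketched in the question) should then reuse the same placement logic, since clicking any box in a fieldset revalidates its group.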

    Read the article

  • SQL Select - adding field to Select is changing the results

    - by nycdan
    I'm stumped by this SQL problem that I suspect will be easy pickings for someone out there. I have a table that contains rows representing several daily lists of ranked items. The relevent fields are as follows: ID, ListID, ItemID, ItemName, ItemRank, Date. I have a query that returns the items that were on a list yesterday but not today (Items Off List) as follows: Select ItemID, ListID, ItemName, convert(varchar(10),MAX(date),101) as date, COUNT(ItemName) as days_on_list From Table Group By ItemID, ListID, ItemName Having Max(date) = DATEADD("d",-1,convert(varchar(10),getdate(),101)) and ListID = 1 Order By ListID, ItemName, COUNT(ItemName) Basically I'm looking for records where the max date is yesterday. It works fine and shows the number of days each item was previously on the list (although not necessarily consecutively, but that's fine for now). The problem is when I try to add ranking to see what yesterday's rank was. I tried the following: Select ItemID, ListID, ItemName, ranking, convert(varchar(10),MAX(date),101) as date, COUNT(ItemName) as days_on_list From Table Group By ItemID, ListID, ItemName, ranking Having Max(date) = DATEADD("d",-1,convert(varchar(10),getdate(),101)) and ListID = 1 Order By ListID, ItemName, ranking, COUNT(ItemName) This returns a great deal more records than the previous query so something isn't right with it. I want the same number of records, but with the ranking included. I can get the rank by doing a self-join with a subquery and getting records where the ItemID occurs yesterday but not today - but then I don't know how to get the Count any more. Appreciation in advance for any help with this. ======== SOLVED ============== Select ItemID, ListID, ItemName, ranking, convert(varchar(10),MAX(date),101) as date, COUNT(ItemName) as days_on_list from Table T Where date = DATEADD("d",-1,convert(varchar(10),getdate(),101)) and ListID = 1 and T.ItemID Not In (select T.ItemID from Table T join Table T2 on T.ItemID = T2.ItemID and T.ListID = T2.ListID where T.date = DATEADD("d",-1,convert(varchar(10),getdate(),101)) and T2.date = convert (varchar(10),getdate(),101) and T.ListID = 1) Group by ItemID, ListID, ItemName, ranking Basically, what I did was create a subquery that finds all items that appear in both days, and finds items that appeared yesterday but are not in the set of items that appeared both days. Then I was able to do the aggregate function and grouping correctly. I would NOT be surprised if this is more convoluted than necessary but I understand it and can modify it as needed and performance doesn't seem to be an issue. Thanks everyone for the assist.

    Read the article

  • LinqToXML why does my object go out of scope? Also should I be doing a group by?

    - by Kettenbach
    Hello All, I have an IEnumerable<someClass>. I need to transform it into XML. There is a property called 'ZoneId'. I need to write some XML based on this property, then I need some descendant elements that provide data relevant to the ZoneId. I know I need some type of grouping. Here's what I have attempted thus far without much success. **inventory is an IEnumerable<someClass>. So I query inventory for unique zones. This works ok. var zones = inventory.Select(c => new { ZoneID = c.ZoneId , ZoneName = c.ZoneName , Direction = c.Direction }).Distinct(); Now I want to create XML based on zones and place. ***place is a property of 'someClass'. var xml = new XElement("MSG_StationInventoryList" , new XElement("StationInventory" , zones.Select(station => new XElement("station-id", station.ZoneID) , new XElement("station-name", station.ZoneName)))); This does not compile as "station" is out of scope when I try to add the "station-name" element. However, if I remove the paren after 'ZoneId', station is in scope and I retrieve the station-name. The only problem is that the element is then a descendant of 'station-id'. This is not the desired output. They should be siblings. What am I doing wrong? Lastly, after the "station-name" element, I will need another complex type which is a collection. Call it 'places'. It will have child elements called "place". Its data will come from the IEnumerable and I will only want "places" that have the "ZoneId" for the current zone. Can anyone point me in the right direction? Is it a mistake to select distinct zones from the original IEnumerable? This object has all the data I need within it. I just need to make it hierarchical. Thanks for any pointers all. Cheers, Chris in San Diego
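    The compile error comes from parenthesis placement: the second XElement sits outside the lambda passed to Select, where station no longer exists. Keeping both children inside the projection, and wrapping them in a per-station element so they remain siblings, looks roughly like the sketch below; the element names "station" and "places" and the Place property casing are assumptions:

        var xml = new XElement("MSG_StationInventoryList",
            new XElement("StationInventory",
                zones.Select(station =>
                    new XElement("station",
                        new XElement("station-id", station.ZoneID),
                        new XElement("station-name", station.ZoneName),
                        // only the places belonging to the current zone
                        new XElement("places",
                            inventory.Where(i => i.ZoneId == station.ZoneID)
                                     .Select(i => new XElement("place", i.Place)))))));

    Selecting the distinct zones first is fine; the nested Where against the original inventory is what makes the output hierarchical.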

    Read the article

  • Complex SQL query with group by and two rows in one

    - by Ricket
    Okay, I need help. I'm usually pretty good at SQL queries but this one baffles me. By the way, this is not a homework assignment, it's a real situation in an Access database and I've written the requirements below myself. Here is my table layout. It's in Access 2007 if that matters; I'm writing the query using SQL. Id (primary key) PersonID (foreign key) EventDate NumberOfCredits SuperCredits (boolean) There are events that people go to. They can earn normal credits, or super credits, or both at one event. The SuperCredits column is true if the row represents a number of super credits earned at the event, or false if it represents normal credits. So for example, if there is an event which person 174 attends, and they earn 3 normal credits and 1 super credit at the event, the following two rows would be added to the table: ID PersonID EventDate NumberOfCredits SuperCredits 1 174 1/1/2010 3 false 2 174 1/1/2010 1 true It is also possible that the person could have done two separate things at the event, so there might be more than two columns for one event, and it might look like this: ID PersonID EventDate NumberOfCredits SuperCredits 1 174 1/1/2010 1 false 2 174 1/1/2010 2 false 3 174 1/1/2010 1 true Now we want to print out a report. Here will be the columns of the report: PersonID LastEventDate NumberOfNormalCredits NumberOfSuperCredits The report will have one row per person. The row will show the latest event that the person attended, and the normal and super credits that the person earned at that event. What I am asking of you is to write, or help me write, the SQL query to SELECT the data and GROUP BY and SUM() and whatnot. Or, let me know if this is for some reason not possible, and how to organize my data to make it possible. This is extremely confusing and I understand if you do not take the time to puzzle through it. I've tried to simplify it as much as possible, but definitely ask any questions if you give it a shot and need clarification. I'll be trying to figure it out but I'm having a real hard time with it, this is grouping beyond my experience...
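    One way to express this in Access SQL (a sketch only, since the question doesn't name the table; 'Credits' below is a placeholder): find each person's latest EventDate in a subquery, join back to the detail rows for that date, and split the credits with conditional sums over the SuperCredits flag.

        SELECT c.PersonID,
               c.EventDate AS LastEventDate,
               SUM(IIf(c.SuperCredits, 0, c.NumberOfCredits)) AS NumberOfNormalCredits,
               SUM(IIf(c.SuperCredits, c.NumberOfCredits, 0)) AS NumberOfSuperCredits
        FROM Credits AS c
        INNER JOIN (SELECT PersonID, MAX(EventDate) AS LastDate
                    FROM Credits
                    GROUP BY PersonID) AS latest
               ON (c.PersonID = latest.PersonID) AND (c.EventDate = latest.LastDate)
        GROUP BY c.PersonID, c.EventDate;

    If the latest event has several rows of the same credit type (as in the three-row example), the SUMs roll them up automatically.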

    Read the article

  • Using MySQL to generate daily sales reports with filled gaps, grouped by currency

    - by Shane O'Grady
    I'm trying to create what I think is a relatively basic report for an online store, using MySQL 5.1.45 The store can receive payment in multiple currencies. I have created some sample tables with data and am trying to generate a straightforward tabular result set grouped by date and currency so that I can graph these figures. I want to see each currency that is available per date, with a 0 in the result if there were no sales in that currency for that day. If I can get that to work I want to do the same but also grouped by product id. In the sample data I have provided there are only 3 currencies and 2 product ids, but in practice there can be any number of each. I can correctly group by date, but then when I add a grouping by currency my query does not return what I want. I based my work off this article. My reporting query, grouped only by date: SELECT calendar.datefield AS date, IFNULL(SUM(orders.order_value),0) AS total_value FROM orders RIGHT JOIN calendar ON (DATE(orders.order_date) = calendar.datefield) WHERE (calendar.datefield BETWEEN (SELECT MIN(DATE(order_date)) FROM orders) AND (SELECT MAX(DATE(order_date)) FROM orders)) GROUP BY date Now grouped by date and currency: SELECT calendar.datefield AS date, orders.currency_id, IFNULL(SUM(orders.order_value),0) AS total_value FROM orders RIGHT JOIN calendar ON (DATE(orders.order_date) = calendar.datefield) WHERE (calendar.datefield BETWEEN (SELECT MIN(DATE(order_date)) FROM orders) AND (SELECT MAX(DATE(order_date)) FROM orders)) GROUP BY date, orders.currency_id The results I am getting (grouped by date and currency): +------------+-------------+-------------+ | date | currency_id | total_value | +------------+-------------+-------------+ | 2009-08-15 | 3 | 81.94 | | 2009-08-15 | 45 | 25.00 | | 2009-08-15 | 49 | 122.60 | | 2009-08-16 | NULL | 0.00 | | 2009-08-17 | 45 | 25.00 | | 2009-08-17 | 49 | 122.60 | | 2009-08-18 | 3 | 81.94 | | 2009-08-18 | 49 | 245.20 | +------------+-------------+-------------+ The results I want: +------------+-------------+-------------+ | date | currency_id | total_value | +------------+-------------+-------------+ | 2009-08-15 | 3 | 81.94 | | 2009-08-15 | 45 | 25.00 | | 2009-08-15 | 49 | 122.60 | | 2009-08-16 | 3 | 0.00 | | 2009-08-16 | 45 | 0.00 | | 2009-08-16 | 49 | 0.00 | | 2009-08-17 | 3 | 0.00 | | 2009-08-17 | 45 | 25.00 | | 2009-08-17 | 49 | 122.60 | | 2009-08-18 | 3 | 81.94 | | 2009-08-18 | 45 | 0.00 | | 2009-08-18 | 49 | 245.20 | +------------+-------------+-------------+ The schema and data I am using in my tests: CREATE TABLE orders ( id INT PRIMARY KEY AUTO_INCREMENT, order_date DATETIME, order_id INT, product_id INT, currency_id INT, order_value DECIMAL(9,2), customer_id INT ); INSERT INTO orders (order_date, order_id, product_id, currency_id, order_value, customer_id) VALUES ('2009-08-15 10:20:20', '123', '1', '45', '12.50', '322'), ('2009-08-15 12:30:20', '124', '1', '49', '122.60', '400'), ('2009-08-15 13:41:20', '125', '1', '3', '40.97', '324'), ('2009-08-15 10:20:20', '126', '2', '45', '12.50', '345'), ('2009-08-15 13:41:20', '131', '2', '3', '40.97', '756'), ('2009-08-17 10:20:20', '3234', '1', '45', '12.50', '1322'), ('2009-08-17 10:20:20', '4642', '2', '45', '12.50', '1345'), ('2009-08-17 12:30:20', '23', '2', '49', '122.60', '3142'), ('2009-08-18 12:30:20', '2131', '1', '49', '122.60', '4700'), ('2009-08-18 13:41:20', '4568', '1', '3', '40.97', '3274'), ('2009-08-18 12:30:20', '956', '2', '49', '122.60', '3542'), ('2009-08-18 13:41:20', '443', '2', '3', '40.97', '7556'); CREATE 
TABLE currency ( id INT PRIMARY KEY, name VARCHAR(255) ); INSERT INTO currency (id, name) VALUES (3, 'Euro'), (45, 'US Dollar'), (49, 'CA Dollar'); CREATE TABLE calendar (datefield DATE); DELIMITER | CREATE PROCEDURE fill_calendar(start_date DATE, end_date DATE) BEGIN DECLARE crt_date DATE; SET crt_date=start_date; WHILE crt_date < end_date DO INSERT INTO calendar VALUES(crt_date); SET crt_date = ADDDATE(crt_date, INTERVAL 1 DAY); END WHILE; END | DELIMITER ; CALL fill_calendar('2008-01-01', '2011-12-31');
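    The gaps appear because the join can only produce currencies that actually had an order on a given day. A sketch against the sample schema above: build every date/currency pair first with a CROSS JOIN between calendar and currency, then LEFT JOIN the orders onto those pairs.

        SELECT calendar.datefield AS date,
               currency.id AS currency_id,
               IFNULL(SUM(orders.order_value), 0) AS total_value
        FROM calendar
        CROSS JOIN currency
        LEFT JOIN orders
               ON DATE(orders.order_date) = calendar.datefield
              AND orders.currency_id = currency.id
        WHERE calendar.datefield BETWEEN (SELECT MIN(DATE(order_date)) FROM orders)
                                     AND (SELECT MAX(DATE(order_date)) FROM orders)
        GROUP BY calendar.datefield, currency.id
        ORDER BY calendar.datefield, currency.id;

    Grouping by product as well should then just be a matter of cross joining the product list into the same date/currency grid and adding product_id to the join condition and the GROUP BY.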

    Read the article

  • Detecting abuse for post rating system

    - by Steven smethurst
    I am using a WordPress plugin called "GD Star Rating" to allow my users to vote on stories that I post to one of my websites. http://everydayfiction.com/ Recently we have been seeing a lot of abuse of the system: stories that have obviously been voted up artificially. "GD Star Rating" creates some detailed logs when a user votes on a story, including IP, time of vote, user_agent, etc. For example, this story has 181 votes with an average of 5.7 http://www.everydayfiction.com/snowman-by-shaun-simon/ Most other stories only get around ~40 votes each day. At first I thought that the story got onto a social bookmarking site (Digg, StumbleUpon, etc.), but after checking the logs I found that this story is getting the same amount of traffic that a normal story gets (~2k-3k). I checked whether all the votes for this particular story were coming from the same IP address. I could see this happening if a user was at a school's computer lab using all their lab computers to vote up this story. Not one duplicate IP address in the log for this story. SELECT ip, COUNT(*) as count FROM wp_gdsr_votes_log WHERE id=3932 GROUP BY (ip ) ORDER BY count DESC Next I thought that a user might be using a proxy to vote up a story. I checked this by grouping all the browser user_agent values together to see if there was a single browser voting in a particular way. At most 7 users were using a similar browser, but they voted sporadically (1-5), so no evidence of wrongdoing. SELECT user_agent, COUNT(*) as count FROM wp_gdsr_votes_log WHERE id=3932 GROUP BY ( user_agent) ORDER BY count DESC The next check was to see whether all the votes came in at once. Maybe someone has a really interesting bot that can change the user_agent and use proxies, etc... At most 5 votes came within 2 minutes of each other, and there doesn't seem to be any regularity in how people vote (i.e., a 5 vote does not come in once a minute). SELECT * FROM wp_gdsr_votes_log WHERE id =3932 AND vote=5 ORDER BY wp_gdsr_votes_log.voted DESC The obvious solution to this problem is to force people to log in before they are allowed to vote, but I would prefer not to go down that route unless it is absolutely necessary. I'm looking for suggestions on things to test for to detect the abuse.
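    One more check that may be worth keeping around for future incidents (a sketch against the same wp_gdsr_votes_log columns used above; the HAVING threshold is an arbitrary placeholder to tune against normal traffic): bucket votes per story per hour so bursts stand out, even though this particular story did not show one.

        SELECT id AS story_id,
               DATE(voted) AS vote_day,
               HOUR(voted) AS vote_hour,
               COUNT(*) AS votes,
               AVG(vote) AS avg_vote
        FROM wp_gdsr_votes_log
        GROUP BY id, DATE(voted), HOUR(voted)
        HAVING COUNT(*) > 20   -- placeholder threshold
        ORDER BY votes DESC;

    Comparing avg_vote for flagged hours against a story's long-run average is another cheap signal to check before resorting to forced logins.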

    Read the article

  • Adding x11vnc as a Solaris SMF service

    - by rojanu
    I am trying add x11vnc as SMF service but cannot get service to start. I tried googling but couldn't find anything that could help me. Here is the startup script #!/sbin/sh # # Copyright (c) 1995, 1997-1999 by Sun Microsystems, Inc. # All rights reserved. # #ident "@(#)x11vnc 1.14 06/11/17 SMI" case "$1" in 'start') #/usr/local/bin/x11vnc -geometry 1280x1024 -noshm -display :0 -ncache 10 -noshm -shared -forever -o /tmp/vnc_remote.log -bg /usr/local/bin/x11vnc -unixpw -ncache 10 -display :0 -noshm -shared -forever -o /tmp/vnc_remote.log ;; 'stop') /usr/bin/pkill -x -u 0 x11vnc ;; *) echo "Usage: $0 { start | stop }" ;; esac exit 0 and here is the manifest file <?xml version='1.0'?> <!DOCTYPE service_bundle SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'> <service_bundle type='manifest' name='vnc'> <service name='application/x11vnc' type='service' version='0'> <create_default_instance enabled='true'/> <single_instance/> <dependency name='docusp' grouping='require_all' restart_on='none' type='service'> <service_fmri value='svc:/milestone/multi-user-server:default'/> </dependency> <exec_method name='start' type='method' exec='/lib/svc/method/x11vnc' timeout_seconds='0'> <method_context/> </exec_method> <exec_method name='stop' type='method' exec=':true' timeout_seconds='10'> <method_context/> </exec_method> <stability value='Evolving' /> <property_group name='startd' type='framework'> <propval name='ignore_error' type='astring' value='core,signal'/> </property_group> </service> </service_bundle> and the log file Usage: /lib/svc/method/x11vnc { start | stop } [ Nov 16 19:35:52 Method "start" exited with status 0 ] [ Nov 16 19:35:52 Stopping because all processes in service exited. ] [ Nov 16 19:35:52 Executing stop method (:kill) ] [ Nov 16 19:35:52 Executing start method ("/lib/svc/method/x11vnc") ] Usage: /lib/svc/method/x11vnc { start | stop } [ Nov 16 19:35:52 Method "start" exited with status 0 ] [ Nov 16 19:35:52 Stopping because all processes in service exited. ] [ Nov 16 19:35:52 Executing stop method (:kill) ] [ Nov 16 19:35:52 Executing start method ("/lib/svc/method/x11vnc") ] Usage: /lib/svc/method/x11vnc { start | stop } [ Nov 16 19:35:52 Method "start" exited with status 0 ] [ Nov 16 19:35:52 Stopping because all processes in service exited. ] [ Nov 16 19:35:52 Executing stop method (:kill) ] [ Nov 16 19:35:52 Restarting too quickly, changing state to maintenance ] Any Ideas?
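    The repeated "Usage: ... { start | stop }" lines in the log suggest the method script is being run without its argument: the manifest's exec attribute never passes start or stop. A hedged sketch of the relevant change, using the SMF %m method token (which expands to the method name); re-adding the -bg flag from the commented-out x11vnc line, or giving the service a transient/child duration, would also let the start method exit cleanly:

        <exec_method name='start' type='method'
                     exec='/lib/svc/method/x11vnc %m' timeout_seconds='60'>
            <method_context/>
        </exec_method>
        <exec_method name='stop' type='method'
                     exec='/lib/svc/method/x11vnc %m' timeout_seconds='10'>
            <method_context/>
        </exec_method>

    After editing, re-import the manifest and clear the maintenance state (svccfg import, then svcadm clear followed by svcadm enable) so SMF picks up the new exec strings.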

    Read the article

  • How do I check for the existence of an external file with XSL?

    - by LOlliffe
    I've found a lot of examples that reference Java and C for this, but how do I, or can I, check for the existence of an external file with XSL. First, I realize that this is only a snippet, but it's part of a huge stylesheet, so I'm hoping it's enough to show my issue. <!-- Use this template for Received SMSs --> <xsl:template name="ReceivedSMS"> <!-- Set/Declare "SMSname" variable (local, evaluates per instance) --> <xsl:variable name="SMSname"> <xsl:value-of select=" following-sibling::Name"/> </xsl:variable> <fo:table font-family="Arial Unicode MS" font-size="8pt" text-align="start"> <fo:table-column column-width=".75in"/> <fo:table-column column-width="6.75in"/> <fo:table-body> <fo:table-row> <!-- Cell contains "speakers" icon --> <fo:table-cell display-align="after"> <fo:block text-align="start"> <fo:external-graphic src="../images/{$SMSname}.jpg" content-height="0.6in"/> What I'd like to do, is put in an "if" statement, surronding the {$SMSname}.jpg line. That is: <fo:block text-align="start"> <xsl:if test="exists( the external file {$SMSname}.jpg)"> <fo:external-graphic src="../images/{$SMSname}.jpg" content-height="0.6in"/> </xsl:if> <xsl:if test="not(exists( the external file {$SMSname}.jpg))"> <fo:external-graphic src="../images/unknown.jpg" content-height="0.6in"/> </xsl:if> </fo:block> Because of "grouping", etc., I'm using XSLT 2.0. I hope that this is something that can be done. I hope even more that it's something simple. As always, thanks in advance for any help. LO
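    For what it's worth, plain XSLT 2.0 has no standard function for testing whether an arbitrary binary file such as a JPEG exists; doc-available() only covers XML documents. If the processor in use supports the EXPath File Module, or exposes an equivalent extension function, the test could be sketched roughly as below; the file: namespace, the function name, and how relative paths are resolved all need to be verified against that processor's documentation:

        <!-- declared on xsl:stylesheet: xmlns:file="http://expath.org/ns/file" -->
        <fo:block text-align="start">
            <xsl:choose>
                <xsl:when test="file:exists(concat('../images/', $SMSname, '.jpg'))">
                    <fo:external-graphic src="../images/{$SMSname}.jpg" content-height="0.6in"/>
                </xsl:when>
                <xsl:otherwise>
                    <fo:external-graphic src="../images/unknown.jpg" content-height="0.6in"/>
                </xsl:otherwise>
            </xsl:choose>
        </fo:block>

    If no such extension is available, another route is to pass a list of existing image names into the transform as a stylesheet parameter, or as a small XML document read with doc(), and test membership against that list instead.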

    Read the article

  • How do I connect multiple sortable lists to each other in jQuery UI?

    - by Abs
    I'm new to jQuery, and I'm totally struggling with using jQuery UI's sortable. I'm trying to put together a page to facilitate grouping and ordering of items. My page has a list of groups, and each group contains a list of items. I want to allow users to be able to do the following: 1. Reorder the groups 2. Reorder the items within the groups 3. Move the items between the groups The first two requirements are no problem. I'm able to sort them just fine. The problem comes in with the third requirement. I just can't connect those lists to each other. Some code might help. Here's the markup. <ul id="groupsList" class="groupsList"> <li id="group1" class="group">Group 1 <ul id="groupItems1" class="itemsList"> <li id="item1-1" class="item">Item 1.1</li> <li id="item1-2" class="item">Item 1.2</li> </ul> </li> <li id="group2" class="group">Group 2 <ul id="groupItems2" class="itemsList"> <li id="item2-1" class="item">Item 2.1</li> <li id="item2-2" class="item">Item 2.2</li> </ul> </li> <li id="group3" class="group">Group 3 <ul id="groupItems3" class="itemsList"> <li id="item3-1" class="item">Item 3.1</li> <li id="item3-2" class="item">Item 3.2</li> </ul> </li> </ul> I was able to sort the lists by putting $('#groupsList').sortable({}); and $('.itemsList').sortable({}); in the document ready function. I tried using the connectWith option for sortable to make it work, but I failed spectacularly. What I'd like to do is have the every groupItemsX list connected to every groupItemsX list but itself. How should I do that?
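    A minimal sketch against the markup above: connectWith just needs a selector matching the other item lists, and the shared class already does that (a sortable matching its own selector is harmless, so there is no need to exclude it):

        jQuery(document).ready(function ($) {
            // requirement 1: reorder the groups themselves
            $('#groupsList').sortable();

            // requirements 2 and 3: reorder items within a group
            // and drag them between groups
            $('.itemsList').sortable({
                connectWith: '.itemsList'
            }).disableSelection();
        });

    If the nested drags ever interfere with each other, giving the outer sortable a handle or restricting its items option is a common way to keep group drags and item drags separate.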

    Read the article

  • How does Sentry aggregate errors?

    - by Hugo Rodger-Brown
    I am using Sentry (in a django project), and I'd like to know how I can get the errors to aggregate properly. I am logging certain user actions as errors, so there is no underlying system exception, and am using the culprit attribute to set a friendly error name. The message is templated, and contains a common message ("User 'x' was unable to perform action because 'y'"), but is never exactly the same (different users, different conditions). Sentry clearly uses some set of attributes under the hood to determine whether to aggregate errors as the same exception, but despite having looked through the code, I can't work out how. Can anyone short-cut my having to dig further into the code and tell me what properties I need to set in order to manage aggregation as I would like? [UPDATE 1: event grouping] This line appears in sentry.models.Group: class Group(MessageBase): """ Aggregated message which summarizes a set of Events. """ ... class Meta: unique_together = (('project', 'logger', 'culprit', 'checksum'),) ... Which makes sense - project, logger and culprit I am setting at the moment - the problem is checksum. I will investigate further, however 'checksum' suggests that binary equivalence, which is never going to work - it must be possible to group instances of the same exception, with differenct attributes? [UPDATE 2: event checksums] The event checksum comes from the sentry.manager.get_checksum_from_event method: def get_checksum_from_event(event): for interface in event.interfaces.itervalues(): result = interface.get_hash() if result: hash = hashlib.md5() for r in result: hash.update(to_string(r)) return hash.hexdigest() return hashlib.md5(to_string(event.message)).hexdigest() Next stop - where do the event interfaces come from? [UPDATE 3: event interfaces] I have worked out that interfaces refer to the standard mechanism for describing data passed into sentry events, and that I am using the standard sentry.interfaces.Message and sentry.interfaces.User interfaces. Both of these will contain different data depending on the exception instance - and so a checksum will never match. Is there any way that I can exclude these from the checksum calculation? (Or at least the User interface value, as that has to be different - the Message interface value I could standardise.) [UPDATE 4: solution] Here are the two get_hash functions for the Message and User interfaces respectively: # sentry.interfaces.Message def get_hash(self): return [self.message] # sentry.interfaces.User def get_hash(self): return [] Looking at these two, only the Message.get_hash interface will return a value that is picked up by the get_checksum_for_event method, and so this is the one that will be returned (hashed etc.) The net effect of this is that the the checksum is evaluated on the message alone - which in theory means that I can standardise the message and keep the user definition unique. I've answered my own question here, but hopefully my investigation is of use to others having the same problem. (As an aside, I've also submitted a pull request against the Sentry documentation as part of this ;-)) (Note to anyone using / extending Sentry with custom interfaces - if you want to avoid your interface being use to group exceptions, return an empty list.)

    Read the article

  • Repeat Customers Each Year (Retention)

    - by spazzie
    I've been working on this and I don't think I'm doing it right. |D Our database doesn't keep track of how many customers we retain so we looked for an alternate method. It's outlined in this article. It suggests you have this table to fill in: Year Number of Customers Number of customers Retained in 2009 Percent (%) Retained in 2009 Number of customers Retained in 2010 Percent (%) Retained in 2010 .... 2008 2009 2010 2011 2012 Total The table would go out to 2012 in the headers. I'm just saving space. It tells you to find the total number of customers you had in your starting year. To do this, I used this query since our starting year is 2008: select YEAR(OrderDate) as 'Year', COUNT(distinct(billemail)) as Customers from dbo.tblOrder where OrderDate >= '2008-01-01' and OrderDate <= '2008-12-31' group by YEAR(OrderDate) At the moment we just differentiate our customers by email address. Then you have to search for the same names of customers who purchased again in later years (ours are 2009, 10, 11, and 12). I came up with this. It should find people who purchased in both 2008 and 2009. SELECT YEAR(OrderDate) as 'Year',COUNT(distinct(billemail)) as Customers FROM dbo.tblOrder o with (nolock) WHERE o.BillEmail IN (SELECT DISTINCT o1.BillEmail FROM dbo.tblOrder o1 with (nolock) WHERE o1.OrderDate BETWEEN '2008-1-1' AND '2009-1-1') AND o.BillEmail IN (SELECT DISTINCT o2.BillEmail FROM dbo.tblOrder o2 with (nolock) WHERE o2.OrderDate BETWEEN '2009-1-1' AND '2010-1-1') --AND o.OrderDate BETWEEN '2008-1-1' AND '2013-1-1' AND o.BillEmail NOT LIKE '%@halloweencostumes.com' AND o.BillEmail NOT LIKE '' GROUP BY YEAR(OrderDate) So I'm just finding the customers who purchased in both those years. And then I'm doing an independent query to find those who purchased in 2008 and 2010, then 08 and 11, and then 08 and 12. This one finds 2008 and 2010 purchasers: SELECT YEAR(OrderDate) as 'Year',COUNT(distinct(billemail)) as Customers FROM dbo.tblOrder o with (nolock) WHERE o.BillEmail IN (SELECT DISTINCT o1.BillEmail FROM dbo.tblOrder o1 with (nolock) WHERE o1.OrderDate BETWEEN '2008-1-1' AND '2009-1-1') AND o.BillEmail IN (SELECT DISTINCT o2.BillEmail FROM dbo.tblOrder o2 with (nolock) WHERE o2.OrderDate BETWEEN '2010-1-1' AND '2011-1-1') --AND o.OrderDate BETWEEN '2008-1-1' AND '2013-1-1' AND o.BillEmail NOT LIKE '%@halloweencostumes.com' AND o.BillEmail NOT LIKE '' GROUP BY YEAR(OrderDate) So you see I have a different query for each year comparison. They're all unrelated. So in the end I'm just finding people who bought in 2008 and 2009, and then a potentially different group that bought in 2008 and 2010, and so on. For this to be accurate, do I have to use the same grouping of 2008 buyers each time? So they bought in 2009 and 2010 and 2011, and 2012? This is where I'm worried and not sure how to proceed or even find such data. Any advice would be appreciated! Thanks!
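    If the 2008 buyers are meant to be the fixed cohort (which is how retention tables of this shape usually read, so each later column is measured against the same 2008 base), the per-year comparisons can be collapsed into one pass. A sketch in the same T-SQL style as the queries above, reusing the same exclusion filters and column names:

        WITH cohort2008 AS (
            SELECT DISTINCT BillEmail
            FROM dbo.tblOrder
            WHERE OrderDate >= '2008-01-01' AND OrderDate < '2009-01-01'
              AND BillEmail NOT LIKE '%@halloweencostumes.com'
              AND BillEmail <> ''
        ),
        laterYears AS (
            SELECT DISTINCT YEAR(OrderDate) AS OrderYear, BillEmail
            FROM dbo.tblOrder
            WHERE OrderDate >= '2009-01-01' AND OrderDate < '2013-01-01'
              AND BillEmail NOT LIKE '%@halloweencostumes.com'
              AND BillEmail <> ''
        )
        SELECT l.OrderYear,
               COUNT(*) AS RetainedFrom2008
        FROM laterYears AS l
        INNER JOIN cohort2008 AS c ON c.BillEmail = l.BillEmail
        GROUP BY l.OrderYear
        ORDER BY l.OrderYear;

    Each row gives the "retained in 20xx" count for the 2008 acquisition row of the table; dividing by the 2008 customer total from the first query gives the percentage columns.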

    Read the article

< Previous Page | 23 24 25 26 27 28 29 30 31  | Next Page >