Search Results

Search found 4373 results on 175 pages for 'historical debugging'.

  • Tuning Red Gate: #1 of Many

    - by Grant Fritchey
    Everyone runs into performance issues at some point. Same thing goes for Red Gate software. Some of our internal systems were running into some serious bottlenecks. It just so happens that we have this nice little SQL Server monitoring tool. What if I were to, oh, I don't know, use the monitoring tool to identify the bottlenecks, figure out the causes, apply a fix (where possible) and then start the whole thing all over again? Just a crazy thought. OK, I was asked to. This is my first time looking through these servers, so here's how I'd go about using SQL Monitor to get a quick health check, sort of like checking the vitals on a patient.
    First time opening up our internal SQL Monitor instance and I was greeted with this: Oh my. Maybe I need to get our internal guys to read my blog. Anyway, I know that there are two servers where most of the load is. I'll drill down on the first. I'm selecting the server, not the instance, by clicking on the server name. That opens up the Global Overview page for the server. The information here is much more applicable to the "oh my gosh, I have a problem now" type of monitoring. But, looking at this, I am seeing something immediately. There are four (4) drives on the system. The C:\ has an average read time of 16.9ms, more than double the others. Is that a problem? Not sure, but it's something I'll look at. Its write time is higher too.
    I'll keep drilling down, first, to the unclosed alerts on the server. Now things get interesting. SQL Monitor has a number of different types of alerts, some related to error states, others to service status, and then some related to performance. Guess what I'm seeing a bunch of right here: long running queries and long job durations. If you check the dates, they're all recent, within the last 24 hours. If they had just been old, uncleared alerts, I wouldn't be that concerned. But with all these, all performance related, and all in the last 24 hours, yeah, I'm concerned. At this point, I could just start responding to the alerts. If I click on one of the Long-running query alerts, I'll get all kinds of cool data that can help me determine why the query ran long. But, I'm not in a reactive mode here yet. I'm still gathering data, trying to understand how the server works. I have the information that we're generating a lot of performance alerts; let's sock that away for the moment.
    Instead, I'm going to back up and look at the Global Overview for the SQL Instance. It shows all the databases on the server and their status. Then it shows a number of basic metrics about the SQL Server instance, again for that "what's happening now" view of things. Then, down at the bottom, there is the Top 10 expensive queries list: This is great stuff. And no, not because I can see the top queries for the last 5 minutes, but because I can adjust that out to 3 days. Now I can see where some serious pain is occurring over the last few days. Databases have been blocked out to protect the guilty.
    That's it for the moment. I have enough knowledge of what's going on in the system that I can start to try to figure out why the system is running slowly. But, I want to look a little more at some historical data, to understand better how this server is behaving. More next time.
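    (If you want to sanity-check drive latency numbers like that 16.9ms outside of SQL Monitor, the I/O DMVs give a rough cross-check. A minimal C# sketch, assuming a placeholder connection string; note the DMV figures are cumulative since the instance last started, so they won't line up exactly with a point-in-time view.)

        using System;
        using System.Data.SqlClient;

        class DriveLatencyCheck
        {
            static void Main()
            {
                // Placeholder connection string -- point it at the instance you are checking.
                const string connStr = "Server=YOURSERVER;Integrated Security=true";

                // Average read/write stall per I/O, grouped by drive letter (cumulative since startup).
                const string query = @"
                    SELECT LEFT(mf.physical_name, 1) AS drive,
                           SUM(vfs.io_stall_read_ms)  * 1.0 / NULLIF(SUM(vfs.num_of_reads), 0)  AS avg_read_ms,
                           SUM(vfs.io_stall_write_ms) * 1.0 / NULLIF(SUM(vfs.num_of_writes), 0) AS avg_write_ms
                    FROM sys.dm_io_virtual_file_stats(NULL, NULL) vfs
                    JOIN sys.master_files mf
                      ON vfs.database_id = mf.database_id AND vfs.file_id = mf.file_id
                    GROUP BY LEFT(mf.physical_name, 1)";

                using (var conn = new SqlConnection(connStr))
                using (var cmd = new SqlCommand(query, conn))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            Console.WriteLine("{0}:\\  avg read {1} ms, avg write {2} ms",
                                reader[0], reader[1], reader[2]);
                        }
                    }
                }
            }
        }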

    Read the article

  • Windows Azure Recipe: Social Web / Big Media

    - by Clint Edmonson
    With the rise of social media there’s been an explosion of special interest media web sites on the web. From athletics to board games to funny animal behaviors, you can bet there’s a group of people somewhere on the web talking about it. Social media sites allow us to interact, share experiences, and bond with like minded enthusiasts around the globe. And through the power of software, we can follow trends in these unique domains in real time.
    Drivers: Reach, Scalability, Media hosting, Global distribution
    Solution: Here’s a sketch of how a social media application might be built out on Windows Azure.
    Ingredients:
      - Traffic Manager (optional) – can be used to provide hosting and load balancing across different instances and/or data centers. Perfect if the solution needs to be delivered to different cultures or regions around the world.
      - Access Control – this service is essential to managing user identity. It’s backed by a full blown implementation of Active Directory and allows the definition and management of users, groups, and roles. A pre-built ASP.NET membership provider is included in the training kit to leverage this capability, but it’s also flexible enough to be combined with external identity providers including Windows LiveID, Google, Yahoo!, and Facebook. The provider model has extensibility points to hook into other identity providers as well.
      - Web Role – hosts the core of the web application and presents a central social hub to users.
      - Database – used to store core operational, functional, and workflow data for the solution’s web services.
      - Caching (optional) – as a web site's traffic grows, caching can be leveraged to keep frequently used read-only, user specific, and application resource data in a high-speed distributed in-memory cache for faster response times and ultimately higher scalability without spinning up more web and worker roles. It includes a token based security model that works alongside the Access Control service.
      - Tables (optional) – for semi-structured data streams that don’t need relational integrity, such as conversations, comments, or activity streams, tables provide a faster and more flexible way to store this kind of historical data.
      - Blobs (optional) – users may be creating or uploading large volumes of heterogeneous data such as documents or rich media. Blob storage provides a scalable, resilient way to store terabytes of user data. The storage facilities can also integrate with the Access Control service to ensure users’ data is delivered securely.
      - Content Delivery Network (CDN) (optional) – for sites that serve users around the globe, the CDN is an extension to blob storage that, when enabled, will automatically cache frequently accessed blobs and static site content at edge data centers around the world. The data can be delivered statically or streamed in the case of rich media content.
    Training: These links point to online Windows Azure training labs and resources where you can learn more about the individual ingredients described above. (Note: The entire Windows Azure Training Kit can also be downloaded for offline use.)
      - Windows Azure (16 labs) – Windows Azure is an internet-scale cloud computing and services platform hosted in Microsoft data centers, which provides an operating system and a set of developer services which can be used individually or together. It gives developers the choice to build web applications; applications running on connected devices, PCs, or servers; or hybrid solutions offering the best of both worlds. New or enhanced applications can be built using existing skills with the Visual Studio development environment and the .NET Framework. With its standards-based and interoperable approach, the services platform supports multiple internet protocols, including HTTP, REST, SOAP, and plain XML.
      - SQL Azure (7 labs) – Microsoft SQL Azure delivers on the Microsoft Data Platform vision of extending the SQL Server capabilities to the cloud as web-based services, enabling you to store structured, semi-structured, and unstructured data.
      - Windows Azure Services (9 labs) – As applications collaborate across organizational boundaries, ensuring secure transactions across disparate security domains is crucial but difficult to implement. Windows Azure Services provides hosted authentication and access control using powerful, secure, standards-based infrastructure.
    See my Windows Azure Resource Guide for more guidance on how to get started, including links to web portals, training kits, samples, and blogs related to Windows Azure.
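    (As a rough illustration of the Blobs ingredient only: a minimal upload sketch using the Microsoft.WindowsAzure.StorageClient library from the SDKs of that era. The account credentials, container and file names below are placeholders, not anything from the recipe itself.)

        using System.IO;
        using Microsoft.WindowsAzure;
        using Microsoft.WindowsAzure.StorageClient;

        class MediaUpload
        {
            static void Main()
            {
                // Placeholder credentials -- swap in your real storage account settings.
                var account = CloudStorageAccount.Parse(
                    "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=mykey");

                CloudBlobClient blobClient = account.CreateCloudBlobClient();
                CloudBlobContainer container = blobClient.GetContainerReference("usermedia");
                container.CreateIfNotExist();   // no-op if the container already exists (CreateIfNotExists in later SDKs)

                // Store a user's uploaded video as a block blob.
                CloudBlockBlob blob = container.GetBlockBlobReference("videos/holiday.mp4");
                using (FileStream file = File.OpenRead(@"C:\uploads\holiday.mp4"))
                {
                    blob.UploadFromStream(file);
                }
            }
        }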

    Read the article

  • Modernizr Rocks HTML5

    - by Laila
    HTML5 is a moving target. At the moment, we don't know what will be in future versions. In most circumstances, this really matters to the developer. When you're using Adobe Air, you can be reasonably sure what works, what is there, and what isn't, since you have a version of the browser built in. With Metro, you can assume that you're going to be using at least IE 10. If, however, you are using HTML5 in a web application, then you are going to rely heavily on Feature Detection. Feature Detection is a collection of techniques that tell you, via JavaScript, whether the current browser has a given feature natively implemented or not.
    Feature Detection isn't just there for the esoteric stuff such as geolocation, progress bars, <canvas> support, the new <input> types, audio, video, web workers or storage, but is required even for semantic markup, since old browsers make a pig's ear out of rendering it. Feature detection can't rely just on reading the browser version and inferring from that what works. Instead, you must use JavaScript to check that an HTML5 feature is there before using it. The problem with relying on the user-agent is that it takes a lot of historical data to work out what version does what, and, anyway, the user-agent can be, and sometimes is, spoofed.
    The open-source library Modernizr is just about the most essential JavaScript library for anyone using HTML5, because it provides APIs to test for most of the CSS3 and HTML5 features before you use them, and is intelligent enough to alter semantic markup into 'legacy' markup using shims on page-load for old browsers. It also allows you to check what video codecs are installed for playing video. It also provides media queries and conditional resource-loading (formerly YepNope.js). Generally, Modernizr gives you the choice of what you do about browsers that don't support the feature that you want. Often, the best choice is graceful degradation, but the resource-loading feature allows you to dynamically load JavaScript shims, called 'polyfills', to replace the standard API for missing or defective HTML5 functionality. As the Modernizr site says, 'Yes, not only can you use HTML5 today, but you can use it in the past, too!'
    The evolutionary progress of HTML5 requires a more defensive style of JavaScript programming where the programmer adopts a mindset of fearing the worst (IE 6) rather than assuming the best, whilst exploiting as many of the new HTML features as possible for the requirements of the site or HTML application. Why would anyone want the distraction of developing their own techniques to do this when Modernizr exists to do this for you?
    Laila

    Read the article

  • Tuning Default WorkManager - Advantages and Disadvantages

    - by Murali Veligeti
    Before discussing tuning the Default WorkManager, let's have a brief introduction to what the Default WorkManager is.
    Before the WebLogic Server 9.0 release, we had the concept of Execute Queues. In WebLogic Server (before WLS 9.0), processing was performed in multiple execute queues. Different classes of work were executed in different queues, based on priority and ordering requirements, and to avoid deadlocks. In addition to the default execute queue, weblogic.kernel.default, there were pre-configured queues dedicated to internal administrative traffic, such as weblogic.admin.HTTP and weblogic.admin.RMI. Users could control thread usage by altering the number of threads in the default queue, or configure custom execute queues to ensure that particular applications had access to a fixed number of execute threads, regardless of overall system load.
    From the WLS 9.0 release onwards, WebLogic Server uses a single thread pool (called the Default WorkManager), in which all types of work are executed. WebLogic Server prioritizes work based on rules you define and on run-time metrics, including the actual time it takes to execute a request and the rate at which requests are entering and leaving the pool. The common thread pool changes its size automatically to maximize throughput. The queue monitors throughput over time and, based on history, determines whether to adjust the thread count. For example, if historical throughput statistics indicate that a higher thread count increased throughput, WebLogic increases the thread count. Similarly, if statistics indicate that fewer threads did not reduce throughput, WebLogic decreases the thread count. This new strategy makes it easier for administrators to allocate processing resources and manage performance, avoiding the effort and complexity involved in configuring, monitoring, and tuning custom execute queues.
    The Default WorkManager is used to handle thread management and perform self-tuning. This Work Manager is used by an application when no other Work Managers are specified in the application's deployment descriptors. In many situations, the default Work Manager may be sufficient for most application requirements. WebLogic Server's thread-handling algorithms assign each application its own fair share by default. Applications are given equal priority for threads and are prevented from monopolizing them.
    The default work manager, as its name tells, is the work manager defined by default. Thus, all applications deployed on WLS will use it. But sometimes, when your application is already in production, it's obvious you can't take your EAR/WAR, update the deployment descriptor(s) and redeploy it. The default work manager belongs to a thread pool, and as the initial thread pool comes with only five threads, that's not much. If your application has to face a large number of hits, you may want to start with more than that. Well, that's quite easy. You have two options to do so.
    1) Modify the config.xml. Just add the following line(s) in your server definition:
        <server>
          <name>AdminServer</name>
          <self-tuning-thread-pool-size-min>100</self-tuning-thread-pool-size-min>
          <self-tuning-thread-pool-size-max>200</self-tuning-thread-pool-size-max>
          [...]
        </server>
    2) Add some JVM parameters. Add the following system properties in setDomainEnv.sh/setDomainEnv.cmd or startWebLogic.sh/startWebLogic.cmd:
        -Dweblogic.threadpool.MinPoolSize=100 -Dweblogic.threadpool.MaxPoolSize=100
    Reboot WLS and see that the option has been taken into account.
    Disadvantage: So far so good. But there is a disadvantage in tuning the Default WorkManager. Internally, WebLogic Server has many work managers configured for different types of work. If we run out of threads in the self-tuning pool (because of the system property -Dweblogic.threadpool.MaxPoolSize) due to being undersized, then important work that WLS might need to do could be starved. So limiting the self-tuning pool not only limits the default WorkManager, it also limits all the other internal WorkManagers which WLS uses. So the best alternative is to override the default WorkManager: that means creating a WorkManager for the application and assigning that WorkManager to the application instead of tuning the Default WorkManager.

    Read the article

  • HOWTO Turn off SPARC T4 or Intel AES-NI crypto acceleration.

    - by darrenm
    Since we released hardware crypto acceleration for SPARC T4 and Intel AES-NI support, we have had a common question come up: 'How do I test without the hardware crypto acceleration?'. Initially this came up just for development use, so developers can do unit testing on a machine that has hardware offload but still cover the code paths for a machine that doesn't (our integration and release testing would run on all supported types of hardware anyway). I've also seen it asked in a customer context too, so that we can show that there is a performance gain from the hardware crypto acceleration (not just the fact that the SPARC T4 is a much faster processor than the T3) and measure what it is for their application.
    With SPARC T2/T3 we could easily disable the hardware crypto offload by running 'cryptoadm disable provider=n2cp/0'. We can't do that with SPARC T4 or with Intel AES-NI, because in both of those classes of processor the encryption doesn't require a device driver; instead it is implemented as unprivileged, userland-callable instructions. It turns out there is a way to do this by using features of the Solaris runtime loader (ld.so.1).
    First I need to expose a little bit of implementation detail about how the Solaris Cryptographic Framework is implemented in Solaris 11. One of the new Solaris 11 features of the linker/loader is the ability to have a single ELF object that has multiple different implementations of the same functions that are selected at runtime based on the capabilities of the machine. The alternative to this is having the application coded to call getisax() and make the choice itself. We use this functionality of the linker/loader when we build the userland libraries for the Solaris Cryptographic Framework (specifically libmd.so and libsoftcrypto.so, the latter unfortunately misnamed for historical reasons).
    The Solaris linker/loader allows control of a lot of its functionality via environment variables; we can use that to control the version of the cryptographic functions we run. To do this we simply export the LD_HWCAP environment variable with values that tell ld.so.1 not to select the HWCAP section matching certain features, even if isainfo says they are present. For SPARC T4 that would be:
        export LD_HWCAP="-aes -des -md5 -sha256 -sha512 -mont -mpul"
    and for Intel systems with AES-NI support:
        export LD_HWCAP="-aes"
    This will work for consumers of the Solaris Cryptographic Framework that use the Solaris PKCS#11 libraries or use libmd.so interfaces directly. It also works for the Oracle DB and Java JCE. However, it does not work for the default enabled OpenSSL "t4" or "aes-ni" engines (unfortunately), because they make explicit calls to getisax() themselves rather than using multiple ELF cap sections.
    However, we can still use OpenSSL to demonstrate this by explicitly selecting the "pkcs11" engine, using only a single process and thread.
        $ openssl speed -engine pkcs11 -evp aes-128-cbc
        ...
        type             16 bytes     64 bytes     256 bytes    1024 bytes   8192 bytes
        aes-128-cbc      54170.81k    187416.00k   489725.70k   805445.63k   1018880.00k
        $ LD_HWCAP="-aes" openssl speed -engine pkcs11 -evp aes-128-cbc
        ...
        type             16 bytes     64 bytes     256 bytes    1024 bytes   8192 bytes
        aes-128-cbc      29376.37k    58328.13k    79031.55k    86738.26k    89191.77k
    We can clearly see the difference this makes in the case where AES offload to the SPARC T4 was disabled. The "t4" engine is faster than the pkcs11 one because there is less overhead (again on a SPARC T4-1 using only a single process/thread - using -multi you will get even bigger numbers).
        $ openssl speed -evp aes-128-cbc
        ...
        type             16 bytes     64 bytes     256 bytes    1024 bytes   8192 bytes
        aes-128-cbc      85526.61k    89298.84k    91970.30k    92662.78k    92842.67k
    Yet another cool feature of the Solaris linker/loader; thanks Rod and Ali. Note that the openssl speed output above is not intended to show the actual performance of any particular benchmark, just that there is a significant improvement from using hardware acceleration on SPARC T4. For cryptographic performance benchmarks see the http://blogs.oracle.com/BestPerf/ postings.

    Read the article

  • Screen Aspect Ratio

    - by Bill Evjen
    Jeffrey Dean, Pixar. Aspect ratio is very important to home video.
    - What is aspect ratio? The ratio of the width to the height.
      - 2.35:1 - the image is 2.35 times as wide as it is high. Pixar uses this for half of our movies. This is called a widescreen image.
      - When modified to fit your television screen, they cut this to fit the box of your screen. When a comparison is made, huge chunks of picture are missing, and it is harder to find what is going on when these pieces are missing. The whole is greater than the pieces themselves. If you are missing pieces, you are missing the movie. The soul and the mood are in the film shots. Cutting it to fit a screen, you are losing 30% of the movie.
    - Why different aspect ratios?
      - Film before the 1950s: 1.33:1, the Academy Standard. There were all aspects of images, though; there was no standard. Thomas Edison developed projecting images onto a wall/screen; he didn't patent it as he saw no value in it.
      - Then 1.37:1 came about to add a strip of sound. This is the same size as 35mm film.
      - Around 1952, TV comes along. NTSC television followed the Academy Standard (4x3). Once TV came out, movie theater attendance plummeted, so film brought forth color to combat this, also early 3D, and widescreen was brought forth: CinemaScope. Studios at the time made movies bigger and bigger. There was a Napoleon movie that was actually 4x1 ... really wide.
      - 1.85:1 Academy Flat and 2.35:1 Anamorphic Scope (aka Panavision/CinemaScope). Almost all movies are made in these two aspect ratios. Pixar has done half in one and half in the other. Why choose one over the other? Artist choice; it is part of the story the director wants to tell.
    - Can we preserve the story outside of the theaters? TVs before 1998 were very square; now TVs are very wide.
    - Historical options:
      - Toy Story was released as it was, and people cut it in a way that wasn't liked by the studio.
      - Pan and Scan is another option: cut and then scan left or right depending on where the action is.
      - Frame height: Pixar can go back and animate more picture to account for the bottom/top bars. You end up with more sky and more ground, the characters seem to get lost in the picture, and you lose what the director originally intended.
      - Re-staging: for animated movies, you can move characters around - restage the scene. It is a new, completely different version of the film. This is the best possible option that Pixar came up with. They have stopped doing this really, as the demand has pretty much dropped off.
    - Why not 1.33 today? There has been an evolution of taste and demands.
      - VHS is a linear item. The focus is about portability and not about quality. Most was pan and scan and the quality was so bad - but people didn't notice.
      - DVD was introduced in 1996. You could have more content - two versions of the film, the widescreen version and the 1.33 version. People realized that they are seeing more of the movie with the widescreen.
      - High-def televisions (16x9 monitors) were introduced in 2005. Blu-ray Disc was introduced in 2006; this is all widescreen. You cannot find a square TV anymore; TVs are roughly 1.85:1 aspect ratio.
      - There is a change in demand: users are used to black bars and are used to widescreen. Users are educated now.
    - What's next for in-flight entertainment? High-def IFE, personal electronic devices, 3D in-flight.

    Read the article

  • New Analytic settings for the new code

    - by Steve Tunstall
    If you have upgraded to the new 2011.1.3.0 code, you may find some very useful settings for the Analytics. If you didn't already know, the analytic datasets have the potential to fill up your OS hard drives. The more datasets you use and create, the faster this can happen. Since they take a measurement every second, forever, some of these metrics can grow to multiple GB in size in a matter of weeks. The traditional 'fix' was that you had to go into Analytics -> Datasets about once a month and clean up the largest datasets. You did this by deleting them. Ouch. Now you lost all of that historical data that you might have wanted to check out many months from now. Or, you had to export each metric individually to a CSV file first. Not very easy or fun. You could also suspend a dataset, and have it not collect data at all. Well, that fixed the problem, didn't it? Of course, you now had no data to go look at. Hmmmm....
    All of this is no longer a concern. Check out the new Settings tab under Analytics... Now, I can tell the ZFSSA to keep every second of data for, say, 2 weeks, and then average those 60 seconds of each minute into a single 'minute' value. I can go even further and ask it to average those 60 minutes of data into a single 'hour' value. This allows me to effectively shrink my older datasets by a factor of 1/3600!!! Very cool. I can now allow my datasets to go forever, and really never have to worry about them filling up my OS drives.
    That's great going forward, but what about those huge datasets you already have? No problem. Another new feature in 2011.1.3.0 is the ability to shrink the older datasets in the same way. Check this out. I have here a dataset called "Disk: I/O opps per second" that is about 6.32M on disk. (You need not worry so much about the "In Core" value, as that is in RAM, and it fluctuates all the time. Once you stop viewing a particular metric, you will see that shrink over time; just relax.) When one clicks on the trash can icon to the right of the dataset, it used to delete the whole thing, and you would have to re-create it from scratch to get the data collecting again. Now, however, it gives you this prompt: As you can see, this allows you to once again shrink the dataset by averaging the second data into minutes or hours. Here is my new dataset size after I do this. So it shrank from 6.32MB down to 2.87MB, but I can still see my metrics going back to the time I began the dataset.
    Now, you do understand that once you do this, as you look back in time to the minute or hour data metrics, you are going to see much larger time values, right? You will need to decide what size of granularity you can live with, and for how long. Check this out. Here is my Disk: Percent utilized from 5-21-2012 2:42 pm to 4:22 pm: After I went through the delete process to change everything older than 1 week to "Minutes", the same date and time looks like this: Just understand what this will do and how you want to use it. Right now, I'm thinking of keeping the last 6 weeks of data as "Seconds", then the last 3 months as "Minutes", and then "Hours" forever after that. I'll check back in six months and see how the sizes look.
    Steve
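    (Conceptually, the averaging the appliance does for older data is nothing more exotic than this. A toy C# sketch of second-to-minute downsampling with made-up sample data, not anything taken from the ZFSSA itself.)

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Downsample
        {
            static void Main()
            {
                // Fake per-second samples; the appliance would be reading real metric history.
                var perSecond = Enumerable.Range(0, 180)
                    .Select(i => new KeyValuePair<DateTime, double>(
                        new DateTime(2012, 5, 21, 14, 42, 0).AddSeconds(i), i % 10));

                // Collapse per-second samples into per-minute averages -- the same idea
                // the ZFSSA applies to data older than the retention window you choose.
                var perMinute = perSecond
                    .GroupBy(s => new DateTime(s.Key.Year, s.Key.Month, s.Key.Day,
                                               s.Key.Hour, s.Key.Minute, 0))
                    .Select(g => new { Minute = g.Key, Average = g.Average(s => s.Value) });

                foreach (var m in perMinute)
                    Console.WriteLine("{0:HH:mm}  avg = {1:F2}", m.Minute, m.Average);
            }
        }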

    Read the article

  • Blink-Data vs Instinct?

    - by Samantha.Y. Ma
    In his landmark bestseller Blink, well-known author and journalist Malcolm Gladwell explores how human beings every day make seemingly instantaneous choices --in the blink of an eye-- and how we “think without thinking.” These situations actually aren’t as simple as they seem, he postulates; and throughout the book, Gladwell seeks answers to questions such as:
    1. What makes some people good at thinking on their feet and making quick spontaneous decisions?
    2. Why do some people follow their instincts and win, while others consistently seem to stumble into error?
    3. Why are some of the best decisions often those that are difficult to explain to others?
    In Blink, Gladwell introduces us to the psychologist who has learned to predict whether a marriage will last, based on a few minutes of observing a couple; the tennis coach who knows when a player will double-fault before the racket even makes contact with the ball; the antiquities experts who recognize a fake at a glance. Ultimately, Blink reveals that great decision makers aren't those who spend the most time deliberating or analyzing information, but those who focus on key factors among an overwhelming number of variables-- i.e., those who have perfected the art of "thin-slicing.”
    In Data vs. Instinct: Perfecting Global Sales Performance, a new report sponsored by Oracle, the Economist Intelligence Unit (EIU) explores the roles data and instinct play in decision-making by sales managers and discusses how sales executives can increase sales performance through more effective territory planning and incentive/compensation strategies.
    If you are a sales executive, ask yourself this: Do you rely on knowledge (data) when you plan out your sales strategy? If you rely on data, how do you ensure that your data sources are reliable, up-to-date, and complete? With the emergence of social media and the proliferation of both structured and unstructured data, how do you know that you are applying your information/data correctly and in context?
    Three key findings in the report are:
    •    Six out of ten executives say they rely more on data than instinct to drive decisions.
    •    Nearly one half (48 percent) of incentive compensation plans do not achieve the desired results.
    •    Senior sales executives rely more on current and historical data than on forecast data.
    Strikingly similar to what Gladwell concludes in Blink, the report’s authors succinctly sum up their findings: "The best outcome is a combination of timely information, insightful predictions, and support data." Applying this insight is crucial to creating a sound sales plan that drives alignment and results. In the area of sales performance management, “territory programs and incentive compensation continue to present particularly complex challenges in an increasingly globalized market," say the report’s authors. "It behooves companies to get a better handle on translating that data into actionable and effective plans." To help solve this challenge, Oracle Fusion CRM integrates forecasting, quotas, compensation, and territories into a single system. For example, Oracle Fusion CRM provides a natural integration between territories, which define the sales targets (e.g., collection of accounts) for the sales force, and quotas, which quantify the sales targets. In fact, territory hierarchy is a core analytic dimension to slice and dice sales results, using sales analytics and alerts to help you identify where problems are occurring.
    Start tapping into both data and instinct effectively today with Oracle Fusion CRM. Here is a short video to provide you with a snapshot of how it can help you optimize your sales performance.

    Read the article

  • Controlling soft errors and false alarms in SSIS

    - by Jim Giercyk
    If you are like me, you dread the 3AM wake-up call.  I would say that the majority of the pages I get are false alarms.  The alerts that require action often require me to open an SSIS package, see where the trouble is and try to identify the offending data.  That can be very time-consuming and can take quite a chunk out of my beauty sleep.  For those reasons, I have developed a simple error handling scenario for SSIS which allows me to rest a little easier.  Let me first say, this is a high level discussion; getting into the nuts and bolts of creating each shape is outside the scope of this document, but if you have an average understanding of SSIS, you should have no problem following along. In the Data Flow above you can see that there is a caution triangle.  For the purpose of this discussion I am creating a truncation error to demonstrate the process, so this is to be expected.  The first thing we need to do is to redirect the error output.  Double-clicking on the Query shape presents us with the properties window for the input.  Simply set the columns that you want to redirect to Redirect Row in the dropdown box and hit Apply. Without going into a dissertation on error handling, I will just note that you can decide which errors you want to redirect on Error and on Truncation.  Therefore, to override this process for a column or condition, simply do not redirect that column or condition. The next thing we want to do is to add some information about the error; specifically, the name of the package which encountered the error and which step in the package wrote the record to the error table.  REMEMBER: If you redirect the error output, your package will not fail, so you will not know where the error record was created without some additional information.    I added 3 columns to my error record; Severity, Package Name and Step Name.  Severity is just a free-form column that you can use to note whether an error is fatal, whether the package is part of a test job and should be ignored, etc.  Package Name and Step Name are system variables. In my package I have created a truncation situation, where the firstname column is 50 characters in the input, but only 4 characters in the output.  Some records will pass without truncation, others will be sent to the error output.  However, the package will not fail. We can see that of the 14 input rows, 8 were redirected to the error table. This information can be used by another step or another scheduled process or triggered to determine whether an error should be sent.  It can also be used as a historical record of the errors that are encountered over time.  There are other system variables that might make more sense in your infrastructure, so try different things.  Date and time seem like something you would want in your output for example.  In summary, we have redirected the error output from an input, added derived columns with information about the errors, and inserted the information and the offending data into an error table.  The error table information can be used by another step or process to determine, based on the error information, what level alert must be sent.  This will eliminate false alarms, and give you a head start when a genuine error occurs.
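    (To make that follow-up step concrete, a small consumer of the error table might look something like this. The connection string, table name and the LoggedAt timestamp column are placeholders - the post only defines Severity, Package Name and Step Name - and the paging/logging lines are stand-ins for whatever notification mechanism you actually use.)

        using System;
        using System.Data.SqlClient;

        class ErrorTableMonitor
        {
            static void Main()
            {
                // Placeholder connection string and table name -- adjust to your environment.
                const string connStr = "Server=ETLSERVER;Database=Staging;Integrated Security=true";
                const string query =
                    @"SELECT Severity, PackageName, StepName, COUNT(*) AS ErrorCount
                      FROM dbo.LoadErrors
                      WHERE LoggedAt >= DATEADD(hour, -1, GETDATE())
                      GROUP BY Severity, PackageName, StepName";

                using (var conn = new SqlConnection(connStr))
                using (var cmd = new SqlCommand(query, conn))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            string severity = reader.GetString(0);
                            string message = string.Format("{0}/{1}: {2} error rows",
                                reader.GetString(1), reader.GetString(2), reader.GetInt32(3));

                            // Only page someone for genuine failures; everything else waits for morning.
                            if (severity == "FATAL")
                                Console.WriteLine("PAGE ON-CALL: " + message);
                            else
                                Console.WriteLine("Log for review: " + message);
                        }
                    }
                }
            }
        }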

    Read the article

  • Internet of Things Becoming Reality

    - by kristin.jellison
    The Internet of Things is not just on the radar—it’s becoming a reality. A globally connected continuum of devices and objects will unleash untold possibilities for businesses and the people they touch. But the “things” are only a small part of a much larger, integrated architecture. A great example of this comes from the healthcare industry. Imagine an expectant mother who needs to watch her blood pressure. She lives in a mountain village 100 miles away from medical attention. Luckily, she can use a small “wearable” device to monitor her status and wirelessly transmit the information to a healthcare hub in her village. Now, say the healthcare hub identifies that the expectant mother’s blood pressure is dangerously high. It sends a real-time alert to the patient’s wearable device, advising her to contact her doctor. It also pushes an alert with the patient’s historical data to the doctor’s tablet PC. He inserts a smart security card into the tablet to verify his identity. This ensures that only the right people have access to the patient’s data. Then, comparing the new data with the patient’s medical history, the doctor decides she needs urgent medical attention. GPS tracking devices on ambulances in the field identify and dispatch the closest one available. An alert also goes to the closest hospital with the necessary facilities. It sends real-time information on her condition directly from the ambulance. So when she arrives, they already have a treatment plan in place to ensure she gets the right care. The Internet of Things makes a huge difference for the patient. She receives personalized and responsive healthcare. But this technology also helps the businesses involved. The healthcare provider achieves a competitive advantage in its services. The hospital benefits from cost savings through more accurate treatment and better application of services. All of this, in turn, translates into savings on insurance claims. This is an ideal scenario for the Internet of Things—when all the devices integrate easily and when the relevant organizations have all the right systems in place. But in reality, that can be difficult to achieve. Core design principles are required to make the whole system work. Open standards allow these systems to talk to each other. Integrated security protects personal, financial, commercial and regulatory information. A reliable and highly available systems infrastructure is necessary to keep these systems running 24/7. If this system were just made up of separate components, it would be prohibitively complex and expensive for almost any organization. The solution is integration, and Oracle is leading the way. We’re developing converged solutions, not just from device to datacenter, but across devices, utilizing the Java platform, and through data acquisition and management, integration, analytics, security and decision-making. The Internet of Things (IoT) requires the predictable action and interaction of a potentially endless number of components. It’s in that convergence that the true value of the Internet of Things emerges. Partners who take the comprehensive view and choose to engage with the Internet of Things as a fully integrated platform stand to gain the most from the Internet of Things’ many opportunities. To discover what else Oracle is doing to connect the world, read about Oracle’s Internet of Things Platform. Learn how you can get involved as a partner by checking out the Oracle Java Knowledge Zone. Best regards, David Hicks

    Read the article

  • Should I manage authentication on my own if the alternative is very low in usability and I am already managing roles?

    - by rumtscho
    As a small in-house dev department, we only have experience with developing applications for our intranet. We use the existing Active Directory for user account management. It contains the accounts of all company employees and many (but not all) of the business partners we have a cooperation with. Now, the top management wants a technology exchange application, and I am the lead dev on the new project. Basically, it is a database containing our know-how, with a web frontend. Our employees, our cooperating business partners, and people who wish to become our cooperating business partners should have access to it and see what technologies we have, so they can trade for them with the department which owns them. The technologies are not patented, but very valuable to competitors, so the department bosses are paranoid about somebody unauthorized gaining access to their technology description. This constraint necessitates a nightmarishly complicated multi-dimensional RBAC-hybrid model. As the Active Directory doesn't even contain all the information needed to infer the roles I use, I will have to manage roles plus per-technology per-user granted access exceptions within my system. The current plan is to use Active Directory for authentication. This will result in a multi-hour registration process for our business partners where the database owner has to manually create logins in our Active Directory and send them credentials. If I manage the logins in my own system, we could improve the usability a lot, for example by letting people have an active (but unprivileged) account as soon as they register. It seems to me that, after I am having a users table in the DB anyway (and managing ugly details like storing historical user IDs so that recycled user IDs within the Active Directory don't unexpectedly get rights to view someone's technologies), the additional complexity from implementing authentication functionality will be minimal. Therefore, I am starting to lean towards doing my own user login management and forgetting the AD altogether. On the other hand, I see some reasons to stay with Active Directory. First, the conventional wisdom I have heard from experienced programmers is to not do your own user management if you can avoid it. Second, we have code I can reuse for connection to the active directory, while I would have to code the authentication if done in-system (and my boss has clearly stated that getting the project delivered on time has much higher priority than delivering a system with high usability). Third, I am not a very experienced developer (this is my first lead position) and have never done user management before, so I am afraid that I am overlooking some important reasons to use the AD, or that I am underestimating the amount of work left to do my own authentication. I would like to know if there are more reasons to go with the AD authentication mechanism. Specifically, if I want to do my own authentication, what would I have to implement besides a secure connection for the login screen (which I would need anyway even if I am only transporting the pw to the AD), lookup of a password hash and a mechanism for password recovery (which will probably include manual identity verification, so no need for complex mTAN-like solutions)? And, if you have experience with such security-critical systems, which one would you use and why?
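    (For a sense of scale on the "lookup of a password hash" item: the hashing itself is small if you lean on the framework. A minimal sketch using .NET's built-in Rfc2898DeriveBytes (PBKDF2); the salt size and iteration count are illustrative placeholders, not recommendations, and this says nothing about the surrounding account-management work.)

        using System;
        using System.Security.Cryptography;

        // A minimal sketch of the password-storage piece you would own if you skip AD:
        // salted PBKDF2 via Rfc2898DeriveBytes, storing salt + hash per user.
        static class PasswordHasher
        {
            public static void CreateHash(string password, out byte[] salt, out byte[] hash)
            {
                // 16-byte salt, 10000 iterations: placeholder parameters for the sketch.
                var kdf = new Rfc2898DeriveBytes(password, 16, 10000);
                salt = kdf.Salt;
                hash = kdf.GetBytes(32);
            }

            public static bool Verify(string password, byte[] salt, byte[] hash)
            {
                var kdf = new Rfc2898DeriveBytes(password, salt, 10000);
                byte[] candidate = kdf.GetBytes(32);

                // Compare without early exit, kept simple for the sketch.
                if (candidate.Length != hash.Length) return false;
                int diff = 0;
                for (int i = 0; i < hash.Length; i++) diff |= candidate[i] ^ hash[i];
                return diff == 0;
            }
        }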

    Read the article

  • Blackberry stopwatch implementation

    - by Michaela
    I'm trying to write a blackberry app that is basically a stopwatch, and displays lap times. First, I'm not sure I'm implementing the stopwatch functionality in the most optimal way. I have a LabelField (_myLabel) that displays the 'clock' - starting at 00:00. Then you hit the start button and every second the _myLabel field gets updated with how many seconds have passed since the last update (it should only ever increment by 1, but sometimes there is a delay and it will skip a number). I just can't think of a different way to do it - and I am new to GUI development and threads so I guess that's why.
    EDIT: Here is what calls the stopwatch:
        _timer = new Timer();
        _timer.schedule(new MyTimerTask(), 250, 250);
    And here is the TimerTask:
        class MyTimerTask extends TimerTask {
            long currentTime;
            long startTime = System.currentTimeMillis();

            public void run() {
                synchronized (Application.getEventLock()) {
                    currentTime = System.currentTimeMillis();
                    long diff = currentTime - startTime;
                    long min = diff / 60000;
                    long sec = (diff % 60000) / 1000;
                    String minStr = new Long(min).toString();
                    String secStr = new Long(sec).toString();
                    if (min < 10) minStr = "0" + minStr;
                    if (sec < 10) secStr = "0" + secStr;
                    _myLabel.setText(minStr + ":" + secStr);
                    timerDisplay.deleteAll();
                    timerDisplay.add(_timerLabel);
                }
            }
        }
    Anyway, when you stop the stopwatch it updates a historical table of lap time data. When this list gets long, the timer starts to degrade. If you try to scroll, then it gets really bad. Is there a better way to implement my stopwatch?

    Read the article

  • .NET projects build automation with NAnt/MSBuild + SVN

    - by petr k.
    Hi everyone, for quite a while now, I've been trying to figure out how to set up an automated build process at our shop. I've read many posts and guides on this matter and none of them really fits my specific needs. My SVN repository is laid out as follows:
        \projects
            \projectA (a product)
                \tags
                    \1.0.0.1
                    \1.0.0.2
                    ...
                \trunk
                    \src
                        \proj1 (a VS C# project)
                        \proj2
                    \documentation
    Then I have a network share, with a folder for each project (product), which in turn contains the binaries, written documentation and the generated API documentation (via NDoc - each project may have an .ndoc file in the repository) for every historical version (from the tags SVN folder) and for the latest version as well (from the trunk). Basically, what I want to do in a scheduled batch build are these steps:
    1. Examine the project's SVN folder and identify tags not present in the network share.
    2. For each of these tags: check out the tag folder, build (with Release config), copy the resulting binaries to the network share, search for .ndoc files, generate CHM files via NDoc, and copy the resulting CHM files to the network share.
    3. Do the same as in 2., but for the HEAD revision of trunk.
    Now, the trouble is, I have no idea where to start. I do not keep .sln files in the repository, but I am able to replace these with MSBuild files which in turn build the C# projects belonging to the specific product. I guess the most troubling part is the examination of the repository for tags which have not been processed yet - i.e. searching the tags and comparing them to a project's directory structure on the network share. I have no idea how to do that in any of the build tools (NAnt, MSBuild). Could you please provide me with some pointers on how to approach this task as a whole and in detail as well? I do not care if I use NAnt, MSBuild, or both. I am aware that this might be rather complex, but every idea and NAnt/MSBuild snippet will be a great help. Thanks in advance.
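    (For the "which tags are new" part, one pragmatic option is to not solve it inside NAnt/MSBuild at all: shell out to the svn command-line client from a small helper (or from an exec task) and diff the tag list against the share. A rough C# sketch - the repository URL and share path are invented, and error handling is omitted.)

        using System;
        using System.Diagnostics;
        using System.IO;
        using System.Linq;

        class MissingTags
        {
            static void Main()
            {
                // Hypothetical locations -- adjust to your repository and share layout.
                const string tagsUrl = "http://svnserver/svn/projects/projectA/tags/";
                const string shareDir = @"\\fileserver\builds\projectA";

                // Ask Subversion for the tag folder names ("svn list" prints one entry per line).
                var psi = new ProcessStartInfo("svn", "list " + tagsUrl)
                {
                    RedirectStandardOutput = true,
                    UseShellExecute = false
                };
                string[] tags;
                using (var svn = Process.Start(psi))
                {
                    tags = svn.StandardOutput.ReadToEnd()
                              .Split(new[] { '\r', '\n' }, StringSplitOptions.RemoveEmptyEntries)
                              .Select(t => t.TrimEnd('/'))
                              .ToArray();
                    svn.WaitForExit();
                }

                // Tags with no matching folder on the share still need to be built.
                var published = Directory.GetDirectories(shareDir)
                                         .Select(Path.GetFileName);
                foreach (string tag in tags.Except(published, StringComparer.OrdinalIgnoreCase))
                    Console.WriteLine("Needs build: " + tag);
            }
        }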

    Read the article

  • How can I convert this PHP script to Ruby? (build tree from tabbed string)

    - by Jon Sunrays
    I found this script below online, and I'm wondering how I can do the same thing with a Ruby on Rails setup. So, first off, I ran this command:
        rails g model Node node_id:integer title:string
    Given this set up, how can I make a tree from a tabbed string like the following?
        <?php
        // Make sure to have "Academia" be root node with nodeID of 1
        $data = "
        Social sciences
            Anthropology
                Biological anthropology
                    Forensic anthropology
                    Gene-culture coevolution
                    Human behavioral ecology
                    Human evolution
                    Medical anthropology
                    Paleoanthropology
                    Population genetics
                    Primatology
                Anthropological linguistics
                    Synchronic linguistics (or Descriptive linguistics)
                    Diachronic linguistics (or Historical linguistics)
                    Ethnolinguistics
                    Sociolinguistics
                Cultural anthropology
                    Anthropology of religion
                    Economic anthropology
                    Ethnography
                    Ethnohistory
                    Ethnology
                    Ethnomusicology
                    Folklore
                    Mythology
                    Political anthropology
                    Psychological anthropology
                Archaeology
        ...(goes on for a long time)
        ";

        //echo "Checkpoint 2\n";
        $lines = preg_split("/\n/", $data);
        $parentids = array(0 => null);

        $db = new PDO("host", 'username', 'pass');
        $sql = 'INSERT INTO `TreeNode` SET ParentID = ?, Title = ?';
        $stmt = $db->prepare($sql);

        foreach ($lines as $line) {
            if (!preg_match('/^([\s]*)(.*)$/', $line, $m)) {
                continue;
            }
            $spaces = strlen($m[1]);
            //$level = intval($spaces / 4); //assumes four spaces per indent
            $level = strlen($m[1]); // if data is tab indented
            $title = $m[2];
            $parentid = ($level > 0 ? $parentids[$level - 1] : 1); //All "roots" are children of "Academia" which has an ID of "1";
            $rv = $stmt->execute(array($parentid, $title));
            $parentids[$level] = $db->lastInsertId();
            echo "inserted $parentid - " . $parentid . " title: " . $title . "\n";
        }
        ?>

    Read the article

  • Relational vs. Dimensional Databases, what's the difference?

    - by grautur
    I'm trying to learn about OLAP and data warehousing, and I'm confused about the difference between relational and dimensional modeling. Is dimensional modeling basically relational modeling, but allowing for redundant/un-normalized data?
    For example, let's say I have historical sales data on (product, city, # sales). I understand that the following would be a relational point-of-view:
        Product | City          | # Sales
        Apples  | San Francisco | 400
        Apples  | Boston        | 700
        Apples  | Seattle       | 600
        Oranges | San Francisco | 550
        Oranges | Boston        | 500
        Oranges | Seattle       | 600
    While the following is a more dimensional point-of-view:
        Product | San Francisco | Boston | Seattle
        Apples  | 400           | 700    | 600
        Oranges | 550           | 500    | 600
    But it seems like both points of view would nonetheless be implemented in an identical star schema:
        Fact table: Product ID, Region ID, # Sales
        Product dimension: Product ID, Product Name
        City dimension: City ID, City Name
    And it's not until you start adding some additional details to each dimension that the differences start popping up. For instance, if you wanted to track regions as well, a relational database would tend to have a separate region table, in order to keep everything normalized:
        City dimension: City ID, City Name, Region ID
        Region dimension: Region ID, Region Name, Region Manager, # Regional Stores
    While a dimensional database would allow for denormalization to keep the region data inside the city dimension, in order to make it easier to slice the data:
        City dimension: City ID, City Name, Region Name, Region Manager, # Regional Stores
    Is this correct?

    Read the article

  • SSIS Lookup with Lookup Component Vs Script Component.

    - by Nev_Rahd
    Hello, I need to load dimensions from EDW tables (which maintain historical records) and are of type key-value-parameter. My scenario is fine if I've got records in the EDW as below:
        Key1 | Key2 | Code | Value | EffectiveDate       | EndDate    | CurrentFlag
        100  | 555  | 01   | AAA   | 2010-01-01 11.00.00 | 9999-12-31 | Y
        100  | 555  | 02   | BBB   | 2010-01-01 11.00.00 | 9999-12-31 | Y
    This needs to be loaded into the DM by pivoting it, as the key1 and key2 combination makes the natural key for the DM:
        SK | NK      | 01  | 02  | EffectiveDate       | EndDate    | CurrentFlag
        1  | 100-555 | AAA | BBB | 2010-01-01 11.00.00 | 9999-12-31 | Y
    My SSIS package does all this pivoting well: looking up the incoming NK in the DIM, if it's new it will insert; otherwise, with a further lookup on effective date, it determines whether the incoming record for the same natural key has any change in an attribute. If so, it updates the current record by setting its end date and inserts the new one with the new attribute value, pulling the most recent record's values for the other attributes.
    My problem is: if the same natural key comes twice with the same attribute in a single extract, my first lookup, which is on the natural key, will let both records pass and try to insert, where it fails. If I take distinct records on the NK, the second one is not picked up and I need to run the package again.
    So my question is: how can I configure the lookup, or what alternative approach would let me handle this scenario when the same NK comes twice in a single extract? I would like to insert the first record if it does not exist in the Dim table, and then update the second one with the changes with reference to the one inserted above. Not sure this makes sense - it's hard to explain. I will attach the screenshot once I'm back at my work desk (on Monday). Thanks

    Read the article

  • How to get rid of void-pointers.

    - by Patrick
    I inherited a big application that was originally written in C (but in the meantime a lot of C++ was also added to it). Because of historical reasons, the application contains a lot of void pointers. Before you start to choke, let me explain why this was done.
    The application contains many different data structures, but they are stored in 'generic' containers. Nowadays I would use templated STL containers for it, or I would give all data structures a common base class, so that the container can store pointers to the base class, but in the [good?] old C days, the only solution was to cast the struct pointer to a void pointer. Additionally, there is a lot of code that works on these void pointers, and uses very strange C constructions to emulate polymorphism in C.
    I am now reworking the application, and trying to get rid of the void pointers. Adding a common base class to all the data structures isn't that hard (a few days of work), but the problem is that the code is full of constructions like shown below.
    This is an example of how data is stored:
        void storeData (int datatype, void *data); // function prototype
        ...
        Customer *myCustomer = ...;
        storeData (TYPE_CUSTOMER, myCustomer);
    This is an example of how data is fetched again:
        Customer *myCustomer = (Customer *) fetchData (int datatype, char *key);
    I actually want to replace all the void pointers with some smart pointer (reference-counted), but I can't find a trick to automate (or at least help me with) getting rid of all the casts to and from void pointers. Any tips on how to find, replace, or interact in any possible way with these conversions?

    Read the article

  • Avoiding GC thrashing with WSE 3.0 MTOM service

    - by Leon Breedt
    For historical reasons, I have some WSE 3.0 web services that I cannot upgrade to WCF on the server side yet (it is also a substantial amount of work to do so). These web services are being used for file transfers from client to server, using MTOM encoding. This can also not be changed in the short term, for reasons of compatibility. Secondly, they are being called from both Java and .NET, and therefore need to be cross-platform, hence MTOM. How it works is that an "upload" WebMethod is called by the client, sending up a chunk of data at a time, since files being transferred could potentially be gigabytes in size. However, due to not being able to control parts of the stack before the WebMethod is invoked, I cannot control the memory usage patterns of the web service. The problem I am running into is for file sizes from 50MB or so onwards, performance is absolutely killed because of GC, since it appears that WSE 3.0 buffers each chunk received from the client in a new byte[] array, and by the time we've done 50MB we're spending 20-30% of time doing GC. I've played with various chunk sizes, from 16k to 2MB, with no real great difference in results. Smaller chunks are killed by the latency involved with round-tripping, and larger chunks just postpone the slowdown until GC kicks in. Any bright ideas on cutting down on the garbage created by WSE? Can I plug into the pipeline somehow and jury-rig something that has access to the client's request stream and streams it to the WebMethod? I'm aware that it is possible to "stream" responses to the client using WSE (albeit very ugly), but this problem is with requests from the client.

    Read the article

  • git doesn't show where code was removed.

    - by Andrew Myers
    So I was tasked with replacing some dummy code that our project requires for historical compatibility reasons but has mysteriously dropped out sometime since the last release. Since disappearing code makes me nervous about what else might have gone missing but un-noticed, I've been digging through the logs trying to find in what commit this handful of lines was removed.
    I've tried a number of things including "git log -S'add-visit-resource-pcf'", git blame, and even git bisect with a script that simply checks for the existence of the line, but have been unable to pinpoint exactly where these lines were removed. I find this very perplexing, particularly since the last log entry (obtained by the above command) before my re-introduction of this code was someone else adding the code as well.
        commit 0b0556fa87ff80d0ffcc2b451cca1581289bbc3c
        Author: Andrew
        Date:   Thu May 13 10:55:32 2010 -0400

            Re-introduced add-visit-resource-pcf, see PR-65034.

        diff --git a/spike/hst/scheduler/defpackage.lisp b/spike/hst/scheduler/defpackage.lisp
        index f8e692d..a6f8d38 100644
        --- a/spike/hst/scheduler/defpackage.lisp
        +++ b/spike/hst/scheduler/defpackage.lisp
        @@ -115,6 +115,7 @@
             #:add-to-current-resource-pcf
             #:add-user-package-nickname
             #:add-value-criteria
        +    #:add-visit-resource-pcf
             #:add-window-to-gs-params
             #:adjust-derived-resources
             #:adjust-links-candidate-criteria-types

        commit 9fb10e25572c537076284a248be1fbf757c1a6e1
        Author: Bob
        Date:   Sun Jan 17 18:35:16 2010 -0500

            update-defpackage for Spike 33.1 Delivery

        diff --git a/spike/hst/scheduler/defpackage.lisp b/spike/hst/scheduler/defpackage.lisp
        index 983666d..47f1a9a 100644
        --- a/spike/hst/scheduler/defpackage.lisp
        +++ b/spike/hst/scheduler/defpackage.lisp
        @@ -118,6 +118,7 @@
             #:add-user-package-nickname
             #:add-value-criteria
             #:add-vars-from-proposal
        +    #:add-visit-resource-pcf
             #:add-window-to-gs-params
             #:adjust-derived-resources
             #:adjust-links-candidate-criteria-types
    This is for one of our package definition files, but the relevant source file reflects something similar. Does anyone know what could be going on here and how I could find the information I want? It's not really that important, but this kind of thing makes me a bit nervous.

    Read the article

  • Adaptive user interface/environment algorithm

    - by WowtaH
    Hi all, I'm working on an information system (in C#) that (while my users use it) gathers statistical data on which pieces of information (tables & records) each user requests the most, and which parts of the interface he/she uses most. I'm using this statistical data to make the application adapt to the user's needs, both in the way the interface presents itself (e.g. tab/pane ordering) and in the way frequently viewed information is used, for example to rank higher in search results/suggestion lists. What I'm looking for is an algorithm/formula to determine the current 'hotness'/relevance of these objects for a specific user. A simple hit counter for each object won't be sufficient, because the user might view some information quite frequently for a period of time and then move on to the next, making the old information less relevant. So I think my algorithm also needs some sort of sliding/historical principle to account for the changing popularity of the objects in the application over time. So, the question is: does anybody have some sort of algorithm that accounts for that 'popularity over time'? Preferably with some explanation of the parameters :) Thanks! PS I've looked at other posts like http://stackoverflow.com/questions/32397/popularity-algorithm but I couldn't quite port it to my specific case. Any help is appreciated.
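    One common way to get that sliding, time-aware behaviour is an exponentially decaying score: every hit adds a fixed weight, and the accumulated weight halves after each half-life interval, so objects that were popular long ago gradually fade behind recently used ones. Below is a minimal C# sketch under those assumptions; the class name, the weekly half-life and the +1.0 hit weight are illustrative parameters to tune, not part of the asker's system.

        using System;
        using System.Collections.Generic;

        // Minimal sketch of a time-decayed "hotness" score. RecordHit is called
        // whenever the user touches an object; GetScore is used for ranking.
        public class HotnessTracker<TKey>
        {
            private class Entry { public double Score; public DateTime LastUpdate; }

            private static readonly TimeSpan HalfLife = TimeSpan.FromDays(7);
            private readonly Dictionary<TKey, Entry> entries = new Dictionary<TKey, Entry>();

            public void RecordHit(TKey key, DateTime now)
            {
                Entry e;
                if (!entries.TryGetValue(key, out e))
                {
                    e = new Entry { Score = 0.0, LastUpdate = now };
                    entries[key] = e;
                }
                e.Score = Decay(e, now) + 1.0;   // decay the old weight, then add this hit
                e.LastUpdate = now;
            }

            public double GetScore(TKey key, DateTime now)
            {
                Entry e;
                return entries.TryGetValue(key, out e) ? Decay(e, now) : 0.0;
            }

            private static double Decay(Entry e, DateTime now)
            {
                double halfLives = (now - e.LastUpdate).TotalSeconds / HalfLife.TotalSeconds;
                return e.Score * Math.Pow(0.5, halfLives);
            }
        }

    A shorter half-life makes the ordering react faster to changing habits; a longer one keeps the interface more stable.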

    Read the article

  • jQuery tooltip: Trouble with remove()

    - by Rosarch
    I'm using a jQuery tooltip plugin. I have HTML like this:

        <li class="term ui-droppable">
            <strong>Fall 2011</strong>
            <li class="course ui-draggable">Biological Statistics I<a class="remove-course-button" href="">[X]</a></li>
            <div class="term-meta-data">
                <p class="total-credits too-few-credits">Total credits: 3</p>
                <p class="median-GPA low-GPA">Median Historical GPA: 2.00</p>
            </div>
        </li>

    I want to remove the .course element. So, I attach a click handler to the <a>:

        function _addDeleteButton(course, term) {
            var delete_button = $('<a href="" class="remove-course-button" title="Remove this course">[X]</a>');
            course.append(delete_button);
            $(delete_button).click(function() {
                course.remove();
                return false;
            }).tooltip();
        }

    This all works fine in terms of attaching the click handler. However, when course.remove() is called, Firebug reports an error in tooltip.js at line 282 ("tsettings is null"):

        if ((!IE || !$.fn.bgiframe) && tsettings.fade) {

    What am I doing wrong? If the link has a tooltip attached, do I need to remove it specially? UPDATE: Removing .tooltip() solves the problem. I'd like to keep it in, but that makes me suspect that my use of .tooltip() is incorrect here.

    Read the article

  • Good PHP / MYSQL hashing solution for large number of text values

    - by Dave
    Short description: I need a hashing algorithm solution in PHP for a large number of text values. Long description:

        PRODUCT_OWNER_TABLE: serial_number (auto_inc), product_name, owner_id
        OWNER_TABLE: owner_id (auto_inc), owner_name

    I need to maintain a database of 200000 unique products and their owners (AND all subsequent changes to ownership). Each product has one owner, but an owner may have MANY different products. Owner names are "Adam Smith", "John Reeves", etc, just text values (quite likely to be unicode as well). I want to optimize the database design, so what I was thinking was: every week when I run this script, it fetches the owner of a product, then checks against a table I suppose similar to PRODUCT_OWNER_TABLE, fetching the owner_id. It then looks up owner_id in OWNER_TABLE. If it matches, then it's the same, so it moves on. The problem is when it's different... To optimize the database, I think I should be checking against the other "owner_name" entries in OWNER_TABLE to see if that value exists there. If it does, then I should use that owner_id. If it doesn't, then I should add another entry. Note that there is nothing special about the "name"; as long as I maintain the correct linkages AND make the OWNER_TABLE a "read-only, append-new" type table, I should be able to create a historical archive of ownership. I need to do this check for 200000 entries, with I don't know how many unique owner names (~50000?). I think I need a hashing solution - the OWNER_TABLE won't be sorted, so search algorithms won't be optimal. The programming language is PHP. The database is MySQL.

    Read the article

  • ProtoInclude for fields?

    - by Big
    I have a simple object:

        [ProtoContract]
        public class DataChangedEventArgs<T> : EventArgs
        {
            private readonly object key;
            private readonly T data;
            private readonly DataChangeType changeType;

            /// <summary>
            /// Key to identify the data item
            /// </summary>
            public object Key
            {
                get { return key; }
            }

            [ProtoMember(2, IsRequired = true)]
            public T Data
            {
                get { return data; }
            }

            [ProtoMember(3, IsRequired = true)]
            public DataChangeType ChangeType
            {
                get { return changeType; }
            }
        }

    and I have a problem with the key. Its type is object, but it can be either int, long or string. I would intuitively use a ProtoInclude attribute to say "expect these types", but unfortunately it is a class-only attribute. Does anybody have any idea how I could work around this? For background, the public object Key is here for historical reasons (and used all over the place), so I would very much like to avoid the mother of all refactorings ;-) Any chance I could get this to serialize, even force it to serialize as a string?
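    One possible workaround, sketched below, is to leave the public object Key alone and serialize it through a private string shim property, encoding the runtime type in a short prefix. protobuf-net can generally pick up non-public members marked with [ProtoMember] on the full framework, but the property name, the "i:"/"l:"/"s:" prefixes and the omission of the other members are assumptions for illustration, not the library's prescribed approach.

        using System;
        using ProtoBuf;

        [ProtoContract]
        public class DataChangedEventArgs<T> : EventArgs
        {
            private object key;   // readonly dropped so the shim setter can restore it

            public object Key
            {
                get { return key; }
            }

            // Shim used only for serialization: the key round-trips as a tagged string.
            [ProtoMember(1)]
            private string KeyAsString
            {
                get
                {
                    if (key is int)    return "i:" + key;
                    if (key is long)   return "l:" + key;
                    if (key is string) return "s:" + key;
                    return null;
                }
                set
                {
                    if (string.IsNullOrEmpty(value)) { key = null; return; }
                    string payload = value.Substring(2);
                    switch (value[0])
                    {
                        case 'i': key = int.Parse(payload); break;
                        case 'l': key = long.Parse(payload); break;
                        default:  key = payload; break;
                    }
                }
            }

            // Data and ChangeType members omitted here for brevity.
        }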

    Read the article

  • MySql: Query multiple identical dynamic tables.

    - by JYelton
    I have a database with 500+ tables, each with identical structure, that contain historical data from sensors. I am trying to come up with a query that will locate, for example, all instances where sensor n exceeds x. The problem is that the tables are dynamic, so the query must be able to dynamically obtain the list of tables. I can query information_schema.tables to get a list of the tables, like so:

        SELECT table_name
        FROM information_schema.tables
        WHERE table_schema = 'database_name';

    I can use this to create a loop in the program and then query the database repeatedly; however, it seems like there should be a way to have MySQL do the multiple-table search itself. I have not been able to make a stored procedure that works, and the examples I can find are generally for searching for a string in any column. I want to specifically find data in a specific column that exists in all tables. I admit I do not understand how to properly use stored procedures, nor whether they are the appropriate solution to this problem. An example query inside the loop would be:

        SELECT device_name, sensor_value
        FROM device_table
        WHERE sensor_value > 10;

    Trying the following does not work:

        SELECT device_name, sensor_value
        FROM (
            SELECT table_name
            FROM information_schema.tables
            WHERE table_schema = 'database_name'
        )
        WHERE sensor_value > 10;

    It results in the error "Every derived table must have its own alias." The goal is to have a list of all devices that have had a given sensor value occur anywhere in their log (table). Ultimately, should I just loop in my program once I've obtained a list of tables, or is there a query structure that would be more efficient?
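    For comparison, here is a rough C# sketch of the looping approach collapsed into a single round trip: fetch the table list from information_schema, then build one UNION ALL statement across all of them. The use of MySql.Data, the connection-string handling and the method names are assumptions; the column names come from the question. Since the table names originate from information_schema rather than user input, concatenating them into the SQL text is tolerable here, though a very large UNION ALL may still hit statement-size or performance limits.

        using System.Collections.Generic;
        using System.Text;
        using MySql.Data.MySqlClient;

        // Hypothetical sketch: search every table in the schema for rows where
        // sensor_value exceeds a threshold, using one dynamically built UNION ALL.
        public static class SensorSearch
        {
            public static List<string> DevicesExceeding(string connStr, string schema, double threshold)
            {
                var tables = new List<string>();
                var devices = new List<string>();

                using (var conn = new MySqlConnection(connStr))
                {
                    conn.Open();

                    var listCmd = new MySqlCommand(
                        "SELECT table_name FROM information_schema.tables WHERE table_schema = @schema",
                        conn);
                    listCmd.Parameters.AddWithValue("@schema", schema);
                    using (var reader = listCmd.ExecuteReader())
                    {
                        while (reader.Read())
                            tables.Add(reader.GetString(0));
                    }

                    if (tables.Count == 0)
                        return devices;

                    // SELECT ... FROM `t1` WHERE ... UNION ALL SELECT ... FROM `t2` WHERE ...
                    var sql = new StringBuilder();
                    for (int i = 0; i < tables.Count; i++)
                    {
                        if (i > 0) sql.Append(" UNION ALL ");
                        sql.AppendFormat(
                            "SELECT device_name FROM `{0}`.`{1}` WHERE sensor_value > @threshold",
                            schema, tables[i]);
                    }

                    var searchCmd = new MySqlCommand(sql.ToString(), conn);
                    searchCmd.Parameters.AddWithValue("@threshold", threshold);
                    using (var reader = searchCmd.ExecuteReader())
                    {
                        while (reader.Read())
                            devices.Add(reader.GetString(0));
                    }
                }

                return devices;
            }
        }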

    Read the article
