Search Results

Search found 3392 results on 136 pages for 'average joe'.

Page 83/136

  • What kind of knowledge do you need to invent a new programming language?

    - by systempuntoout
    I just finished to read "Coders at works", a brilliant book by Peter Seibel with 15 interviews to some of the most interesting computer programmers alive today. Well, many of the interviewees have (co)invented\implemented a new programming language. Some examples: Joe Armstrong: Inventor of Erlang L. Peter Deutsch: implementer of Smalltalk-80 Brendan Eich: Inventor of JavaScript Dan Ingalls: Smalltalk implementor and designer Simon Peyton Jones: Coinventor of Haskell Guy Steele: Coinventor of Scheme Is out of any doubt that their minds have something special and unreachable, and i'm not crazy to think i will ever able to create a new language; i'm just interested in this topic. So, imagine a funny\grotesque scenario where your crazy boss one day will come to your desk to say "i want a new programming language with my name on it..take the time you need and do it", which is the right approach to studying this fascinating\intimidating\magic topic? What kind of knowledge do you need to model, design and implement a brand new programming language?

    Read the article

  • Improved Performance on PeopleSoft Combined Benchmark using SPARC T4-4

    - by Brian
    Oracle's SPARC T4-4 server running Oracle's PeopleSoft HCM 9.1 combined online and batch benchmark achieved a world record: 18,000 concurrent users experiencing sub-second response time while executing a PeopleSoft Payroll batch job of 500,000 employees in 32.4 minutes. This result was obtained with a SPARC T4-4 server running Oracle Database 11g Release 2, a SPARC T4-4 server running the PeopleSoft HCM 9.1 application server, and a SPARC T4-2 server running Oracle WebLogic Server in the web tier. The SPARC T4-4 server running the application tier used Oracle Solaris Zones, which provide a flexible, scalable and manageable virtualization environment. The average CPU utilization was 17% on the SPARC T4-2 server in the web tier, 59% on the SPARC T4-4 server in the application tier, and 47% on the SPARC T4-4 server in the database tier (online and batch), leaving significant headroom for additional processing across the three tiers. The SPARC T4-4 server used for the database tier hosted Oracle Database 11g Release 2 using Oracle Automatic Storage Management (ASM) for database file management, with I/O performance equivalent to raw devices.

    Performance Landscape
    Results are presented for the PeopleSoft HRMS Self-Service and Payroll combined benchmark. The new result with 128 streams shows significant improvement in the payroll batch processing time with little impact on the self-service component response time.

    PeopleSoft HRMS Self-Service and Payroll Benchmark
    Systems | Users | Ave Response Search (sec) | Ave Response Save (sec) | Batch Time (min) | Streams
    SPARC T4-2 (web), SPARC T4-4 (app), SPARC T4-4 (db) | 18,000 | 0.988 | 0.539 | 32.4 | 128
    SPARC T4-2 (web), SPARC T4-4 (app), SPARC T4-4 (db) | 18,000 | 0.944 | 0.503 | 43.3 | 64

    The following results are for the PeopleSoft HRMS Self-Service benchmark that was previously run. They are not directly comparable with the combined results because they do not include the payroll component.

    PeopleSoft HRMS Self-Service 9.1 Benchmark
    Systems | Users | Ave Response Search (sec) | Ave Response Save (sec) | Batch Time (min) | Streams
    SPARC T4-2 (web), SPARC T4-4 (app), 2x SPARC T4-2 (db) | 18,000 | 1.048 | 0.742 | N/A | N/A

    The following results are for the PeopleSoft Payroll benchmark that was previously run. They are not directly comparable with the combined results because they do not include the self-service component.

    PeopleSoft Payroll (N.A.) 9.1 - 500K Employees (7 Million SQL PayCalc, Unicode)
    Systems | Users | Ave Response Search (sec) | Ave Response Save (sec) | Batch Time (min) | Streams
    SPARC T4-4 (db) | N/A | N/A | N/A | 30.84 | 96

    Configuration Summary
    Application Configuration: 1 x SPARC T4-4 server with 4 x SPARC T4 processors, 3.0 GHz, 512 GB memory; Oracle Solaris 11 11/11; PeopleTools 8.52; PeopleSoft HCM 9.1; Oracle Tuxedo, Version 10.3.0.0, 64-bit, Patch Level 031; Java Platform, Standard Edition Development Kit 6 Update 32.
    Database Configuration: 1 x SPARC T4-4 server with 4 x SPARC T4 processors, 3.0 GHz, 256 GB memory; Oracle Solaris 11 11/11; Oracle Database 11g Release 2; PeopleTools 8.52; Oracle Tuxedo, Version 10.3.0.0, 64-bit, Patch Level 031; Micro Focus Server Express (COBOL v 5.1.00).
    Web Tier Configuration: 1 x SPARC T4-2 server with 2 x SPARC T4 processors, 2.85 GHz, 256 GB memory; Oracle Solaris 11 11/11; PeopleTools 8.52; Oracle WebLogic Server 10.3.4; Java Platform, Standard Edition Development Kit 6 Update 32.
    Storage Configuration: 1 x Sun Server X2-4 as a COMSTAR head for data (4 x Intel Xeon X7550, 2.0 GHz, 128 GB memory); 1 x Sun Storage F5100 Flash Array (80 flash modules); 1 x Sun Storage F5100 Flash Array (40 flash modules); 1 x Sun Fire X4275 as a COMSTAR head for redo logs (12 x 2 TB SAS disks with Niwot RAID controller).

    Benchmark Description
    This benchmark combines the PeopleSoft HCM 9.1 HR Self-Service online workload and the PeopleSoft Payroll batch workload, run against a unified database deployed on Oracle Database 11g Release 2. The PeopleSoft HR Self-Service benchmark kit is an Oracle standard benchmark kit run by all platform vendors to measure performance. It is an OLTP benchmark whose database SQL is moderately complex. The results are certified by Oracle and a white paper is published. PeopleSoft HR Self-Service defines a business transaction as a series of HTML pages that guide a user through a particular scenario. Users are defined as corporate employees, managers and HR administrators. The benchmark consists of 14 scenarios which emulate users performing typical HCM transactions such as viewing a paycheck, promoting and hiring employees, updating an employee profile and other typical HCM application transactions. All of these transactions are well defined in the PeopleSoft HR Self-Service 9.1 benchmark kit. The benchmark metric is the weighted average response search/save time across all transactions. The PeopleSoft 9.1 Payroll (North America) benchmark demonstrates system performance for a range of processing volumes in a specific configuration. This workload represents the large batch runs typical of an ERP environment during a mass update. The benchmark measures five application business process run times for a database representing a large organization: Paysheet Creation, Payroll Calculation, Payroll Confirmation, Print Advice Forms, and Create Direct Deposit File. The benchmark metric is the cumulative elapsed time taken to complete the Paysheet Creation, Payroll Calculation and Payroll Confirmation business application processes. The benchmark metrics are taken for each respective benchmark while running simultaneously on the same database back end. Specifically, the payroll batch processes are started when the online workload reaches steady state (the maximum number of online users) and overlap with online transactions for the duration of the steady state.

    Key Points and Best Practices
    Two PeopleSoft domain sets with 200 application servers each were hosted on a SPARC T4-4 server in 2 separate Oracle Solaris Zones to demonstrate consolidation of multiple application servers, ease of administration and performance tuning. Each Oracle Solaris Zone was bound to a separate processor set, each containing 15 cores (120 threads in total). The default set (1 core from the first and third processor sockets, 16 threads in total) was used for network and disk interrupt handling. This was done to improve performance by reducing memory access latency (using the physical memory closest to the processors) and by offloading I/O interrupt handling to the default-set threads, freeing up CPU resources for application server threads and balancing the application workload across 240 threads. A total of 128 PeopleSoft streams server processes were used on the database node to complete the payroll batch job of 500,000 employees in 32.4 minutes.

    See Also
    Oracle PeopleSoft Benchmark White Papers (oracle.com); SPARC T4-2 Server (oracle.com, OTN); SPARC T4-4 Server (oracle.com, OTN); PeopleSoft Enterprise Human Capital Management (oracle.com, OTN); PeopleSoft Enterprise Human Capital Management (Payroll) (oracle.com, OTN); Oracle Solaris (oracle.com, OTN); Oracle Database 11g Release 2 (oracle.com, OTN).

    Disclosure Statement
    Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 8 November 2012.

    Read the article

  • What kind of knowledge you need to invent a new programming language?

    - by systempuntoout
    I just finished to read "coders at works", a brilliant book by Peter Seibel with 15 interviews to some of the most interesting computer programmers alive today. Well, many of the interviewees have (co)invented\implemented a new programming language. For example: * Joe Armstrong: Inventor of Erlang * L. Peter Deutsch: implementer of Smalltalk-80 * Brendan Eich: Inventor of JavaScript * Dan Ingalls: Smalltalk implementor and designer * Simon Peyton Jones: Coinventor of Haskell * Guy Steele: Coinventor of Scheme Is out of any doubt that their minds have something special and unreachable, and i'm not crazy to think i will ever able to create a new language; i'm just interested in this topic. So, imagine a funny\grotesque scenario where your crazy boss one day will come to your desk to say "i want a new programming language with my name on it..take the time you need and do it", what will you start to study? What kind of knowledge do you need to model, design and implement a brand new programming language?

    Read the article

  • php regex expression to get title

    - by 55skidoo
    I'm trying to strip content titles out of the middle of text strings. Could I use regex to strip everything out of these strings except for the title (shown in italics in the original post)? Or is there a better way? Joe User wrote a blog post called The 10 Best Regex Expressions in the category Regex. Jane User wrote a blog post called Regex is Hard! in the category TechProblems. I've tried to come up with a regex expression to cover this, but I think it might need two. The trick is that the text in bold in the original post ("wrote a blog post called", "in the category") is always the same, so you could search for it, like this: regex 1: delete everything before and including "wrote a blog post called"; regex 2: delete "in the category" and everything after it.
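
    A minimal sketch of that two-anchor idea, assuming the fixed phrases "wrote a blog post called" and "in the category" really are constant; it is shown in Python purely to illustrate the pattern, and the same expression can be dropped into PHP's preg_match:

        import re

        # Sample strings from the question. The fixed phrases act as anchors and
        # a lazy group captures whatever sits between them as the title.
        samples = [
            "Joe User wrote a blog post called The 10 Best Regex Expressions in the category Regex.",
            "Jane User wrote a blog post called Regex is Hard! in the category TechProblems.",
        ]

        pattern = re.compile(r"wrote a blog post called\s+(.*?)\s+in the category\b")

        for s in samples:
            m = pattern.search(s)
            if m:
                print(m.group(1))   # "The 10 Best Regex Expressions", "Regex is Hard!"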

    Read the article

  • SQL University: What and why of database testing

    - by Mladen Prajdic
    This is a post for a great idea called SQL University, started by Jorge Segarra, also famously known as SqlChicken on Twitter. It's a collection of blog posts on different database-related topics contributed by several smart people all over the world. So this week is mine, and we'll be talking about database testing and refactoring. In 3 posts we'll cover: SQLU part 1 - What and why of database testing; SQLU part 2 - What and why of database refactoring; SQLU part 3 - Tools of the trade. With that out of the way, let us sharpen our pencils and get going.

    Why test a database
    The sad state of the industry today is that there is very little emphasis on testing in general. Test-driven development is still a small niche of the programming world, while refactoring is even smaller. The cause of this is the inability of developers to convince themselves and their managers that writing tests is beneficial. At the moment tests are mostly viewed as a waste of time. This is because the average person (let's not fool ourselves, we're all average) is unable to weigh lower future costs against a little more current work. It's orders of magnitude easier to reason about current costs in relation to the current amount of work. That's why programmers convince themselves testing is a waste of time. However, we have to ask ourselves what tests are really about. Maybe finding bugs? No, not really. If we introduce bugs, we're likely to write tests around those bugs too. But yes, we can find some bugs with tests. The main point of tests is to have reproducible repeatability in our systems. By having a code base largely covered by tests, we can know with better certainty what a small code change can break in other parts of the system. By having repeatability we can make code changes with confidence, since we know we'll see what breaks in other tests. And here comes the inability to estimate future costs: by spending just a few more hours writing those tests we'd know instantly what broke where. Imagine we fix a reported bug. We check in the code, deploy it and the users are happy. Until we get a call 2 weeks later about a certain monthly process that has stopped working. What we don't know is that this process was developed by a long-gone coworker and for some reason it relied on that same bug we've happily fixed. There's no way we could've known that. We say OK and go in and fix the monthly process. But what we have no clue about is that there's this ETL job that relied on data from that monthly process. Now that we've fixed the process, it's giving unexpected (yet correct, since we fixed it) data to the ETL job. So we have to fix that too. But there's this part of the app we coded that relies on data from that exact ETL job. And just like that we enter the "Loop of maintenance horror". With the loop eventually comes blame. Here's a nice tip for all developers and DBAs out there: if you make a mistake, man up and admit to it. All of the above is valid for any kind of software development. Keeping this in mind, the database is nothing other than just a part of the application. But a big part! One reason why testing a database is even more important than testing an application is that one database is usually accessed from multiple applications and processes. This makes it the central and vital part of the enterprise software infrastructure. Knowing all this, can we really afford not to have tests?

    What to test in a database
    Now that we've decided we'll dive into this testing thing, we have to ask ourselves what needs to be tested. The short answer is: everything. The long answer is: read on! There are 2 main ways of doing tests: black box and white box testing. Black box testing means we have no idea how the system internals are built and we only have access to its inputs and outputs. With it we test that internal changes to the system haven't caused its input/output behavior to change. The most important things to test here are the edge conditions; that's where most programs break. Having good edge-condition tests, we can be more confident that changes to the system won't break it. White box testing has full knowledge of the system internals. With it we test the internal system changes, different states of the application, etc. White and black box tests should complement each other, as they are very much interconnected.

    Testing database routines includes testing stored procedures, views, user-defined functions and anything else you use to access the data. Database routines are your input/output interface to the database system, so they count as black box testing. We test them for 2 things: data and schema. When testing the schema we only care about the columns and the data types they're returning. After all, the schema is the contract to the outside systems; if it changes we usually have to change the applications accessing it. One helpful T-SQL command when doing schema tests is SET FMTONLY ON (there is a small sketch of such a schema test after this excerpt). It tells SQL Server to return only empty result sets, which speeds up tests because no data is returned to the client. After we've validated the schema we have to test the returned data. There is no other way to do this than to have the expected data known before the test executes and to compare that data to the database routine's output.

    Testing authentication and authorization helps us validate who has access to the SQL Server box (authentication) and who has access to certain database objects (authorization). For desktop applications and Windows authentication this works well. But the biggest problem here are web apps: they usually connect to the database as a single user. Please ensure that that user is not sa or an account with admin privileges. That is just bad.

    Load testing assures us that our database can handle peak loads. One often-overlooked tool for load testing is Microsoft's OSTRESS tool. It's part of the RML utilities (x86, x64) for SQL Server and can help determine whether our database server can handle loads like 100 simultaneous users each doing 10 requests per second. SQL Profiler can also help us here by showing why certain queries are slow and what to do to fix them.

    One particular problem to think about is how to begin testing existing databases. The first thing we have to do is get to know those databases; we can't test something when we don't know how it works. To do this we have to talk to the users of the applications accessing the database, run SQL Profiler to see what queries are being run, use existing documentation to decipher all the object relationships, etc. The way to approach this is to choose one part of the database (say a logical grouping of tables that go together) and filter our traces accordingly. Once we've done that, we move on to the next grouping, and so on until we've covered the whole database. Then we move on to the next database.

    Database testing is a topic we could spend many hours discussing, but let this be a nice intro to the world of database testing. See you in the next post.
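
    To make the schema-as-contract idea concrete, here is a rough sketch of a black-box schema test driven from Python with pyodbc. The connection string, the dbo.GetCustomer procedure and the expected column list are all placeholders, not anything from the post, and the test assumes an ODBC driver for SQL Server is installed:

        import pyodbc

        # Hypothetical connection string and stored procedure -- placeholders only.
        conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
                              "SERVER=localhost;DATABASE=TestDb;Trusted_Connection=yes")
        cur = conn.cursor()

        # SET FMTONLY ON makes SQL Server return only result-set metadata,
        # which is exactly what a schema ("contract") test cares about.
        cur.execute("SET FMTONLY ON")
        cur.execute("EXEC dbo.GetCustomer @CustomerId = 1")
        actual_columns = [col[0] for col in cur.description]   # column names only
        cur.execute("SET FMTONLY OFF")

        expected_columns = ["CustomerId", "FirstName", "LastName", "CreatedOn"]
        assert actual_columns == expected_columns, (
            f"schema changed: expected {expected_columns}, got {actual_columns}")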

    Read the article

  • Using XPath to access comments in a flat hierarchy

    - by Sebastian
    I have a given XML document (the structure cannot be changed) and want to get the comments that are written above certain nodes. The document looks like this: <!--Some comment here--> <attribute name="Title">Book A</attribute> <attribute name="Author"> <value>Joe Doe</value> <value>John Miller</value> </attribute> <!--Some comment here--> <attribute name="Code">1</attribute> So comments are optional, but if there is one, I want to get the comment above each attribute. Using /*/comment()[n] would give me comment n, but for n=2 I would naturally get the comment of the third attribute, so there is no connection between attributes and comments. Any ideas? Thanks
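
    One way to tie each comment to the attribute directly below it is to look at the immediate preceding sibling rather than indexing comments globally. A small sketch with Python's lxml, assuming the fragment is wrapped in a dummy root element so it parses as well-formed XML:

        from lxml import etree

        # The fragment from the question, wrapped in a dummy <root>.
        doc = etree.fromstring("""<root>
          <!--Some comment here-->
          <attribute name="Title">Book A</attribute>
          <attribute name="Author">
            <value>Joe Doe</value>
            <value>John Miller</value>
          </attribute>
          <!--Some comment here-->
          <attribute name="Code">1</attribute>
        </root>""")

        for attr in doc.findall("attribute"):
            prev = attr.getprevious()                 # immediate preceding sibling node
            if prev is not None and prev.tag is etree.Comment:
                print(attr.get("name"), "->", prev.text)
            else:
                print(attr.get("name"), "-> no comment")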

    Read the article

  • Confused by Perl grep function

    - by titaniumdecoy
    I don't understand the last line of this function from Programming Perl 3e. Here's how you might write a function that does a kind of set intersection by returning a list of keys occurring in all the hashes passed to it: @common = inter( \%foo, \%bar, \%joe ); sub inter { my %seen; for my $href (@_) { while (my $k = each %$href) { $seen{$k}++; } } return grep { $seen{$_} == @_ } keys %seen; } I understand that %seen is a hash which maps each key to the number of times it was encountered in any of the hashes provided to the function.
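
    One way to read that grep: in numeric context @_ is the number of hash references passed to the sub, so a key survives the grep only if its count equals the number of hashes, i.e. it appeared in all of them. The same counting idea, sketched in Python for comparison:

        # Count how many of the input dicts contain each key, then keep the keys
        # whose count equals the number of dicts -- the keys present in all of them.
        def inter(*dicts):
            seen = {}
            for d in dicts:
                for k in d:
                    seen[k] = seen.get(k, 0) + 1
            return [k for k, n in seen.items() if n == len(dicts)]

        foo = {"a": 1, "b": 2, "c": 3}
        bar = {"b": 20, "c": 30}
        joe = {"b": 200, "d": 400, "c": 300}
        print(inter(foo, bar, joe))   # ['b', 'c'] -- keys common to all three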

    Read the article

  • Good reasons to migrate PHP libraries to namespaces

    - by Joseph Mastey
    I have a significant number of object libraries written for PHP 5.2.5, and I'm trying to weigh the benefits of retrofitting them for namespaces. I don't have any concerns about the server PHP version at the moment, since any relevant machines are under my control, so I'm not worried about backwards compatibility. As far as the structure of the libraries goes, I use the same convention as Zend Framework (e.g. Library_Module_Class_Name), so I don't currently have any naming conflicts internal to the libraries. I'd anticipate moving the Library and Module parts of those class names to namespaces. That said, if the code is already written, is there any good reason to move over to namespaces? Thanks, Joe

    Read the article

  • select random value from each type

    - by Joseph Mastey
    I have two tables, rating:

    +-----------+-----------+-------------+----------+
    | rating_id | entity_id | rating_code | position |
    +-----------+-----------+-------------+----------+
    |         1 |         1 | Quality     |        0 |
    |         2 |         1 | Value       |        0 |
    |         3 |         1 | Price       |        0 |
    +-----------+-----------+-------------+----------+

    And rating_option:

    +-----------+-----------+------+-------+----------+
    | option_id | rating_id | code | value | position |
    +-----------+-----------+------+-------+----------+
    |         1 |         1 |    1 |     1 |        1 |
    |         2 |         1 |    2 |     2 |        2 |
    |         3 |         1 |    3 |     3 |        3 |
    |         4 |         1 |    4 |     4 |        4 |
    |         5 |         1 |    5 |     5 |        5 |
    |         6 |         2 |    1 |     1 |        1 |
    |         7 |         2 |    2 |     2 |        2 |
    |         8 |         2 |    3 |     3 |        3 |
    |         9 |         2 |    4 |     4 |        4 |
    |        10 |         2 |    5 |     5 |        5 |
    |        11 |         3 |    1 |     1 |        1 |
    |        12 |         3 |    2 |     2 |        2 |
    |        13 |         3 |    3 |     3 |        3 |
    |        14 |         3 |    4 |     4 |        4 |
    |        15 |         3 |    5 |     5 |        5 |
    +-----------+-----------+------+-------+----------+

    I need a SQL query (not application level, must stay in the database) which will select a set of ratings randomly. A sample result would look like this, but would pick a random value for each rating_id on subsequent calls:

    +-----------+-----------+------+-------+----------+
    | option_id | rating_id | code | value | position |
    +-----------+-----------+------+-------+----------+
    |         1 |         1 |    1 |     1 |        1 |
    |         8 |         2 |    3 |     3 |        3 |
    |        15 |         3 |    5 |     5 |        5 |
    +-----------+-----------+------+-------+----------+

    I'm totally stuck on the random part, and grouping by rating_id has been a crap shoot so far. Any MySQL ninjas want to take a stab? Thanks, Joe
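
    One way to keep this entirely in the database is a window function: number the options randomly within each rating_id and keep row 1. The sketch below drives that SELECT from Python's sqlite3 purely so it is runnable; the same query should work on MySQL 8.0+ with RAND() in place of RANDOM(), and on any SQLite new enough for window functions (3.25+). Table and column names follow the question:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE rating_option "
                     "(option_id INT, rating_id INT, code INT, value INT, position INT)")
        # Rebuild the 15 sample rows from the question.
        conn.executemany(
            "INSERT INTO rating_option VALUES (?, ?, ?, ?, ?)",
            [(i + 1, i // 5 + 1, i % 5 + 1, i % 5 + 1, i % 5 + 1) for i in range(15)],
        )

        # Number the options randomly within each rating_id, then keep the first one.
        query = """
            SELECT option_id, rating_id, code, value, position
            FROM (
                SELECT *,
                       ROW_NUMBER() OVER (PARTITION BY rating_id ORDER BY RANDOM()) AS rn
                FROM rating_option
            ) AS t
            WHERE rn = 1
            ORDER BY rating_id
        """
        for row in conn.execute(query):
            print(row)   # one randomly chosen option per rating_id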

    Read the article

  • Array.BinarySearch where a certain condition is met

    - by codymanix
    I have an array of a certain type. Now I want to find an entry where a certain condition is met. What is the preferred way to do this, given the restriction that I don't want to create a temporary object to search for, but instead only want to give a search condition? MyClass[] myArray; // fill and sort array... MyClass item = Array.BinarySearch(myArray, x => x.Name == "Joe"); // is this possible? Or maybe it is possible to use LINQ to solve it?
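
    The question is about C#, but the underlying constraint is language-neutral: a binary search can only target the key the array is sorted on, so "search by condition" really means "search for that key's value". A rough sketch of the idea using Python's bisect module, with MyClass and the "Joe" lookup mirroring the question:

        import bisect
        from dataclasses import dataclass

        @dataclass
        class MyClass:
            name: str

        # The array must already be sorted on the key you binary-search by.
        my_array = sorted([MyClass("Zoe"), MyClass("Joe"), MyClass("Ann")],
                          key=lambda x: x.name)
        names = [x.name for x in my_array]        # parallel list of the sort key

        i = bisect.bisect_left(names, "Joe")      # O(log n) position of the key value
        item = my_array[i] if i < len(names) and names[i] == "Joe" else None
        print(item)                               # MyClass(name='Joe')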

    Read the article

  • HTG Explains: Why Does Rebooting a Computer Fix So Many Problems?

    - by Chris Hoffman
    Ask a geek how to fix a problem you're having with your Windows computer and they'll likely ask "Have you tried rebooting it?" This seems like a flippant response, but rebooting a computer can actually solve many problems. So what's going on here? Why does resetting a device or restarting a program fix so many problems? And why don't geeks try to identify and fix problems rather than use the blunt hammer of "reset it"?

    This Isn't Just About Windows
    Bear in mind that this solution isn't just limited to Windows computers, but applies to all types of computing devices. You'll find the advice "try resetting it" applied to wireless routers, iPads, Android phones, and more. This same advice even applies to software — is Firefox acting slow and consuming a lot of memory? Try closing it and reopening it!

    Some Problems Require a Restart
    To illustrate why rebooting can fix so many problems, let's take a look at the ultimate software problem a Windows computer can face: Windows halts, showing a blue screen of death. The blue screen was caused by a low-level error, likely a problem with a hardware driver or a hardware malfunction. Windows reaches a state where it doesn't know how to recover, so it halts, shows a blue screen of death, gathers information about the problem, and automatically restarts the computer for you. This restart fixes the blue screen of death. Windows has gotten better at dealing with errors — for example, if your graphics driver crashes, Windows XP would have frozen. In Windows Vista and newer versions of Windows, the Windows desktop will lose its fancy graphical effects for a few moments before regaining them. Behind the scenes, Windows is restarting the malfunctioning graphics driver. But why doesn't Windows simply fix the problem rather than restarting the driver or the computer itself? Well, because it can't — the code has encountered a problem and stopped working completely, so there's no way for it to continue. By restarting, the code can start from square one and hopefully it won't encounter the same problem again.

    Examples of Restarting Fixing Problems
    While certain problems require a complete restart because the operating system or a hardware driver has stopped working, not every problem does. Some problems may be fixable without a restart, though a restart may be the easiest option. Windows is slow: let's say Windows is running very slowly. It's possible that a misbehaving program is using 99% CPU and draining the computer's resources. A geek could head to the task manager and look around, hoping to locate the misbehaving process and end it. If an average user encountered this same problem, they could simply reboot their computer to fix it rather than dig through their running processes. Firefox or another program is using too much memory: in the past, Firefox has been the poster child for memory leaks on average PCs. Over time, Firefox would often consume more and more memory, getting larger and larger and slowing down. Closing Firefox will cause it to relinquish all of its memory. When it starts again, it will start from a clean state without any leaked memory. This doesn't just apply to Firefox, but to any software with memory leaks. Internet or Wi-Fi network problems: if you have a problem with your Wi-Fi or Internet connection, the software on your router or modem may have encountered a problem. Resetting the router — just by unplugging it from its power socket and then plugging it back in — is a common solution for connection problems.

    In all cases, a restart wipes away the current state of the software. Any code that's stuck in a misbehaving state will be swept away, too. When you restart, the computer or device will bring the system up from scratch, restarting all the software from square one so it will work just as well as it was working before.

    "Soft Resets" vs. "Hard Resets"
    In the mobile device world, there are two types of "resets" you can perform. A "soft reset" is simply restarting a device normally — turning it off and then on again. A "hard reset" is resetting its software state back to its factory default state. When you think about it, both types of resets fix problems for a similar reason. For example, let's say your Windows computer refuses to boot or becomes completely infected with malware. Simply restarting the computer won't fix the problem, as the problem is with the files on the computer's hard drive — it has corrupted files or malware that loads at startup. However, reinstalling Windows (performing a "Refresh or Reset your PC" operation in Windows 8 terms) will wipe away everything on the computer's hard drive, restoring it to its formerly clean state. This is simpler than looking through the computer's hard drive, trying to identify the exact reason for the problems or trying to ensure you've obliterated every last trace of malware. It's much faster to simply start over from a known-good, clean state instead of trying to locate every possible problem and fix it. Ultimately, the answer is that "resetting a computer wipes away the current state of the software, including any problems that have developed, and allows it to start over from square one." It's easier and faster to start from a clean state than to identify and fix any problems that may be occurring — in fact, in some cases, it may be impossible to fix problems without beginning from that clean state.

    Image Credit: Arria Belli on Flickr, DeclanTM on Flickr

    Read the article

  • php / mysql pagination

    - by arrgggg
    Hi, I have a table with 58 records in a MySQL database. I was able to connect to my database, retrieve all records and build 5 pages with links to view each page using a PHP script. The webpage will look like this: name number john 1232343456 tony 9878768544 jack 3454562345 joe 1232343456 jane 2343454567 andy 2344560987 marcy 9873459876 sean 8374623534 mark 9898787675 nancy 8374650493 1 2 3 4 5 That's the first page of the 58 records, and those 5 numbers at the bottom are links to each page that will display the next 10 records. I got all that. But what I want to do is display the links in this way: 1-10 11-20 21-30 31-40 41-50 51-58 Note: since I have 58 records, the last link goes up to 58 instead of 60. Since I use a loop to create the links, they should change according to the number of records in my table. How can I do this? Thanks.
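
    The label arithmetic is just "start of each page, capped at the total record count". A short sketch, in Python for brevity; the same loop translates almost line for line into PHP:

        # Build pagination labels like "1-10 11-20 ... 51-58" for any record count.
        def page_labels(total_records, per_page=10):
            labels = []
            for start in range(1, total_records + 1, per_page):
                end = min(start + per_page - 1, total_records)   # last page may be short
                labels.append(f"{start}-{end}")
            return labels

        print(" ".join(page_labels(58)))   # 1-10 11-20 21-30 31-40 41-50 51-58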

    Read the article

  • Lamp with mod_fastcgi

    - by Jonathan
    Hi! I am building a CGI application, and I would like it to run as a persistent application that handles each connection, so that all session variables can be kept in memory instead of being saved to a file (or anywhere else) and loaded again on each new connection. I am using LAMP inside a Linux VMware machine, but I can't seem to find how to install the module to make this work, or what to change in httpd.conf. I tried to compile the module, but I couldn't, because my Apache isn't a regular installation (it's a pre-built LAMP stack), and it seems the module needs the Apache source directory to compile against. I saw some coding examples out there, so I guess it's not that hard once it's running OK with Apache. Can you help me with this please? Thanks, Joe

    Read the article

  • Intersection of two querysets in django

    - by unagimiyagi
    Hello, I can't do an AND on two querysets. That is, q1 & q2 gives me the empty set and I do not know why. I have tested this with the simplest cases. I am using django 1.1.1. I basically have objects like this: item1: name="Joe", color="blue"; item2: name="Jim", color="blue", color="white"; item3: name="John", color="red", color="white". Is there something weird about having a many-to-many relationship, or what am I missing? queryset1 = Item.objects.filter(color="blue") gives (item1, item2). queryset2 = Item.objects.filter(color="white") gives (item2, item3). queryset1 & queryset2 gives me the empty set []. The OR operator works fine (I'm using "|"). Why is this so?
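
    Assuming color is a many-to-many relation, this looks like the classic "spanning multi-valued relationships" behaviour: combining the querysets with & folds both conditions into a single query, which would need one related row to be "blue" and "white" at the same time, hence the empty result, while chaining .filter() calls gives each condition its own join. A sketch with hypothetical models (not the questioner's actual code):

        # models.py -- hypothetical models mirroring the question's data
        from django.db import models

        class Color(models.Model):
            name = models.CharField(max_length=20)

        class Item(models.Model):
            name = models.CharField(max_length=20)
            colors = models.ManyToManyField(Color)

        # Items that have BOTH colors: chain filter() calls so each condition
        # gets its own join against the m2m table.
        both = Item.objects.filter(colors__name="blue").filter(colors__name="white")

        # Combining two querysets with & merges the conditions into one query,
        # where a single related Color row would have to be "blue" and "white"
        # at once -- so it comes back empty.
        empty = (Item.objects.filter(colors__name="blue")
                 & Item.objects.filter(colors__name="white"))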

    Read the article

  • How do I make calls to a WCF service with jquery ajax from an SSL-secured page?

    - by NovaJoe
    I have a WCF service returning JSON to jQuery ajax calls and presenting the results on an ASPX page. When the page is NOT under SSL, the ajax calls work perfectly. When the page IS under SSL, the calls fail. I understand that this behavior must be due to the Same Origin Policy (SOP). So, how do I setup my WCF service to accept calls from an SSL-secured page? Does the WCF service also need to be secured? If so, how do I do this? Thanks, Joe

    Read the article

  • What is the best euphemism for a non-developer?

    - by Edward Tanguay
    I'm writing a description for a piece of software that targets the user who is "not technically minded", i.e. a person who uses "browser/office/email" and has a low tolerance for anything technical, he just "wants it to work" without being involved in any of the technical details. What is the best non-disparaging term you have seen to describe this kind of user? non-technical user low-tech user office user normal user technically challenged user non-developer computer joe Surely there is some official, politically-correct retronym for this kind of user that the press and software marketing use.

    Read the article

  • Metrics - A little knowledge can be a dangerous thing (or 'Why you're not clever enough to interpret metrics data')

    - by Jason Crease
    At RedGate Software, I work on a .NET obfuscator called SmartAssembly. Various features of it use a database to store various things (exception reports, name-mappings, etc.). The user is given the option of using either a SQL Server database (which requires them to have Microsoft SQL Server) or a Microsoft Access MDB file (which requires nothing). MDB is the default option, but power users soon switch to using a SQL Server database because it offers better performance and data sharing. In the fashionable spirit of optimization and metrics, an obvious product-management question is "Which is the most popular? SQL Server or MDB?" We've collected data about this, using our 'Feature-Usage-Reporting' technology (available as part of SmartAssembly) and more recently our 'Application Metrics' technology:

    Parameter | Number of users | % of total users | Number of sessions | Number of usages
    SQL Server | 28 | 19.0 | 8115 | 8115
    MDB | 114 | 77.6 | 1449 | 1449

    (As a disclaimer, please note that SmartAssembly has far more than 132 users. This data is just a selection from one build.) So, it would appear that SQL Server is used by fewer users, but more often. Great. But here's why these numbers are useless to me:

    Only the original developers understand the data
    What does a single 'usage' of 'MDB' mean? Does this happen once per run? Once per option change? On clicking the 'Obfuscate Now' button? When running the command-line version or just from the UI version? Each question could skew the data 10-fold either way, and the answers are only known by the developer that instrumented the application in the first place. In other words, only the original developer can interpret the data; product managers cannot interpret the data unaided.

    Most of the data is from uninterested users
    About half of the people who download and run a free trial from the internet quit it almost immediately. Only a small fraction use it sufficiently to make informed choices. Since the MDB option is the default one, we don't know how many of those 114 were people CHOOSING to use the MDB, or how many were JUST HAPPENING to use this MDB default for their 20-second trial. This is a problem we see across all our metrics: are people using X because it's the default, or are they using X because they want to use X? We need to segment the data further, asking what percentage of each group meets our criteria for an 'established user' or 'informed user'. You end up spending hours writing sophisticated and dubious SQL queries to segment the data further. Not fun.

    You can't find out why they used this feature
    Metrics can answer the when and what, but not the why. Why did people use feature X? If you're anything like me, you often click on random buttons in unfamiliar applications just to explore the feature set. If we listened uncritically to metrics at RedGate, we would eliminate the most important and most complex features, which people actually buy the software for, leaving just big buttons on the main page and the About box.

    "Ah, that's interesting!" rather than "Ah, that's actionable!"
    People do love data. Did you know you eat 1201 chickens in a lifetime? But just 4 cows? Interesting, but useless. Often metrics give you a nice number: '5.8% of users have 3 or more monitors'. But unless the statistic is both SURPRISING and ACTIONABLE, it's useless. Most metrics are collected, reviewed with lots of cooing, and then forgotten. Unless a piece of data could change things, it's useless collecting it.

    People get obsessed with significance levels
    The first thing that lots of people do with this data is run a t-test to get a significance level ("Hey! We know with 99.64% confidence that people prefer SQL Server to MDBs!"). Believe me: other causes of error/misinterpretation in your data are FAR more significant than your t-test could ever comprehend.

    Confirmation bias prevents objectivity
    If the data appears to match our instinct, we feel satisfied and move on. If it doesn't, we suspect the data and dig deeper, plummeting down a rabbit hole of segmentation and filtering until we give up and move on. Data is only useful if it can change our preconceptions. Do you trust this dodgy data more than your own understanding, knowledge and intelligence? I don't.

    There's always multiple plausible ways to interpret/action any data
    Let's say we segment the above data, and get this:

    Post-trial users (i.e. those using a paid version after the 14-day free trial is over):
    Parameter | Number of users | % of total users | Number of sessions | Number of usages
    SQL Server | 13 | 9.0 | 1115 | 1115
    MDB | 5 | 4.2 | 449 | 449

    Trial users:
    Parameter | Number of users | % of total users | Number of sessions | Number of usages
    SQL Server | 15 | 10.0 | 7000 | 7000
    MDB | 114 | 77.6 | 1000 | 1000

    How do you interpret this data? It's one of: (1) Mostly SQL Server users buy our software. People who can't afford SQL Server tend to be unable to afford or unwilling to buy our software. Therefore, ditch MDB support. (2) Our MDB support is so poor and buggy that our massive MDB user base doesn't buy it. Therefore, spend loads of money improving it, and think about ditching SQL Server support. (3) People 'graduate' naturally from MDB to SQL Server as they use the software more. Things are fine the way they are. (4) We're marketing the tool wrong. The large number of MDB users represents uninformed downloaders. Tell marketing to aggressively target SQL Server users. To choose an interpretation you need to segment again. And again. And again, and again.

    Opting out is correlated with feature usage
    Metrics tend to be opt-in, which skews the data even further. Between 5% and 30% of people choose to opt in to metrics (often called a 'customer improvement program' or something like that). Casual trial users who are uninterested in your product or company are less likely to opt in. This group is probably also likely to be MDB users. How much does this skew your data by? Who knows?

    It's not all doom and gloom. There are some things metrics can answer well. Environment facts: how many people have 3 monitors? Have Windows 7? Have .NET 4 installed? Have Japanese Windows? Minor optimizations: is the text box big enough for average user input? Performance data: how long does our app take to start? How many databases does the average user have on their server? As you can see, questions about who the user is, rather than what the user does, are easier to answer and action.

    Conclusion
    Use SmartAssembly. If not for the metrics (called 'Feature-Usage-Reporting'), then at least for the obfuscation/error-reporting. Data raises more questions than it answers. Questions about environment are the easiest to answer.

    Read the article

  • Using summary data from dataprovider to populate chart.

    - by arunp
    In Flex, how do I create a summary (say, a total per group) from the data provider and display it in a chart? This is my data provider; I want to display the total Estimate for each Territory as a slice in a pie chart: private var dpFlat:ArrayCollection = new ArrayCollection([ {Region:"Southwest", Territory:"Arizona", Territory_Rep:"Barbara Jennings", Actual:38865, Estimate:40000}, {Region:"Southwest", Territory:"Arizona", Territory_Rep:"Dana Binn", Actual:29885, Estimate:30000}, {Region:"Southwest", Territory:"Central California", Territory_Rep:"Joe Smith", Actual:29134, Estimate:30000}, {Region:"Southwest", Territory:"Nevada", Territory_Rep:"Bethany Pittman", Actual:52888, Estimate:45000}, {Region:"Southwest", Territory:"Northern California", Territory_Rep:"Lauren Ipsum", Actual:38805, Estimate:40000}, {Region:"Southwest", Territory:"Northern California", Territory_Rep:"T.R. Smith", Actual:55498, Estimate:40000}, {Region:"Southwest", Territory:"Southern California", Territory_Rep:"Alice Treu", Actual:44985, Estimate:45000}, {Region:"Southwest", Territory:"Southern California", Territory_Rep:"Jane Grove", Actual:44913, Estimate:45000} ]);

    Read the article

  • postgresql syntax while exists loop

    - by veilig
    I'm working on a function from Joe Celko's book Trees and Hierarchies in SQL for Smarties. I'm trying to delete a subtree from an adjacency list, but part of my function is not working yet.

        WHILE EXISTS -- mark leaf nodes
            (SELECT * FROM OrgChart
              WHERE boss_emp_nbr = -99999
                AND emp_nbr > -99999)
        LOOP
            -- get list of next level subordinates
            DELETE FROM WorkingTable;
            INSERT INTO WorkingTable
            SELECT emp_nbr FROM OrgChart WHERE boss_emp_nbr = -99999;
            -- mark next level of subordinates
            UPDATE OrgChart
               SET emp_nbr = -99999
             WHERE boss_emp_nbr IN (SELECT emp_nbr FROM WorkingTable);
        END LOOP;

    My question: is the WHILE EXISTS correct for use with PostgreSQL? I appear to be stumbling and getting caught in an infinite loop in this part. Perhaps there is a more correct syntax I am unaware of.

    Read the article

  • Erlang on a JVM/CLR

    - by Fortyrunner
    I've just started reading Joe Armstrong's book on Erlang and listened to his excellent talk on Software Engineering Radio. It's an interesting language/system, and one whose time seems to have come around with the advent of multi-core machines. My question is: what is there to stop it being ported to the JVM or CLR? I realise that both virtual machines aren't set up to run the lightweight processes that Erlang calls for, but couldn't these be simulated by threads? Could we see a lightweight or cut-down version of Erlang on a non-Erlang VM?

    Read the article

  • How to select only the first rows for each unique value of a column

    - by nuit9
    Let's say I have a table of customer addresses:

    CName      | AddressLine
    -------------------------------
    John Smith | 123 Nowheresville
    Jane Doe   | 456 Evergreen Terrace
    John Smith | 999 Somewhereelse
    Joe Bloggs | 1 Second Ave

    In the table, one customer like John Smith can have multiple addresses. I need the select query for this table to return only the first row found where there are duplicates in 'CName'. For this table it should return all rows except the 3rd (or the 1st; either of those two addresses is okay, but only one can be returned). Is there a keyword I can add to the SELECT query to filter based on whether the server has already seen the column value before?
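
    If any one address per customer will do, grouping on CName and taking an aggregate of AddressLine is the simplest portable form; on servers with window functions, ROW_NUMBER() OVER (PARTITION BY CName ...) gives finer control over which row is kept. A sketch driven from Python's sqlite3, with the table name Customers assumed since the question doesn't give one:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE Customers (CName TEXT, AddressLine TEXT)")
        conn.executemany("INSERT INTO Customers VALUES (?, ?)", [
            ("John Smith", "123 Nowheresville"),
            ("Jane Doe",   "456 Evergreen Terrace"),
            ("John Smith", "999 Somewhereelse"),
            ("Joe Bloggs", "1 Second Ave"),
        ])

        # One row per customer; MIN() just picks a deterministic representative address.
        query = "SELECT CName, MIN(AddressLine) AS AddressLine FROM Customers GROUP BY CName"
        for row in conn.execute(query):
            print(row)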

    Read the article

  • Is ActiveMQ unreliable?

    - by user122991
    Hello, We have been using ActiveMQ 5.2 in our distributed enterprise application for about 3 months. During that time, we have experienced debilitating failures at least twice weekly. In particular, we see: 1) Topic publisher has its connection arbitrarily closed and experiences EOF on attempt to publish. Note well that this issue is not a function of some timeout. It does not correlate reliably with any inactivity. 2) Queue listeners never receive message. Message simply sits on Queue. 2) is much rarer (hardly ever) than 1). In both cases, the failures are highly intermittent-- they cannot be reliably reproduced through any testing usage pattern. Also, there are no errors or warning in the AMQ logs. Have others experienced similar problems? Is there an opinion that some other JMS provider is more reliable? thanks, Joe

    Read the article

  • hibernate Query by primary key

    - by adisembiring
    Hi, I want to create a query by primary key. Suppose I have a primary-key class, PersonKey, whose properties are name and id, and a Person class whose properties are a PersonKey, address and DOB. Now I want to search for a person by primary key. First I create an instance of PersonKey and set the name to "joe" and the id to "007". Can I get the person by ID by passing the key variable, e.g. person.findByKey(someKey), without going through Criteria logic?

    Read the article

  • Objective-C Custom extend

    - by ryanjm.mp
    I have a couple of classes that have nearly identical code. Only a string or two is different between them. What I would like to do is to make them from another class that defines those functions and then uses constants or something else to define those strings that are different. I'm not sure if "___" is inheritance or extending or what. That is what I need help with. For example:

        objectA.m:
        -(void)helloWorld {
            NSLog(@"Hello %@", child.name);
        }

        objectBob.m:
        #define name @"Bob"

        objectJoe.m:
        #define name @"Joe"

    (I'm not sure if it's legal to define strings, but this gets the point across.) It would be ideal if objectBob.m and objectJoe.m didn't even have to define the methods, just their relationship to objectA.m. Is there any way to do something like this? If all else fails I'll just make objectA.m:

        -(void)helloWorld:(NSString *)name {
            NSLog(@"Hello %@", name);
        }

    And have the other files call that function (and just #import objectA.m).

    Read the article

  • Turning A Stacked List into workable data

    - by BoSox
    In Excel I have a list of names that appear stacked within a cell, and I want each name in its own column. I was thinking Python might be a good way to do this. Example:

    Joe Smith
    John Hawk
    Mike Green
    Lauren Smith

    One cell will look exactly like that, with each name on its own line within the cell but all of the names contained in the one cell. I have 50 cells, each with 1-20 stacked names, and I want to put each name in its own cell on a given row. So, in my example all of those names would occupy the same row, but each would have its own column. Any ideas?
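
    Python is a reasonable fit. A minimal sketch with openpyxl, assuming the stacked names live in column A of the active sheet of a file called names.xlsx; the filename, source column and output column positions are all placeholders:

        from openpyxl import load_workbook

        wb = load_workbook("names.xlsx")        # placeholder filename
        ws = wb.active

        for row in ws.iter_rows(min_col=1, max_col=1):
            cell = row[0]
            if not cell.value:
                continue
            # Names inside one cell are separated by line breaks (Alt+Enter in Excel).
            names = [n.strip() for n in str(cell.value).split("\n") if n.strip()]
            for offset, name in enumerate(names, start=2):   # write into columns B, C, ...
                ws.cell(row=cell.row, column=offset, value=name)

        wb.save("names_split.xlsx")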

    Read the article
