Search Results

Search found 25180 results on 1008 pages for 'post processing'.


  • FluentNHibernate - ClassMap vs IAutoMappingOverride

    - by Mendy
    In FluentNHibernate, when should I use ClassMap<Entity> and when should I use IAutoMappingOverride<Entity> for my EntityMap classes?

        public class PostMap : ClassMap<Post> {
            public PostMap() { ... }
        }

    vs

        public class PostMap : IAutoMappingOverride<Post> {
            public void Override(AutoMapping<Post> mapping) { ... }
        }


  • Detect, Analyze, Act – Fast!

    - by Ajay Khanna
    In a fast-changing business environment, it becomes crucial to identify business opportunities and business issues as soon as possible. If identified at the right time, business managers can address issues before they escalate into serious problems and can take advantage of new opportunities before the competition does. Moreover, they have to be efficient and do this at the right cost. Success depends on how responsive the organization is to emerging events and a changing environment. These events can be customer issues, competitors' moves, changes in regulations, or changes in company policies. In order to be responsive in such situations, organizations first need to identify and track these situations. They can do that via business activity monitoring (BAM) and complex event processing (CEP). A unified monitoring dashboard helps put together a comprehensive picture of the situation at hand and provides deep insight for taking the proper actions. With CEP, businesses can connect all the relevant events, detect event patterns, and take immediate action using a Business Process Management (BPM) system. So to be responsive we need:

    Real-Time Visibility with Business Activity Monitoring. You can use BAM technology to monitor progress, track performance, meet service-level agreements (SLAs), manage exceptions, and issue alerts to an employee or application when a process is not functioning properly, all in real time. A unified monitoring dashboard helps you maintain a complete picture of each situation so you can take action effectively. BAM works hand in hand with BPM software to discover the significant activities that drive business success.

    Real-Time Sense and Respond. An event-driven BPM solution enables each step in a business process to be informed not only by the previous step, but also by any other step, data, and pattern of behavior deemed relevant to that step. This gives the company the ability to “sense and respond.” You can describe interesting event patterns and event correlations and monitor the business in real time. Whenever a pre-defined pattern emerges, you can take actions such as raising alerts, sending notifications, or kicking off another business process. The synergy made possible by integrating activity monitoring, event processing, and BPM makes it possible for managers to keep a finger on the pulse of their business. Business managers can now respond to customers faster, respond to competition faster, reduce fraud, and do more cross-selling. Read more about being responsive in the whitepaper “The Instantly Responsive Enterprise: Integrating BPM and Complex Event Processing” in the BPM Resource Kit.


  • What's wrong with this task queue setup?

    - by Peter Farmer
    I've set up this task queue implementation on a site I host for a customer. It has a cron job which runs each morning at 2am, "/admin/tasks/queue"; this queues up emails to be sent out via "/admin/tasks/email", and uses cursors so as to do the queuing in small chunks. For some reason last night /admin/tasks/queue kept getting run by this code and so sent out my whole quota of emails :/. Have I done something wrong with this code?

        class QueueUpEmail(webapp.RequestHandler):
            def post(self):
                subscribers = Subscriber.all()
                subscribers.filter("verified =", True)
                last_cursor = memcache.get('daily_email_cursor')
                if last_cursor:
                    subscribers.with_cursor(last_cursor)
                subs = subscribers.fetch(10)
                logging.debug("POST - subs count = %i" % len(subs))
                if len(subs) < 10:
                    logging.debug("POST - Less than 10 subscribers in subs")
                    # Subscribers left is less than 10, don't reschedule the task
                    for sub in subs:
                        task = taskqueue.Task(url='/admin/tasks/email',
                                              params={'email': sub.emailaddress, 'day': sub.day_no})
                        task.add("email")
                    memcache.delete('daily_email_cursor')
                else:
                    logging.debug("POST - Greater than 10 subscibers left in subs - reschedule")
                    # Subscribers is 10 or greater, reschedule
                    for sub in subs:
                        task = taskqueue.Task(url='/admin/tasks/email',
                                              params={'email': sub.emailaddress, 'day': sub.day_no})
                        task.add("email")
                    cursor = subscribers.cursor()
                    memcache.set('daily_email_cursor', cursor)
                    task = taskqueue.Task(url="/admin/tasks/queue", params={})
                    task.add("queueup")


  • RHEL Cluster FAIL after changing time on system

    - by Eugene S
    I've encountered a strange issue. I had to change the time on my Linux RHEL cluster system. I've done it using the following command from the root user:

        date +%T -s "10:13:13"

    After doing this, some message appeared relating to <emerg> #1: Quorum Dissolved, however I didn't capture it completely. In order to investigate the issue I looked at /var/log/messages and I've discovered the following:

        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] entering GATHER state from 0.
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] Creating commit token because I am the rep.
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] Storing new sequence id for ring 354
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] entering COMMIT state.
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] entering RECOVERY state.
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] position [0] member 192.168.1.49:
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] previous ring seq 848 rep 192.168.1.49
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] aru 61 high delivered 61 received flag 1
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] Did not need to originate any messages in recovery.
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] Sending initial ORF token
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM  ] CLM CONFIGURATION CHANGE
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM  ] New Configuration:
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM  ] #011r(0) ip(192.168.1.49)
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM  ] Members Left:
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM  ] #011r(0) ip(192.168.1.51)
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM  ] Members Joined:
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CMAN ] quorum lost, blocking activity
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM  ] CLM CONFIGURATION CHANGE
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM  ] New Configuration:
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM  ] #011r(0) ip(192.168.1.49)
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM  ] Members Left:
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM  ] Members Joined:
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [SYNC ] This node is within the primary component and will provide service.
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] entering OPERATIONAL state.
        Mar 22 16:40:42 hsmsc50sfe1a kernel: dlm: closing connection to node 2
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM  ] got nodejoin message 192.168.1.49
        Mar 22 16:40:42 hsmsc50sfe1a clurgmgrd[25809]: <emerg> #1: Quorum Dissolved
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CPG  ] got joinlist message from node 1
        Mar 22 16:40:42 hsmsc50sfe1a ccsd[25705]: Cluster is not quorate. Refusing connection.
        Mar 22 16:40:42 hsmsc50sfe1a ccsd[25705]: Error while processing connect: Connection refused
        Mar 22 16:40:42 hsmsc50sfe1a ccsd[25705]: Invalid descriptor specified (-21).
        Mar 22 16:40:42 hsmsc50sfe1a ccsd[25705]: Someone may be attempting something evil.
        Mar 22 16:40:42 hsmsc50sfe1a ccsd[25705]: Error while processing disconnect: Invalid request descriptor
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] entering GATHER state from 9.
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] Creating commit token because I am the rep.
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] Storing new sequence id for ring 358
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] entering COMMIT state.
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] entering RECOVERY state.
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] position [0] member 192.168.1.49:
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] previous ring seq 852 rep 192.168.1.49
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] aru f high delivered f received flag 1
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] position [1] member 192.168.1.51:
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] previous ring seq 852 rep 192.168.1.51
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] aru f high delivered f received flag 1
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] Did not need to originate any messages in recovery.
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] Sending initial ORF token
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM  ] CLM CONFIGURATION CHANGE
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM  ] New Configuration:
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM  ] #011r(0) ip(192.168.1.49)
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM  ] Members Left:
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM  ] Members Joined:
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM  ] CLM CONFIGURATION CHANGE
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM  ] New Configuration:
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM  ] #011r(0) ip(192.168.1.49)
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM  ] #011r(0) ip(192.168.1.51)
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM  ] Members Left:
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM  ] Members Joined:
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM  ] #011r(0) ip(192.168.1.51)
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [SYNC ] This node is within the primary component and will provide service.
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [TOTEM] entering OPERATIONAL state.
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [MAIN ] Node chb_sfe2a not joined to cman because it has existing state
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM  ] got nodejoin message 192.168.1.49
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CLM  ] got nodejoin message 192.168.1.51
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CPG  ] got joinlist message from node 1
        Mar 22 16:40:42 hsmsc50sfe1a openais[25715]: [CPG  ] got joinlist message from node 2
        Mar 22 16:40:42 hsmsc50sfe1a ccsd[25705]: Cluster is not quorate. Refusing connection.
        Mar 22 16:40:42 hsmsc50sfe1a ccsd[25705]: Error while processing connect: Connection refused
        Mar 22 16:40:42 hsmsc50sfe1a ccsd[25705]: Invalid descriptor specified (-111).
        Mar 22 16:40:42 hsmsc50sfe1a ccsd[25705]: Someone may be attempting something evil.
        Mar 22 16:40:42 hsmsc50sfe1a ccsd[25705]: Error while processing get: Invalid request descriptor
        Mar 22 16:40:42 hsmsc50sfe1a ccsd[25705]: Invalid descriptor specified (-21).
        Mar 22 16:40:42 hsmsc50sfe1a ccsd[25705]: Someone may be attempting something evil.
        Mar 22 16:40:42 hsmsc50sfe1a ccsd[25705]: Error while processing disconnect: Invalid request descriptor

    How could this be related to the time change procedure I performed?


  • Multicore Expo

    Event to be held in conjunction with ESC.
    Multi-core processor - Central processing unit - Parallel computing - X86 - Operating system


  • Broken package after update: linux-headers, error brokencount >0

    - by escozul
    Ubuntu 12.04. After an update, I get a red warning icon in the system tray, warning about an error: broken count > 0. Opening Update Manager, I see that the broken package is linux-headers-3.2.0-33-generic-pae (new install). Specifically, I have my Ubuntu on an AspireOne with 8 GB of internal storage. I tried apt-get clean as suggested in another question on this site, and tried reinstalling the package in Synaptic. I have tried to reboot, but to no avail. I have also tried apt-get install --fix-broken and I get the following:

        sudo apt-get install --fix-broken
        [sudo] password for elina:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following extra packages will be installed:
          linux-headers-3.2.0-33-generic-pae
        The following NEW packages will be installed:
          linux-headers-3.2.0-33-generic-pae
        0 upgraded, 1 newly installed, 0 to remove and 38 not upgraded.
        1 not fully installed or removed.
        Need to get 0 B/977 kB of archives.
        After this operation 11,3 MB of additional disk space will be used.
        Do you want to continue [Y/n]; y
        (Reading database ... 437051 files and directories currently installed.)
        Unpacking linux-headers-3.2.0-33-generic-pae (from .../linux-headers-3.2.0-33-generic-pae_3.2.0-33.52_i386.deb) ...
        dpkg: error processing /var/cache/apt/archives/linux-headers-3.2.0-33-generic-pae_3.2.0-33.52_i386.deb (--unpack):
         unable to create `/usr/src/linux-headers-3.2.0-33-generic-pae/include/config/usb/gspca/sonixb.h.dpkg-new' (while processing `./usr/src/linux-headers-3.2.0-33-generic-pae/include/config/usb/gspca/sonixb.h'): No space left on device
        No apport report written because the error message indicates a disk full error
        dpkg-deb: error: subprocess paste was killed by signal (Broken pipe)
        Errors were encountered while processing:
         /var/cache/apt/archives/linux-headers-3.2.0-33-generic-pae_3.2.0-33.52_i386.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I've tried all the suggestions I could find:

        sudo apt-get clean
        sudo apt-get autoclean
        sudo apt-get autoremove
        sudo apt-get update
        sudo apt-get upgrade
        sudo apt-get -f install
        sudo apt-get install --fix-broken

    Then I saw that the error mentioned free space. So I did a df -h and the result was:

        Filesystem      Size  Used  Avail  Use%  Mounted on
        /dev/sda1       7,0G  5,5G  1,1G   84%   /
        udev            235M  4,0K  235M    1%   /dev
        tmpfs            97M  816K   96M    1%   /run
        none            5,0M     0  5,0M    0%   /run/lock
        none            242M  352K  242M    1%   /run/shm

    I see that on my root folder I have 1.1 GB free. The broken package is linux-headers-3.2.0-33-generic-pae_3.2.0-33.52_i386.deb, which only takes up 11.3 MB on my hard drive. I'm soooo lost. I really hope there is something I'm missing here. I don't want to go about reformatting this bucket. It's really not worth the time. Any help for fixing this would be hot.


  • Solving Big Problems with Oracle R Enterprise, Part I

    - by dbayard
    Abstract: This blog post will show how we used Oracle R Enterprise to tackle a customer’s big calculation problem across a big data set.

    Overview

    Databases are great for managing large amounts of data in a central place with rigorous enterprise-level controls. R is great for doing advanced computations. Sometimes you need to do advanced computations on large amounts of data, subject to rigorous enterprise-level concerns. This blog post shows how Oracle R Enterprise (R working together with the Oracle Database) enabled us to do some pretty sophisticated calculations across 1 million accounts (each with many detailed records) in minutes.

    The problem

    A financial services customer of mine has a need to calculate the historical internal rate of return (IRR) for its customers’ portfolios. This information is needed for customer statements and the online web application. In the past, they had solved this with a home-grown application that pulled trade and account data out of their data warehouse and ran the calculations. But this home-grown application was not able to do this fast enough, plus it was a challenge for them to write and maintain the code that did the IRR calculation.

    IRR – a problem that R is good at solving

    Internal rate of return is an interesting calculation in that in most real-world scenarios it is impractical to calculate exactly. Rather, IRR is a calculation where approximation techniques need to be used. In this blog post, we will discuss calculating the “money weighted rate of return”, but in the actual customer proof of concept we used R to calculate both money weighted rates of return and time weighted rates of return. You can learn more about the money weighted rate of return here: http://www.wikinvest.com/wiki/Money-weighted_return

    First Steps - Calculating IRR in R

    We will start with calculating the IRR in standalone/desktop R. In our second post, we will show how to take this desktop R function, deploy it to an Oracle Database, and make it work at real-world scale. The first step was to get some sample data. For a historical IRR calculation, you have balances and cash flows. In our case, the customer provided us with several accounts’ worth of sample data in Microsoft Excel. The figure shows part of the spreadsheet of sample data. The data provides balances and cash flows for a sample account (BMV = beginning market value, FLOW = cash flow in/out of the account, EMV = ending market value).

    Once we had the sample spreadsheet, the next step was to read the Excel data into R. This is something that R does well; R offers multiple ways to work with spreadsheet data. For instance, one could save the spreadsheet as a .csv file. In our case, the customer provided a spreadsheet file containing multiple sheets, where each sheet provided data for a different sample account. To handle this easily, we took advantage of the RODBC package, which allowed us to read the Excel data sheet-by-sheet without having to create individual .csv files. We wrote ourselves a little helper function called getsheet() around the RODBC package. Then we loaded all of the sample accounts into a data.frame called SimpleMWRRData.

    Writing the IRR function

    At this point, it was time to write the money weighted rate of return (MWRR) function itself. The definition of MWRR is easily found on the internet, or if you are old school you can look in an investment performance textbook. In the customer proof, we based our calculations off the ones defined in The Handbook of Investment Performance: A User’s Guide by David Spaulding, since this is the reference book used by the customer. (One of the nice things we found during the course of this proof-of-concept is that by using R to write our IRR functions we could easily incorporate the specific variations and business rules of the customer into the calculation.)

    The key thing with calculating IRR is the need to solve a complex equation with a numerical approximation technique. For IRR, you need to find the value of the rate of return (r) that sets the net present value of all the flows in and out of the account to zero. With R, we solve this by defining our NPV function, where bmv is the beginning market value, cf is a vector of cash flows, t is a vector of times (relative to the beginning), emv is the ending market value, and tend is the ending time. Since solving for r is a one-dimensional optimization problem, we decided to take advantage of R’s optimize method (http://stat.ethz.ch/R-manual/R-patched/library/stats/html/optimize.html). The optimize method can be used to find a minimum or maximum; to find the value of r where our npv function is closest to zero, we wrapped our npv function inside the abs function and asked optimize to find the minimum. In the call to optimize, low and high are scalars that indicate the range to search for an answer. To test this out, we need to set values for bmv, cf, t, emv, tend, low, and high. We will set low and high to some reasonable defaults. For example, this account had a negative 2.2% money weighted rate of return.

    Enhancing and Packaging the IRR function

    With numerical approximation methods like optimize, sometimes you will not be able to find an answer with your initial set of inputs. To account for this, our approach was to first try to find an answer for r within a narrow range, then, if we did not find an answer, try calling optimize() again with a broader range. See the R help page on optimize() for more details about the search range and its algorithm. At this point, we can now write a simplified version of our MWRR function. (Our real-world version is more sophisticated in that it calculates rates of return for 5 different time periods [since inception, last quarter, year-to-date, last year, year before last year] in a single invocation. In our actual customer proof, we also defined time-weighted rate of return calculations. The beauty of R is that it was very easy to add these enhancements and additional calculations to our IRR package.)

    To simplify code deployment, we then created a new package of our IRR functions and sample data. For this blog post, we only need to include our SimpleMWRR function and our SimpleMWRRData sample data. We created the shell of the package, and to turn this package skeleton into something usable, at a minimum you need to edit the SimpleMWRR.Rd and SimpleMWRRData.Rd files in the \man subdirectory. In those files, you need to at least provide a value for the “title” section. Once that is done, you can change directory to the IRR directory and build the package at the command-line. The myIRR package for this blog post (which has both the SimpleMWRR source and the SimpleMWRRData sample data) is downloadable from here: myIRR package.

    Testing the myIRR package

    Here is an example of testing our IRR function once it was converted to an installable package.

    Calculating IRR for All the Accounts

    So far, we have shown how to calculate IRR for a single account. The real-world issue is: how do you calculate IRR for all of the accounts? This is the kind of situation where we can leverage the “Split-Apply-Combine” approach (see http://www.cscs.umich.edu/~crshalizi/weblog/815.html). Given that our sample data can fit in memory, one easy approach is to use R’s “by” function. (Other approaches to Split-Apply-Combine, such as plyr, can also be used. See http://4dpiecharts.com/2011/12/16/a-quick-primer-on-split-apply-combine-problems/.) Here is an example showing the use of “by” to calculate the money weighted rate of return for each account in our sample data set.

    Recap and Next Steps

    At this point, you’ve seen the power of R being used to calculate IRR. There were several good things:

    - R could easily work with the spreadsheets of sample data we were given.
    - R’s optimize() function provided a nice way to solve for IRR: it was both fast and allowed us to avoid having to code our own iterative approximation algorithm.
    - R was a convenient language to express the customer-specific variations, business rules, and exceptions that often occur in real-world calculations; these could be easily added to our IRR functions.
    - The Split-Apply-Combine technique can be used to perform calculations of IRR for multiple accounts at once.

    However, there are several challenges yet to be conquered at this point in our story:

    - The actual data that needs to be used lives in a database, not in a spreadsheet.
    - The actual data is much, much bigger: too big to fit into the normal R memory space and too big to want to move across the network.
    - The overall process needs to run fast: much faster than a single processor.
    - The actual data needs to be kept secured: another reason not to want to move it from the database and across the network.
    - And the process of calculating the IRR needs to be integrated together with other database ETL activities, so that IRRs can be calculated as part of the data warehouse refresh processes.

    In our next blog post in this series, we will show you how Oracle R Enterprise solved these challenges.
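
    A rough Python analogue of the NPV-minimization idea described above (a sketch only: the NPV convention used is one common money-weighted formulation, the function names are illustrative, and this is not the post's R code):

        from scipy.optimize import minimize_scalar

        def npv(r, bmv, cf, t, emv, tend):
            # Grow the opening balance and each flow to the end date at rate r,
            # then compare with the closing balance.
            return (bmv * (1 + r) ** tend
                    + sum(c * (1 + r) ** (tend - ti) for c, ti in zip(cf, t))
                    - emv)

        def simple_mwrr(bmv, cf, t, emv, tend, low=-0.99, high=10.0):
            # Mirror the approach in the post: minimize |NPV(r)| over a bounded range.
            result = minimize_scalar(lambda r: abs(npv(r, bmv, cf, t, emv, tend)),
                                     bounds=(low, high), method="bounded")
            return result.x

    As in the post, a real implementation would retry with a wider (low, high) range whenever no answer is found in the narrow one.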


  • Is it better to build HTML Code string on the server or on the client side?

    - by Ionut
    The result of the following process should be an HTML form. This form's structure varies from one user to another. For example, there might be a different number of rows, or there may be a need for rowspan and colspan. When the user chooses to see this table, an ajax call is made to the server, where the structure of the table is determined from the database. Then I have to create the HTML code for the table structure, which will be inserted into the DOM via JavaScript. The following problem comes to mind: where should I build the HTML code which will be inserted into the DOM? On the server side, or should I send some parameters back from the ajax call and build the structure on the client? So the main question is about good practice when it comes to deciding between server-side and client-side processing. Thank you!
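
    A minimal sketch of the two options, assuming a Django back end (the view names, the partials/table.html template, and the get_table_structure() helper are hypothetical, not from the question):

        from django.http import JsonResponse
        from django.template.loader import render_to_string

        def table_as_html(request):
            # Option 1: build the HTML fragment on the server; the client inserts it as-is.
            rows = get_table_structure(request.user)  # hypothetical helper that reads the DB
            html = render_to_string("partials/table.html", {"rows": rows})
            return JsonResponse({"html": html})

        def table_as_data(request):
            # Option 2: return only the structure as JSON; JavaScript builds the markup.
            rows = get_table_structure(request.user)
            return JsonResponse({"rows": rows})

    Either way the ajax handler stays thin; the trade-off is mainly where the templating logic lives and how much data crosses the wire.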


  • Django: ordering by backward related field property

    - by Silver Light
    Hello! I have two models related one-to-many: a Post and a Comment:

        class Post(models.Model):
            title = models.CharField(max_length=200)
            content = models.TextField()

        class Comment(models.Model):
            post = models.ForeignKey('Post')
            body = models.TextField()
            date_added = models.DateTimeField()

    I want to get a list of posts, ordered by the date of their latest comment. If I were to write a custom SQL query, it would look like this:

        SELECT `posts`.*, MAX(`comments`.`date_added`) AS `date_of_last_comment`
        FROM `posts`, `comments`
        WHERE `posts`.`id` = `comments`.`post_id`
        GROUP BY `posts`.`id`
        ORDER BY `date_of_last_comment` DESC

    How can I do the same thing using the Django ORM?
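
    One common ORM approach (a sketch, assuming the default reverse lookup name comment, since the ForeignKey sets no related_name) is to annotate each post with its latest comment date and order by it:

        from django.db.models import Max

        posts = (Post.objects
                 .annotate(date_of_last_comment=Max('comment__date_added'))
                 .order_by('-date_of_last_comment'))

    The annotation produces essentially the same GROUP BY/MAX query as the hand-written SQL above.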


  • How can I have custom fields on the posts page in WordPress?

    - by Luccas
    First I created a home.php page to replace index.php, so that I can add some custom fields to this new one and have it show the latest 3 posts. On the home.php page I put:

        <?php echo get_post_meta($post->ID, 'test', true); ?>"/>

    but it doesn't work, because it tries to get the post ID and not the ID of the page. If I put 18 (the ID of the page) directly, it works, but I want it dynamically:

        <?php echo get_post_meta(18, 'test', true); ?>"/>

    And this condition is not satisfied when I test for it:

        if($post->post_type == 'page'){
            echo 'This item is a page and the ID is: '.$post->ID;
        }


  • What are good CLI tools for JSON?

    - by jasonmp85
    General Problem

    Though I may be diagnosing the root cause of an event, determining how many users it affected, or distilling timing logs in order to assess the performance and throughput impact of a recent code change, my tools stay the same: grep, awk, sed, tr, uniq, sort, zcat, tail, head, join, and split. To glue them all together, Unix gives us pipes, and for fancier filtering we have xargs. If these fail me, there's always perl -e. These tools are perfect for processing CSV files, tab-delimited files, log files with a predictable line format, or files with comma-separated key-value pairs. In other words, files where each line has next to no context.

    XML Analogues

    I recently needed to trawl through Gigabytes of XML to build a histogram of usage by user. This was easy enough with the tools I had, but for more complicated queries the normal approaches break down. Say I have files with items like this:

        <foo user="me">
            <baz key="zoidberg" value="squid" />
            <baz key="leela" value="cyclops" />
            <baz key="fry" value="rube" />
        </foo>

    And let's say I want to produce a mapping from user to average number of <baz>s per <foo>. Processing line-by-line is no longer an option: I need to know which user's <foo> I'm currently inspecting so I know whose average to update. Any sort of Unix one liner that accomplishes this task is likely to be inscrutable. Fortunately in XML-land, we have wonderful technologies like XPath, XQuery, and XSLT to help us. Previously, I had gotten accustomed to using the wonderful XML::XPath Perl module to accomplish queries like the one above, but after finding a TextMate Plugin that could run an XPath expression against my current window, I stopped writing one-off Perl scripts to query XML. And I just found out about XMLStarlet which is installing as I type this and which I look forward to using in the future.

    JSON Solutions?

    So this leads me to my question: are there any tools like this for JSON? It's only a matter of time before some investigation task requires me to do similar queries on JSON files, and without tools like XPath and XSLT, such a task will be a lot harder. If I had a bunch of JSON that looked like this:

        {
            "firstName": "Bender",
            "lastName": "Robot",
            "age": 200,
            "address": {
                "streetAddress": "123",
                "city": "New York",
                "state": "NY",
                "postalCode": "1729"
            },
            "phoneNumber": [
                { "type": "home", "number": "666 555-1234" },
                { "type": "fax", "number": "666 555-4567" }
            ]
        }

    And wanted to find the average number of phone numbers each person had, I could do something like this with XPath:

        fn:avg(/fn:count(phoneNumber))

    Questions

    - Are there any command-line tools that can "query" JSON files in this way?
    - If you have to process a bunch of JSON files on a Unix command line, what tools do you use?
    - Heck, is there even work being done to make a query language like this for JSON?
    - If you do use tools like this in your day-to-day work, what do you like/dislike about them? Are there any gotchas?

    I'm noticing more and more data serialization is being done using JSON, so processing tools like this will be crucial when analyzing large data dumps in the future. Language libraries for JSON are very strong and it's easy enough to write scripts to do this sort of processing, but to really let people play around with the data shell tools are needed.

    Related Questions

    - Grep and Sed Equivalent for XML Command Line Processing
    - Is there a query language for JSON?
    - JSONPath or other XPath like utility for JSON/Javascript; or Jquery JSON
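
    As a rough illustration of the kind of ad-hoc query asked about above, here is a minimal sketch that computes the average number of phone numbers per person using only Python's standard json module (the assumption of one JSON document per file passed on the command line is mine, not from the question):

        import json
        import sys

        # Load one JSON object per file given on the command line.
        people = [json.load(open(path)) for path in sys.argv[1:]]

        # Average number of phoneNumber entries per person.
        counts = [len(p.get("phoneNumber", [])) for p in people]
        print(sum(counts) / float(len(counts)))

    A dedicated command-line tool such as jq makes this a one-liner, but a short script like this is often enough when nothing else is installed.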


  • passing request params from jQuery to jersey service using json

    - by ccduga
    Hi, I'm trying to POST (cross domain) some data to a Jersey web service and retrieve a response (a GenericEntity object). The post successfully gets mapped to my Jersey endpoint, however when I pull the parameters from the request they are empty:

        $.ajax({
            type: "POST",
            dataType: "application/json; charset=utf-8",
            url: jerseyNewUserUrl+'?jsoncallback=?',
            data: {'id':id, 'firstname':firstname, 'lastname':lastname},
            success: function(data, textStatus) {
                $('#jsonResult').html("some data: " + data.responseMsg);
            },
            error: function (XMLHttpRequest, textStatus, errorThrown) {
                alert('error');
            }
        });

    This is my Jersey endpoint:

        @POST
        @Produces( { "application/x-javascript", MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML })
        @Path("/new")
        public JSONWithPadding addNewUser(@QueryParam("jsoncallback") @DefaultValue("empty") final String argJsonCallback,
                @QueryParam("id") final String argID,
                @QueryParam("firstname") final String argFirstName,
                @QueryParam("lastname") final String argLastName)

    Is there something missing from my $.ajax call?


  • Handing over FaceBook SessionKey from iPhone to PHP

    - by Stefan Mayr
    Hi! Users authenticate normally in my iPhone app, and the permission for publish_stream is requested correctly. When the user wants to post something on his wall, the text is transmitted to my backend and the backend posts it (with the transmitted session key) to the user's Facebook wall. The first post is always possible, but on the second the response is: "The user hasn't authorized the application to perform this action". But I was already able to post just seconds before. What is wrong? Best, Steve


  • C# Joins/Where with Linq and Lambda

    - by David
    Hello, I'm having trouble with a query written in LINQ and lambdas. So far, I'm getting a lot of errors. Here's my code:

        int id = 1;
        var query = database.Posts.Join(database.Post_Metas,
            post => database.Posts.Where(x => x.ID == id),
            meta => database.Post_Metas.Where(x => x.Post_ID == id),
            (post, meta) => new { Post = post, Meta = meta });

    I'm new to using LINQ, so I'm not sure if this query is correct.


  • How to access method variables from within an anonymous function in JavaScript?

    - by Hussain
    I'm writing a small ajax class for personal use. In the class, I have a "post" method for sending post requests. The post method has a callback parameter. In the onreadystatechange property, I need to call the callback method. Something like this:

        this.requestObject.onreadystatechange = function() {
            callback(this.responseText);
        }

    However, I can't access the callback variable from within the anonymous function. How can I bring the callback variable into the scope of the onreadystatechange anonymous function? edit: Here's the full code so far:

        function request() {
            this.initialize = function(errorHandeler) {
                try {
                    try {
                        this.requestObject = new XDomainRequest();
                    } catch(e) {
                        try {
                            this.requestObject = new XMLHttpRequest();
                        } catch (e) {
                            try {
                                this.requestObject = new ActiveXObject("Msxml2.XMLHTTP"); // newer versions of IE5+
                            } catch (e) {
                                this.requestObject = new ActiveXObject("Microsoft.XMLHTTP"); // older versions of IE5+
                            }
                        }
                    }
                } catch(e) {
                    errorHandeler();
                }
            }

            this.post = function(url, data) {
                var response;
                var escapedData = "";
                if (typeof data == 'object') {
                    for (i in data) {
                        escapedData += escape(i)+'='+escape(data[i])+'&';
                    }
                    escapedData = escapedData.substr(0, escapedData.length-1);
                } else {
                    escapedData = escape(data);
                }
                this.requestObject.open('post', url, true);
                this.requestObject.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
                this.requestObject.setRequestHeader("Content-length", data.length);
                this.requestObject.setRequestHeader("Connection", "close");
                this.requestObject.onreadystatechange = function() {
                    if (this.readyState == 4) {
                        // call callback function
                    }
                }
                this.requestObject.send(data);
            }
        }


  • Creating a smart text generator

    - by royrules22
    I'm doing this for fun (or as 4chan says, "for teh lolz"), and if I learn something along the way, all the better. I took an AI course almost 2 years ago now and I really enjoyed it, but I managed to forget everything, so this is a way to refresh that. Anyway, I want to be able to generate text given a set of inputs. Basically this will read forum inputs (or maybe Twitter tweets) and then generate a comment based on the learning. Now, the simplest way would be to use a Markov chain text generator, but I want something a little bit more complex than that, as the Markov chain basically only learns by word order (which word is more likely to appear after word x given the input text). I'm trying to see if there's something I can do to make it a little bit smarter. For example, I want it to do something like this:

    - Learn from a large selection of posts in a message board, but don't weight it too much.
    - For each post:
      - Learn from the other comments in that post and weigh these inputs higher.
      - Generate a comment and post it.
      - See what other users' reaction to your post was. If good, weigh it positively so you make more posts that are similar to the one made, and vice versa if negative.

    It's the weighing and learning-from-mistakes part that I'm not sure how to implement. I thought about artificial neural networks (mainly because I remember enjoying that chapter), but as far as I can tell they're mainly used to classify things (i.e. given a finite set of choices [x1...xn], which x is this given input), not really to generate anything. I'm not even sure if this is possible, or, if it is, what I should go about learning/figuring out. What algorithm is best suited for this? To those worried that I will use this as a bot to spam or provide bad answers to SO, I promise that I will not use this to provide (bad) advice or to spam for profit. I definitely will not post its nonsensical thoughts on SO. I plan to use it for my own amusement. Thanks!
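
    For reference, a minimal word-level Markov chain text generator of the kind mentioned above as a baseline looks roughly like this (a sketch; the corpus file name and the order parameter are placeholders):

        import random
        from collections import defaultdict

        def build_chain(text, order=1):
            # Map each tuple of `order` consecutive words to the words observed after it.
            words = text.split()
            chain = defaultdict(list)
            for i in range(len(words) - order):
                chain[tuple(words[i:i + order])].append(words[i + order])
            return chain

        def generate(chain, length=20):
            # Start from a random state and walk the chain, picking successors at random.
            state = random.choice(list(chain.keys()))
            out = list(state)
            for _ in range(length):
                successors = chain.get(tuple(out[-len(state):]))
                if not successors:
                    break
                out.append(random.choice(successors))
            return " ".join(out)

        chain = build_chain(open("corpus.txt").read(), order=2)
        print(generate(chain))

    The weighting and feedback the question asks about is exactly what this simple model lacks: every observed successor is equally likely, so reinforcing well-received output would mean adjusting those successor counts over time.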


  • Clicking on viewlist link in email alert sent for postlist redirecting to http://url/blogs/Lists /Po

    - by Sarita Mishra
    Hi, We have a Blogs site and a post list. Users subscribe to the list and get an email alert whenever any change is made to the post list. The email alert that is sent contains the heading given below:

        Modify my alert settings | View The ‘Colour of Energy’ – now on ... | View Posts

    "View The ‘Colour of Energy’ – now on ..." is the link for the post for which the user got the email alert. It is redirecting to the URL ://url/blogs/Lists/Posts/Dispform.aspx?ID=x, which gives a "Page cannot be found" error. It should redirect to ://url/blogs/Lists/Posts/Post.aspx?ID=x. I want to change the hyperlink URL to the latter one. Please suggest how to proceed with that.


  • Enumerable Contains Enumerable

    - by Tim
    For a method I have the following parameter, IEnumerable<string> tags, and I wish to query a list of objects, let's call them Post, that contain a property IEnumerable<string> Tags { get; set; }. My question is: how do I use LINQ to query for objects that contain all the tags from the tags parameter?

        private List<Post> posts = new List<Post>();

        public IEnumerable<Post> GetPostsWithTags(IEnumerable<string> tags)
        {
            return ???;
        }


  • Check request type in Django

    - by Art
    While it is recommended to use the following construct to check whether request is POST:

        if request.method == 'POST':
            pass

    it is likely that people will find

        if request.POST:
            pass

    to be more elegant and concise. Are there any reasons not to use it, apart from personal preference?
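
    One practical difference worth noting (a sketch; the view and helper names are just illustrative): request.POST is a QueryDict of parsed form fields, so it can be empty, and therefore falsy, even on a genuine POST request, for example a form with no fields or a request whose payload is a raw JSON body rather than form data.

        def my_view(request):
            if request.method == 'POST':
                # Runs for every POST request, even one with an empty body.
                handle_post(request)

            if request.POST:
                # Runs only when the POST QueryDict actually contains form data,
                # so an empty form or a raw JSON body would be skipped here.
                handle_form(request)

    handle_post and handle_form are hypothetical helpers standing in for whatever the view actually does.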


  • clear cookie container in WebRequest

    - by Jeremy
    I'm using the WebRequest object to post data to a login page, then post data to a separate page on the same site. I am instantiating a CookieContainer and assigning it to the WebRequest object so that the cookies are handled. The problem is that I do not want to retain the cookie after I post data to the other page. How can I delete that cookie?

