Search Results

Search found 1056 results on 43 pages for 'twenty ten'.


  • Rules for Naming

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved. Naming Documents (or is it “Document, Naming”?) Tis but thy name that is my enemy; Thou art thyself, though not a Montague. What's Montague? It is nor hand, nor foot, Nor arm, nor face, nor any other part Belonging to a man. O, be some other name! What's in a name? That which we call a rose By any other name would smell as sweet; So Romeo would, were he not Romeo call'd, Retain that dear perfection which he owes Without that title. Romeo, doff thy name And for that name which is no part of thee Take all myself. Shakespeare – Romeo and Juliet Act II, Scene 2 We normally only use the bold portion of the famous Shakespearean quote above, but it is really out of context. As the play unfolds, we learn that a name is all too powerful. Indeed it is because of their names that the doomed lovers die. There might be life and death in a name (BTW, when I wrote this monograph, I was in Hatfield, PA. Remember the Hatfields and the McCoys?) This is a bit extreme, but in the field of Knowledge Management (KM) names are of the utmost importance as well. When I write an article about managing SharePoint sites, how should I name it? “Managing a site” or “Site, managing”? Nine times out of ten I’d opt for the latter. Almost everything we do is “Managing”, so to make life easier for a person looking for meaningful content, we title our articles starting with the differentiator rather than the common factor. As a rule of thumb, we start the name with the noun rather than the verb. It is not what we do that is the primary key; it is what we do it to. So, answer this – is it a “rule of thumb” or a “thumb rule”? This is tough. A lot of what we do when naming is a judgment call. Both thumb and rule are nouns, albeit one concrete and one abstract (more about this later), but to most people “thumb rule” is meaningless while “rule of thumb” is an idiom. The difference between knowledge and information is that knowledge is meaningful information placed in context. Thus I elect the “rule of thumb”. It is the more meaningful title. Abstract and Concrete are relative terms. Many nouns (and verbs) that are abstract to a commoner are concrete to a practitioner of one profession or another, and may even have different concrete meanings in different professional jargons. Think about “running”. To an executive it means running a business; to a marathoner its meaning is much more literal. Generally speaking, we store and disseminate knowledge within a practice more than we do it in general. Even dictionaries and encyclopedias define terms as they apply to different audiences. The rule of thumb is to put the more concrete first, but within the audience’s jargon. Even the title of this monograph is a question. Do I name it “Naming Documents” or “Documents, Naming”? Well, my own rule of thumb (“Here he goes again!?”) states that the latter is better because it starts with a noun, but this is a document about naming more than it is about documents. The rules of naming also apply to graphs and charts, Excel spreadsheets, and so on. Thus, I vote for the former. A better title could have been “Naming Objects”, only the word “Object” is a bit too abstract. How about just “Naming” or “Naming, rules of”? You get the drift. One of the ways to resolve all of this is to store the documents in knowledge bases, which may become the subjects of a future punditry. Knowledge bases use keywords to describe their content. Use a metadata store for the keywords to at least attempt some common ground.
    Here is another general rule (rule of thumb?!!) – put at least one keyword in the title. Use subtitles. Here is an example: Migrating documents – Screening, cleaning, and organizing our knowledge. The main keyword is “documents”, the next is “migrating”; other keywords also appear in the subtitle. They are “screening”, “cleaning”, and “organizing”. Any questions? Send me an amply named document by email: [email protected]

    Read the article

  • Application Composer: Exposing Your Customizations in BI Analytics and Reporting

    - by Richard Bingham
    Introduction This article explains in simple terms how to ensure the customizations and extensions you have made to your Fusion Applications are available for use in reporting and analytics. It also includes four embedded demo videos from our YouTube channel (if they don't appear, check the browser address bar for a blocking shield icon). If you are new to Business Intelligence, consider first reviewing our getting started article, and you can read more about the topic of custom subject areas in the documentation book Extending Sales. There are essentially four sections to this post. First we look at how custom fields added to standard objects are made available for reporting. Secondly we look at creating custom subject areas on the standard objects. Next we consider reporting on custom objects, starting with simple standalone objects, then child custom objects, and finally custom objects with relationships. Finally this article reviews how flexfields are exposed for reporting. Whilst this article applies to both Cloud/SaaS and on-premises deployments, if you are an on-premises developer then you can also use the BI Administration Tool to customize your BI metadata repository (the RPD) and create new subject areas. Whilst this is not covered here, you can read more in Chapter 8 of the Extensibility Guide for Developers. Custom Fields on Standard Objects If you add a custom field to your standard object then it's likely you'll want to include it in your reports. This is very simple, since all new fields are instantly available in the "[objectName] Extension" folder in existing subject areas. The following two minute video demonstrates this. Custom Subject Areas for Standard Objects You can create your own subject areas for use in analytics and reporting via Application Composer. An example use-case could be to simplify the seeded subject areas, since they sometimes contain complex data fields and internal values that could confuse business users. One thing to note is that you cannot create subject areas in a sandbox, as it is not supported by BI, so once your custom object is tested and complete you'll need to publish the sandbox before moving forwards. The subject area creation process is essentially two-fold. Once the request is submitted the ADF artifacts are generated, then secondly the related metadata is sent to the BI presentation server APIs to make the updates there. Note that this second step may take up to ten minutes to complete. Once finished, the status of the custom subject area request should show as 'OK' and it is then ready for use. Within the creation process's wizard-like steps there are three concepts worth highlighting: Date Flattening - this feature permits the roll up of reports at various date levels, such as data by week, month, quarter, or year. You simply check the box to enable it for that date field. Measures - these are your own functions that you can build into the custom subject area. They are related to the field data type and include min-max for dates, and sum(), avg(), and count() for numeric fields. Implicit Facts - used to make the BI metadata join between your object fields and the calculated measure fields. The advice is to choose the most frequently used measure to ensure consistency. This video shows a simple example, where a simplified subject area is created for the customer 'Contact' standard object, picking just a few fields upon which users can then create reports.
    Custom Objects Custom subject areas support three types of custom objects. The first is a simple standalone custom object, for which the same process mentioned above applies. The next is a custom child object created on a standard object parent, and finally a custom object that is related to a parent object - usually through a dynamic choice list. Whilst the steps in each of these last two are mostly the same, there are differences in the way you choose the objects and their fields. This is illustrated in the videos below. The first video shows the process for creating a custom subject area for a simple standalone custom object. This second video demonstrates how to create custom subject areas for custom objects that are of parent:child type, as well as those with dynamic-choice-list relationships. Flexfields Dynamic and Extensible Flexfields satisfy a similar requirement to custom fields (for Application Composer), with flexfields common across the Fusion Financials, Supply Chain and Procurement, and HCM applications. The basic principle is that when you enable and configure your flexfields, in the edit page under each segment region (for both global and context segments) there is a BI Enabled check box. Once this is checked and you've completed your configuration, you run the Scheduled Process job named 'Import Oracle Fusion Data Extensions for Transactional Business Intelligence' to generate and migrate the related BI artifacts and data. This applies for dynamic, key, and extensible flexfields. Of course there is more to consider in terms of how you wish your flexfields to be implemented and exposed in your reports, and details are given in Chapter 4 of the Extending Applications guide.

    Read the article

  • VS 2012 Code Review – Before Check In OR After Check In?

    - by Tarun Arora
    “Is Code Review Important and Effective?” There is a consensus across the industry that code review is an effective and practical way to catch code inconsistencies and possible defects early in the software development life cycle. Among others, some of the advantages of code reviews are: Bugs are found faster Forces developers to write readable code (code that can be read without explanation or introduction!) Optimization methods/tricks/productive programs spread faster Programmers as specialists "evolve" faster It's fun “Code review is systematic examination (often known as peer review) of computer source code. It is intended to find and fix mistakes overlooked in the initial development phase, improving both the overall quality of software and the developers' skills. Reviews are done in various forms such as pair programming, informal walkthroughs, and formal inspections.” Wikipedia Nowhere does the definition mention whether it's better to review code before the code has been committed to version control or after the commit has been performed. No matter which side you favour, Visual Studio 2012 allows you to request a code review both before check in and after check in. Let's weigh the pros and cons of the approaches independently. Code Review Before Check In or Code Review After Check In? Approach 1 – Code Review before Check in The developer completes the code and feels the code quality is appropriate for check in to TFS. The developer raises a code review request to have a second pair of eyes validate whether the code abides by the recommended best practices, will not result in any defects due to common coding mistakes, and whether any optimizations can be made to improve the code quality. Image 1 – code review before check in Pros Everything that gets committed to source control is reviewed. Minimizes the chances of smelly code making its way into the code base. Decreases the cost of fixing bugs; remember, the earlier you find them, the less painful they are to fix. Cons Development code freeze – since the changes aren't in source control yet, further development can only be done offline. The changes have not been through a CI build, so it is hard to say whether the code abides by all build quality standards. Inconsistent! Cumbersome to track the actual code review process. Not every change to the code base is worth reviewing; a lot of effort is invested for very little gain. Approach 2 – Code Review after Check in The developer checks in, and random code reviews are performed on the checked-in code. Image 2 – Code review after check in Pros The code has already passed the CI build and run through any code analysis plug-ins you may have running on the build server. Instruct the developer to ensure zero FxCop, StyleCop and static code analysis warnings before check in. Code is cleaner and smell free even before the code review. No offline development; developers can continue to develop against source control. Cons Bad code can easily make its way into the code base. Since the review takes place much later in the cycle, the cost of fixing issues can prove to be much higher. Approach 3 – Hybrid Approach The community advocates a more hybrid approach, a blend of tooling and human accountability. Image 3 – Hybrid Approach 1. Code review high-impact check-ins.
    It is not possible to review everything; by setting up code review check-in policies you can end up slowing your team down. Moreover, the code that you are reviewing before check in hasn't even been through a green CI build either. 2. Tooling. Let the tooling work for you. By running static analysis, FxCop, StyleCop and other plug-ins on the build agent, you can identify the real issues that in my opinion can't possibly be identified using human reviews. Configure the tooling to report back the top 10 issues every day. Mandate the manual code review of individuals who keep making it to this list of shame more often. 3. During Merge. I would prefer eliminating some of the other code issues during merge from the Main branch to the release branch. In a scrum project this is still easier because cherry picking the merges is a possibility and the size of code being reviewed is still limited. Let the tooling work for you: if someone breaks the CI build often, put them on gated check-in builds until you see improvement. If someone appears on the top 10 list of shame generated via the build then ensure that all their code is reviewed till you see improvement. At the end of the day, the goal is to ensure that the code being delivered is top quality. By enforcing a code review before any check in, you force the developer to work offline or stay put till the review is complete. What do the experts say? So I asked a few experts what they thought of “Code review quality gate before checking in code?” Terje Sandstrom | Microsoft ALM MVP You mean a review quality gate BEFORE checking in code????? That would mean a lot of code staying either local or in shelvesets, and not even been through a CI build, and a green CI build being the main criteria for going further, f.e. to the review state. I would not like code laying around with no checkin's. Having a requirement that code is checked in small pieces, 4-8 hours work max, and AT LEAST daily checkins, a manual code review comes second down the lane. I would expect review quality gates to happen before merging back to main, or before merging to release. But that would all be on checked-in code. Branching is absolutely one way to ease the pain. Another way we are using is automatic quality builds, running metrics, coverage, static code analysis. Unfortunately it takes some time, would be great to be on CI's – but…., so it's done scheduled every night. Based on this we get, among other stuff, top 10 lists of suspicious code, which is then subjected to reviews. If a person seems to be very popular on these top 10 lists, we subject every check in from that person to a review for a period. That normally helps. None of the clients I have can afford to have every checkin reviewed, so we need to find ways around it. I don't disagree with the nicety of having all the code reviewed, but I find it hard to find those resources in today's enterprises. David V. Corbin | Visual Studio ALM Ranger I tend to agree with both sides. I hate having code that is not checked in, but at the same time hate having “bad” code in the repository. I have found that branching is one approach to solving this dilemma. Code is checked into the private/feature branch before the review, but is not merged over to the “official” branch until after the review. I advocate both, depending on circumstance (especially team dynamics) - The “pre-checkin” is usually for elements that may impact the project as a whole. Think of it as another “gate” along with passing unit tests.
- The “post-checkin” may very well not be at the changeset level, but correlates to a review at the “user story” level.   Again, this depends on team dynamics in play…. Robert MacLean | Microsoft ALM MVP I do not think there is no right answer for the industry as a whole. In short the question is why do you do reviews? Your question implies risk mitigation, so in low risk areas you can get away with it after check in while in high risk you need to do it before check in. An example is those new to a team or juniors need it much earlier (maybe that is before checkin, maybe that is soon after) than seniors who have shipped twenty sprints on the team. Abhimanyu Singhal | Visual Studio ALM Ranger Depends on per scenario basis. We recommend post check-in reviews when: 1. We don't want to block other checks and processes on manual code reviews. Manual reviews take time, and some pieces may not require manual reviews at all. 2. We need to trace all changes and track history. 3. We have a code promotion strategy/process in place. For risk mitigation, post checkin code can be promoted to Accepted branches. Or can be rejected. Pre Checkin Reviews are used when 1. There is a high risk factor associated 2. Reviewers are generally (most of times) have immediate availability. 3. Team does not have strict tracking needs. Simply speaking, no single process fits all scenarios. You need to select what works best for your team/project. Thomas Schissler | Visual Studio ALM Ranger This is an interesting discussion, I’m right now discussing details about executing code reviews with my teams. I see and understand the aspects you brought in, but there is another side as well, I’d like to point out. 1.) If you do reviews per check in this is not very practical as a hard rule because this will disturb the flow of the team very often or it will lead to reduce the checkin frequency of the devs which I would not accept. 2.) If you do later reviews, for example if you review PBIs, it is not easy to find out which code you should review. Either you review all changesets associate with the PBI, but then you might review code which has been changed with a later checkin and the dev maybe has already fixed the issue. Or you review the diff of the latest changeset of the PBI with the first but then you might also review changes of other PBIs. Jakob Leander | Sr. Director, Avanade In my experience, manual code review: 1. Does not get done and at the very least does not get redone after changes (regardless of intentions at start of project) 2. When a project actually do it, they often do not do it right away = errors pile up 3. Requires a lot of time discussing/defining the standard and for the team to learn it However code review is very important since e.g. even small memory leaks in a high volume web solution have big consequences In the last years I have advocated following approach for code review - Architects up front do “at least one best practice example” of each type of component and tell the team. Copy from this one. This should include error handling, logging, security etc. - Dev lead on project continuously browse code to validate that the best practices are used. Especially that patterns etc. are not broken. You can do this formally after each sprint/iteration if you want. Once this is validated it is unlikely to “go bad” even during later code changes Agree with customer to rely on static code analysis from Visual Studio as the one and only coding standard. 
This has HUUGE benefits - You can easily tweak to reach the level you desire together with customer - It is easy to measure for both developers/management - It is 100% consistent across code base - It gets validated all the time so you never end up getting hammered by a customer review in the end - It is easy to tell the developer that you do not want code back unless it has zero errors = minimize communication You need to track this at least during nightly builds and make sure team sees total # issues. Do not allow #issues it to grow uncontrolled. On the project I run I require code analysis to have run on code before checkin (checkin rule). This means -  You have to have clean compile (or CA wont run) so this is extra benefit = very few broken builds - You can change a few of the rules to compile as errors instead of warnings. I often do this for “missing dispose” issues which you REALLY do not want in your app Tip: Place your custom CA rules files as part of solution. That  way it works when you do branching etc. (path to CA file is relative in VS) Some may argue that CA is not as good as manual inspection. But since manual inspection in reality suffers from the 3 issues in start it is IMO a MUCH better (and much cheaper) approach from helicopter perspective Tirthankar Dutta | Director, Avanade I think code review should be run both before and after check ins. There are some code metrics that are meant to be run on the entire codebase … Also, especially on multi-site projects, one should strive to architect in a way that lets men manage the framework while boys write the repetitive code… scales very well with the need to review less by containment and imposing architectural restrictions to emphasise the design. Bruno Capuano | Microsoft ALM MVP For code reviews (means peer reviews) in distributed team I use http://www.vsanywhere.com/default.aspx  David Jobling | Global Sr. Director, Avanade Peer review is the only way to scale and its a great practice for all in the team to learn to perform and accept. In my experience you soon learn who's code to watch more than others and tune the attention. Mikkel Toudal Kristiansen | Manager, Avanade If you have several branches in your code base, you will need to merge often. This requires manual merging, when a file has been changed in both branches. It offers a good opportunity to actually review to changed code. So my advice is: Merging between branches should be done as often as possible, it should be done by a senior developer, and he/she should perform a full code review of the code being merged. As for detecting architectural smells and code smells creeping into the code base, one really good third party tools exist: Ndepend (http://www.ndepend.com/, for static code analysis of the current state of the code base). You could also consider adding StyleCop to the solution. Jesse Houwing | Visual Studio ALM Ranger I gave a presentation on this subject on the TechDays conference in NL last year. See my presentation and slides here (talk in Dutch, but English presentation): http://blog.jessehouwing.nl/2012/03/did-you-miss-my-techdaysnl-talk-on-code.html  I’d like to add a few more points: - Before/After checking is mostly a trust issue. If you have a team that does diligent peer reviews and regularly talk/sit together or peer review, there’s no need to enforce a before-checkin policy. The peer peer-programming and regular feedback during development can take care of most of the review requirements as long as the team isn’t under stress. 
- Under stress, enforce pre-checkin reviews, it might sound strange, if you’re already under time or budgetary constraints, but it is under such conditions most real issues start to be created or pile up. - Use tools to catch most common errors, Code Analysis/FxCop was already mentioned. HP Fortify, Resharper, Coderush etc can help you there. There are also a lot of 3rd party rules you can add to Code Analysis. I’ve written a few myself (http://fccopcontrib.codeplex.com) and various teams from Microsoft have added their own rules (MSOCAF for SharePoint, WSSF for WCF). For common errors that keep cropping up, see if you can define a rule. It’s much easier. But more importantly make sure you have a good help page explaining *WHY* it's wrong. If you have small feature or developer branches/shelvesets, you might want to review pre-merge. It’s still better to do peer reviews and peer programming, but the most important thing is that bad quality code doesn’t make it into the important branch. So my philosophy: - Use tooling as much as possible. - Make sure the team understands the tooling and the importance of the things it flags. It’s too easy to just click suppress all to ignore the warnings. - Under stress, tighten process, it’s under stress that the problems of late reviews will really surface - Most importantly if you do reviews do them as early as possible, but never later than needed. In other words, pre-checkin/post checking doesn’t really matter, as long as the review is done before the code is released. It’ll just be much more expensive to fix any review outcomes the later you find them. --- I would love to hear what you think!

    Read the article

  • How to set text in Y-axis, instead of numbers, in a RadChart component from Telerik with Bar-type

    - by radbyx
    Hi, I have made a bar RadChart with "SeriesOrientation="Horizontal"". I have the text showing at the end of each bar, but instead I would like that text to be listed on the y-axis, instead of the 1, 2, 3... numbers. It seems like I'm not allowed to set any text on the y-axis; is there a property I can set? Here are my code snippets: === .ascx === <asp:UpdatePanel ID="UpdatePanel1" runat="server"> <ContentTemplate> <telerik:RadChart ID="RadChart1" runat="server" Skin="WebBlue" AutoLayout="true" Height="350px" Width="680px" SeriesOrientation="Horizontal"> <Series> <telerik:ChartSeries DataYColumn="UnitPrice" Name="Product Unit Price"> </telerik:ChartSeries> </Series> <PlotArea> <YAxis> <Appearance> <TextAppearance TextProperties-Font="Verdana, 8.25pt, style=Bold" /> </Appearance> </YAxis> <XAxis DataLabelsColumn="TenMostExpensiveProducts"> </XAxis> </PlotArea> <ChartTitle> <TextBlock Text="Ten Most Expensive Products" /> </ChartTitle> </telerik:RadChart> </ContentTemplate> </asp:UpdatePanel> ========================= === .ascx.cs === protected void Page_Load(object sender, EventArgs e) { RadChart1.AutoLayout = false; RadChart1.Legend.Visible = false; // Create a ChartSeries and assign its name and chart type ChartSeries chartSeries = new ChartSeries(); chartSeries.Name = "Name"; chartSeries.Type = ChartSeriesType.Bar; // add new items to the series, // passing a value and a label string chartSeries.AddItem(98, "Product1"); chartSeries.AddItem(95, "Product2"); chartSeries.AddItem(100, "Product3"); chartSeries.AddItem(75, "Product4"); chartSeries.AddItem(1, "Product5"); // add the series to the RadChart Series collection RadChart1.Series.Add(chartSeries); // add the RadChart to the page. // this.Page.Controls.Add(RadChart1); // RadChart1.Series[0].Appearance.LegendDisplayMode = ChartSeriesLegendDisplayMode.Nothing; // RadChart1.Series[0].DataYColumn = "Uptime"; RadChart1.PlotArea.XAxis.DataLabelsColumn = "Name"; RadChart1.PlotArea.XAxis.Appearance.TextAppearance.TextProperties.Font = new System.Drawing.Font("Verdana", 8); RadChart1.BackColor = System.Drawing.Color.White; RadChart1.Height = 350; RadChart1.Width = 570; RadChart1.DataBind(); } I want to have the text "Product1", "Product2", etc. on the y-axis; can anyone help? Thx.

    Read the article

  • Uploading multiple images through Tumblr API

    - by Joseph Carrington
    I have about 300 images I want to upload to my new Tumblr account, because my old WordPress site got hacked and I no longer wish to use WordPress. I uploaded one image a day for 300 days, and I'd like to be able to take these images and upload them to my Tumblr site using the API. The images are currently local, stored in /images/. They all have the date they were uploaded as the first ten characters of the filename (01-01-2009-filename.png), and I want to send this date parameter along as well. I want to be able to see the progress of the script by outputting the responses from the API to my error_log. Here is what I have so far, based on the Tumblr API page. // Authorization info $tumblr_email = '[email protected]'; $tumblr_password = 'password'; // Tumblr script parameters $source_directory = "images/"; // For each file, assign the file to a pointer ...and here's the first stumbling block. How do I get all of the images in the directory and loop through them? Once I have a for or while loop set up I assume this is the next step $post_data = fopen(dir(__FILE__) . $source_directory . $current_image, 'r'); $post_date = substr($current_image, 0, 10); // Data for new record $post_type = 'photo'; // Prepare POST request $request_data = http_build_query( array( 'email' => $tumblr_email, 'password' => $tumblr_password, 'type' => $post_type, 'data' => $post_data, 'date' => $post_date, 'generator' => 'Multi-file uploader' ) ); // Send the POST request (with cURL) $c = curl_init('http://www.tumblr.com/api/write'); curl_setopt($c, CURLOPT_POST, true); curl_setopt($c, CURLOPT_POSTFIELDS, $request_data); curl_setopt($c, CURLOPT_RETURNTRANSFER, true); $result = curl_exec($c); $status = curl_getinfo($c, CURLINFO_HTTP_CODE); curl_close($c); // Output response to error_log error_log($result); So, I'm stuck on how to use PHP to read a file directory, loop through each of the files, and do things to the name / with the file itself. I also need to know how to set the data parameter, as in choosing multi-part / formdata. I also don't know anything about cURL.

    Read the article

  • Get specifc value from JSon string using JSon.Net

    - by dean nolan
    I am trying to get a value from a JSon formatted string. It was to get album info from a website called Freebase. My result is like this: { "a0": { "code": "/api/status/error", "messages": [ { "code": "/api/status/error/mql/result", "info": { "count": 20, "result": [ { "album": [ { "name": "Definitely Maybe", "release_date": "1994-08-30" } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Most Wanted Rock 1", "release_date": null } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Alternative 90s", "release_date": null } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Live Forever: Best of Britpop", "release_date": "2003-03-03" } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "The Best... In the World... Ever! Volume 5", "release_date": "1997-03-31" } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Live 4 Ever", "release_date": "1998-06-29" } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "De Afrekening, Volume 8", "release_date": "1994" } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Now That's What I Call Music! 33", "release_date": null } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Q: Anthems", "release_date": null } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "The Best Anthems... Ever! Volume 2", "release_date": null } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "1995 Mercury Music Prize: Ten Albums of the Year", "release_date": null } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Now That's What I Call Music! 1994", "release_date": null } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Indie Top 20, Volume 21", "release_date": "1995" } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Dad Rocks!", "release_date": "2006-06-05" } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Untitled", "release_date": null } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "The Greatest Hits of 1994", "release_date": "1994-10" } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Top of the Pops 2", "release_date": "2000-03-27" } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Q: Anthems (disc 1)", "release_date": null } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Jamie Oliver's Cookin'", "release_date": "2001" } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Killer Buzz", "release_date": "2001" } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" } ] }, "message": "Unique query may have at most one result. 
Got 20", "path": "", "query": { "album": [ { "name": null, "release_date": null, "sort": "release_date" } ], "artist": "Oasis", "error_inside": ".", "name": "Live Forever", "type": "/music/track" } } ] }, "code": "/api/status/ok", "status": "200 OK", "transaction_id": "cache;cache04.p01.sjc1:8101;2010-03-30T18:04:20Z;0035" } I am looking to get the first album title, Definitely Maybe, from this list. I have tried parsing the string like this: JObject o = JObject.Parse(jsonString); string album = (string)o[""]; But no matter what I have tried I don't know what to put in those quotes. How would I get this specific value or be able to search for it? Thanks

    Read the article

  • javascript - multiple dependent/cascading/chained select boxes on same form

    - by Aaron
    I'm populating select box options via jquery and json but I'm not sure how to address multiple instances of the same chained select boxes within my form. Because the select boxes are only rendered when needed some records will have ten sets of chained select boxes and others will only need one. How does one generate unique selects to support the auto population of secondary select options? Here's the code I'm using, and I thank you in advance for any insight you may provide. <script type="text/javascript" src="jquery.js"></script> <script type="text/javascript"> function populateACD() { $.getJSON('/acd/acd2json.php', {acdSelect:$('#acdSelect').val()}, function(data) { var select = $('#acd2'); var options = select.attr('options'); $('option', select).remove(); $.each(data, function(index, array) { options[options.length] = new Option(array['ACD2']); }); }); } $(document).ready(function() { populateACD(); $('#acdSelect').change(function() { populateACD(); }); }); </script> <?php require_once('connectvars.php'); $dbc = mysqli_connect(DB_HOST, DB_USER, DB_PASSWORD, DB_NAME); $squery = "SELECT ACD1ID, ACD1 from ACD1"; $sdata = mysqli_query($dbc, $squery); // Loop through the array of ACD1s, placing them into an option of a select echo '<select name="acdSelect" id="acdSelect">'; while ($row = mysqli_fetch_array($sdata)) { echo "<option value=" . $row['ACD1ID'] . ">" . $row['ACD1'] . "</option>\n"; } echo '</select><br /><br />'; <select name="acd2" id="acd2"> </select> acd2json.php <?php $dsn = "mysql:host=localhost;dbname=wfn"; $user = "acd"; $pass = "***************"; $pdo = new PDO($dsn, $user, $pass); $rows = array(); if(isset($_GET['acdSelect'])) { $stmt = $pdo->prepare("SELECT ACD2 FROM ACD2 WHERE ACD1ID = ? ORDER BY ACD2"); $stmt->execute(array($_GET['acdSelect'])); $rows = $stmt->fetchAll(PDO::FETCH_ASSOC); } echo json_encode($rows); ?>

    Read the article

  • Using Microsoft.Office.Interop to save created file with C#

    - by Eyla
    I have this code that will create an Excel file and worksheet, then insert some values. The problem I'm facing is that I'm not able to save the file with the name given and then close it. I used SaveAs but it did not work: wb.SaveAs(@"C:\mymytest.xlsx", missing, missing, missing, missing, missing, XlSaveAsAccessMode.xlExclusive, missing, missing, missing, missing, missing); This line of code would give me this error: Microsoft Office Excel cannot access the file 'C:\A3195000'. There are several possible reasons: • The file name or path does not exist. • The file is being used by another program. • The workbook you are trying to save has the same name as a currently open workbook. Please advise how to solve this problem. Here is my code: private void button1_Click(object sender, EventArgs e) { Microsoft.Office.Interop.Excel.Application xlApp = new Microsoft.Office.Interop.Excel.Application(); if (xlApp == null) { MessageBox.Show("EXCEL could not be started. Check that your office installation and project references are correct."); return; } xlApp.Visible = true; Workbook wb = xlApp.Workbooks.Add(XlWBATemplate.xlWBATWorksheet); Worksheet ws = (Worksheet)wb.Worksheets[1]; if (ws == null) { MessageBox.Show("Worksheet could not be created. Check that your office installation and project references are correct."); } // Select the Excel cells, in the range c1 to c7 in the worksheet. Range aRange = ws.get_Range("C1", "C7"); if (aRange == null) { MessageBox.Show("Could not get a range. Check to be sure you have the correct versions of the office DLLs."); } // Fill the cells in the C1 to C7 range of the worksheet with the number 6. Object[] args = new Object[1]; args[0] = 6; aRange.GetType().InvokeMember("Value", BindingFlags.SetProperty, null, aRange, args); // Change the cells in the C1 to C7 range of the worksheet to the number 8. aRange.Value2 = 8; // object missing = Type.Missing; // wb.SaveAs(@"C:\mymytest.xlsx", missing, missing, missing, missing, //missing, XlSaveAsAccessMode.xlExclusive, missing, missing, missing, missing, //missing); }
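
    For comparison, here is a minimal sketch of the usual save-and-clean-up sequence using the same interop types; it is not a drop-in fix for the code above, and the C:\temp path, the hidden-Excel settings and the xlNoChange access mode are assumptions rather than anything from the original post.

    using System;
    using System.Runtime.InteropServices;
    using Microsoft.Office.Interop.Excel;

    class ExcelSaveSketch
    {
        static void Main()
        {
            object missing = Type.Missing;
            var xlApp = new Microsoft.Office.Interop.Excel.Application();
            xlApp.Visible = false;        // keep Excel hidden while writing
            xlApp.DisplayAlerts = false;  // suppress overwrite/compatibility prompts

            Workbook wb = xlApp.Workbooks.Add(XlWBATemplate.xlWBATWorksheet);
            Worksheet ws = (Worksheet)wb.Worksheets[1];
            ws.get_Range("C1", "C7").Value2 = 8;

            // Save to a full path the running account can write to, then close
            // the workbook before quitting Excel.
            wb.SaveAs(@"C:\temp\mymytest.xlsx", missing, missing, missing, missing,
                      missing, XlSaveAsAccessMode.xlNoChange, missing, missing,
                      missing, missing, missing);
            wb.Close(false, missing, missing);
            xlApp.Quit();

            // Release the COM objects so EXCEL.EXE does not linger afterwards.
            Marshal.ReleaseComObject(ws);
            Marshal.ReleaseComObject(wb);
            Marshal.ReleaseComObject(xlApp);
        }
    }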

    Read the article

  • C# SerialPort - Problems mixing ports with different baud rates.

    - by GrandAdmiral
    Greetings, I have two devices that I would like to connect over a serial interface, but they have incompatible connections. To get around this problem, I connected them both to my PC and I'm working on a C# program that will route traffic on COM port X to COM port Y and vice versa. The program connects to two COM ports. In the data received event handler, I read in incoming data and write it to the other COM port. To do this, I have the following code: private void HandleDataReceived(SerialPort inPort, SerialPort outPort) { byte[] data = new byte[1]; while (inPort.BytesToRead > 0) { // Read the data data[0] = (byte)inPort.ReadByte(); // Write the data if (outPort.IsOpen) { outPort.Write(data, 0, 1); } } } That code worked fine as long as the outgoing COM port operated at a higher baud rate than the incoming COM port. If the incoming COM port was faster than the outgoing COM port, I started missing data. I had to correct the code like this: private void HandleDataReceived(SerialPort inPort, SerialPort outPort) { byte[] data = new byte[1]; while (inPort.BytesToRead > 0) { // Read the data data[0] = (byte)inPort.ReadByte(); // Write the data if (outPort.IsOpen) { outPort.Write(data, 0, 1); while (outPort.BytesToWrite > 0); //<-- Change to fix problem } } } I don't understand why I need that fix. I'm new to C# (this is my first program), so I'm wondering if there is something I am missing. The SerialPort defaults to a 2048 byte write buffer and my commands are less than ten bytes. The write buffer should have the ability to buffer the data until it can be written to a slower COM port. In summary, I'm receiving data on COM X and writing the data to COM Y. COM X is connected at a faster baud rate than COM Y. Why doesn't the buffering in the write buffer handle this difference? Why does it seem that I need to wait for the write buffer to drain to avoid losing data? Thanks!
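
    As a sketch of the block-read idea discussed above (reading whatever is available in one call and handing the whole block to the outgoing port), assuming made-up port names and baud rates:

    using System;
    using System.IO.Ports;

    class SerialBridgeSketch
    {
        static SerialPort portX;
        static SerialPort portY;

        static void Main()
        {
            portX = new SerialPort("COM3", 115200); // example settings only
            portY = new SerialPort("COM4", 9600);

            portX.DataReceived += (s, e) => Forward(portX, portY);
            portY.DataReceived += (s, e) => Forward(portY, portX);

            portX.Open();
            portY.Open();

            Console.WriteLine("Bridging ports; press Enter to stop.");
            Console.ReadLine();

            portX.Close();
            portY.Close();
        }

        static void Forward(SerialPort inPort, SerialPort outPort)
        {
            // Read everything currently buffered in one call instead of a byte at a time.
            int count = inPort.BytesToRead;
            if (count == 0) return;

            byte[] buffer = new byte[count];
            int read = inPort.Read(buffer, 0, count);

            if (outPort.IsOpen && read > 0)
            {
                outPort.Write(buffer, 0, read);
            }
        }
    }

    Whether this avoids the loss described in the question would still need testing against the actual hardware, since handshaking settings also play a part.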

    Read the article

  • How to Practice Unix Programming in C?

    - by danben
    After five years of professional Java (and to a lesser extent, Python) programming and slowly feeling my CS education slip away, I decided I wanted to broaden my horizons / general usefulness to the world and do something that feels more (to me) like I really have an influence over the machine. I chose to learn C and Unix programming since I feel like that is where many of the most interesting problems are. My end goal is to be able to do this professionally, if for no other reason than the fact that I have to spend 40-50 hours per week on work that pays the bills, so it may as well also be the type of coding I want to get better at. Of course, you don't get hired to do things you haven't dont before, so for now I am ramping up on my own. To this end, I started with K&R, which was a great resource in part due to the exercises spread throughout each chapter. After that I moved on to Computer Systems: A Programmer's Perspective, followed by ten chapters of Advanced Programming in the Unix Environment. When I am done with this book, I will read Unix Network Programming. What I'm missing in the Stevens books is the lack of programming problems; they mainly document functionality and provide examples, with a few end-of-chapter questions following. I feel that I would benefit much more from being challenged to use the knowledge in each chapter ala K&R. I could write some test program for each function, but this is a less desirable method as (1) I would probably be less motivated than if I were rising to some external challenge, and (2) I will naturally only think to use the function in the ways that have already occurred to me. So, I'd like to get some recommendations on how to practice. Obviously, my first choice would be to find some resource that has Unix programming challenges. I have also considered finding and attempting to contribute to some open source C project, but this is a bit daunting as there would be some overhead in learning to use the software, then learning the codebase. The only open-source C project I can think of that I use regularly is Python, and I'm not sure how easy that would be to get started on. That said, I'm open to all kinds of suggestions as there are likely things I haven't even thought of.

    Read the article

  • Class member functions instantiated by traits

    - by Jive Dadson
    I am reluctant to say I can't figure this out, but I can't figure this out. I've googled and searched Stack Overflow, and come up empty. The abstract, and possibly overly vague form of the question is, how can I use the traits-pattern to instantiate non-virtual member functions? The question came up while modernizing a set of multivariate function optimizers that I wrote more than 10 years ago. The optimizers all operate by selecting a straight-line path through the parameter space away from the current best point (the "update"), then finding a better point on that line (the "line search"), then testing for the "done" condition, and if not done, iterating. There are different methods for doing the update, the line-search, and conceivably for the done test, and other things. Mix and match. Different update formulae require different state-variable data. For example, the LMQN update requires a vector, and the BFGS update requires a matrix. If evaluating gradients is cheap, the line-search should do so. If not, it should use function evaluations only. Some methods require more accurate line-searches than others. Those are just some examples. The original version instantiates several of the combinations by means of virtual functions. Some traits are selected by setting mode bits that are tested at runtime. Yuck. It would be trivial to define the traits with #define's and the member functions with #ifdef's and macros. But that's so twenty years ago. It bugs me that I cannot figure out a whiz-bang modern way. If there were only one trait that varied, I could use the curiously recurring template pattern. But I see no way to extend that to arbitrary combinations of traits. I tried doing it using boost::enable_if, etc.. The specialized state information was easy. I managed to get the functions done, but only by resorting to non-friend external functions that have the this-pointer as a parameter. I never even figured out how to make the functions friends, much less member functions. The compiler (VC++ 2008) always complained that things didn't match. I would yell, "SFINAE, you moron!" but the moron is probably me. Perhaps tag-dispatch is the key. I haven't gotten very deeply into that. Surely it's possible, right? If so, what is best practice? UPDATE: Here's another try at explaining it. I want the user to be able to fill out an order (manifest) for a custom optimizer, something like ordering off of a Chinese menu - one from column A, one from column B, etc.. Waiter, from column A (updaters), I'll have the BFGS update with Cholesky-decompositon sauce. From column B (line-searchers), I'll have the cubic interpolation line-search with an eta of 0.4 and a rho of 1e-4, please. Etc... UPDATE: Okay, okay. Here's the playing-around that I've done. I offer it reluctantly, because I suspect it's a completely wrong-headed approach. It runs okay under vc++ 2008. 
#include <boost/utility.hpp> #include <boost/type_traits/integral_constant.hpp> namespace dj { struct CBFGS { void bar() {printf("CBFGS::bar %d\n", data);} CBFGS(): data(1234){} int data; }; template<class T> struct is_CBFGS: boost::false_type{}; template<> struct is_CBFGS<CBFGS>: boost::true_type{}; struct LMQN {LMQN(): data(54.321){} void bar() {printf("LMQN::bar %lf\n", data);} double data; }; template<class T> struct is_LMQN: boost::false_type{}; template<> struct is_LMQN<LMQN> : boost::true_type{}; struct default_optimizer_traits { typedef CBFGS update_type; }; template<class traits> class Optimizer; template<class traits> void foo(typename boost::enable_if<is_LMQN<typename traits::update_type>, Optimizer<traits> >::type& self) { printf(" LMQN %lf\n", self.data); } template<class traits> void foo(typename boost::enable_if<is_CBFGS<typename traits::update_type>, Optimizer<traits> >::type& self) { printf("CBFGS %d\n", self.data); } template<class traits = default_optimizer_traits> class Optimizer{ friend typename traits::update_type; //friend void dj::foo<traits>(typename Optimizer<traits> & self); // How? public: //void foo(void); // How??? void foo() { dj::foo<traits>(*this); } void bar() { data.bar(); } //protected: // How? typedef typename traits::update_type update_type; update_type data; }; } // namespace dj int main_() { dj::Optimizer<> opt; opt.foo(); opt.bar(); std::getchar(); return 0; }

    Read the article

  • Personal Project - Next practical language/tech to learn

    - by Paul Nathan
    I'm working on a personal project doing some finance analysis. It's a totally new field for me, and I'm really having fun with it so far, plus working in the high-level language arena is a great break from my embedded systems daytime work. I have a MySQL backend on a non-local server with a pile of stock data. My task now is to do some analysis of the stocks and produce something approximating a useful result. There are a couple technical difficulties. (1) I have a lot of records. To be precise, I believe I'm near 100K records right now, and this number grows by 6.1K each weekday. I need to create a way to rummage through these fields and do data analysis - based on a given computation, go look at this other set. Fine and dandy, nothing too outre. But this means I could really use a straightforward API for talking to MySQL. (2) Ideally, it runs on OS X 10.4.11. No Windows/Linux machine at home. (3) I can use PHP, C++, Perl, etc. I even have an R installation. I'm pretty flexible with stuff, so long as it runs on OS X. (Lots of options here, pick water, H20, or dihydrogen monoxide ;-) ) (4)Lack of hassle. While I like clever and fun ways of doing things, I'm trying to get some analysis done, not spend ten hours doing installation work and scratching my head figuring out a theoretical syntax question needed to spout out "hello world". What's the question? I'd like to dig into something different than my usual PHP/C++/C toolset. I'm looking for recommendations for languages/technologies that will assist me and meet the above requirements. In particular, I've heard a lot of buzz about F# and Python on SO. I've used CLISP for small problems before, and kinda liked it. I'm seeking opinions about those in particular. edit:since I rent the DB server and have a limited amount of CPU time online, I'm trying to do the analysis on a local machine.

    Read the article

  • Amusing or Sad? Network Solutions

    - by dbasnett
    When I got sick my email ended up in every drug sellers email list. Some days I get over 200 emails selling everything from Viagra to Xanax. Either they don't know what my condition is or they are telling me you are a goner, might as well chill-ax and have a good time. In order to cut down on the mail being downloaded I thought I would add all of the Junk email senders from Outlook to my Network Solution mail server. Much to my amazement I could not find that import Spammers button, so I submitted a tech support request. Here is the response: Thank you for contacting Network Solutions Customer Service Department. We are committed to creating the best Customer experience possible. One of the first ways we can demonstrate our commitment to this goal is to quickly and efficiently handle your recent request. We apologize for any inconvenience this might have caused you. With regard to your concern, please be advised that we cannot import blocked senders in to you e-mail servers. An alternative option is for you to create a Custom Filter that filters unwanted e-mails. To create a Custom Filter: Open a Web browser (e.g., Netscape, Microsoft Internet Explorer, etc.). Type mail.[domain name].[ext] in the address line. Login to your Network Solutions email account. Click on the Configuration left menu tab. Click on the Custom Filter link. Type the rule name. blah, blah, blah Basically add them one at a time. "We are committed to creating the best Customer experience possible." No you are not. You are trying to squeeze every nickle you can out of me. "With regard to your concern, please be advised that we cannot import blocked senders in to you e-mail servers." Maybe I should apply for a job to write those ten complicated lines of code... Maybe I should question my choice of vendors, because if they truly "cannot" then they are to stupid to have my business. It is both amusing and sad. I'll be posting this in every forum I am a member of.

    Read the article

  • Vietnamese character in .NET Console Application (UTF-8)

    - by DucDigital
    Im trying to write down an UTF8 string (Vietnamese) into C# Console but no success. Im running on windows 7. I tried to use the Encoding class that convert string to char[] to byte[] and then to String, but no help, the string is input directly fron the database. Here is some example Tôi tên là Ð?c, cu?c s?ng th?t vui v? tuy?t v?i It does not show the special character like : Ð or ?... instead it show up ?, much worse with the Encoding class. Does anyone can try this out or know about this problem? Thank you My code static void Main(string[] args) { XDataContext _new = new XDataContext(); Console.OutputEncoding = Encoding.GetEncoding("UTF-8"); string srcString = _new.Posts.First().TITLE; Console.WriteLine(srcString); // Convert the UTF-16 encoded source string to UTF-8 and ASCII. byte[] utf8String = Encoding.UTF8.GetBytes(srcString); byte[] asciiString = Encoding.ASCII.GetBytes(srcString); // Write the UTF-8 and ASCII encoded byte arrays. Console.WriteLine("UTF-8 Bytes: {0}", BitConverter.ToString(utf8String)); Console.WriteLine("ASCII Bytes: {0}", BitConverter.ToString(asciiString)); // Convert UTF-8 and ASCII encoded bytes back to UTF-16 encoded // string and write. Console.WriteLine("UTF-8 Text : {0}", Encoding.UTF8.GetString(utf8String)); Console.WriteLine("ASCII Text : {0}", Encoding.ASCII.GetString(asciiString)); Console.WriteLine(Encoding.UTF8.GetString(utf8String)); Console.WriteLine(Encoding.ASCII.GetString(asciiString)); } and here is the outstanding output Nhà báo Ä‘i há»™i báo Xuân UTF-8 Bytes: 4E-68-C3-A0-20-62-C3-A1-6F-20-C4-91-69-20-68-E1-BB-99-69-20-62-C3- A1-6F-20-58-75-C3-A2-6E ASCII Bytes: 4E-68-3F-20-62-3F-6F-20-3F-69-20-68-3F-69-20-62-3F-6F-20-58-75-3F- 6E UTF-8 Text : Nhà báo Ä‘i há»™i báo Xuân ASCII Text : Nh? b?o ?i h?i b?o Xu?n Nhà báo Ä‘i há»™i báo Xuân Nh? b?o ?i h?i b?o Xu?n Press any key to continue . . .
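
    As an illustration of where the question marks come from, here is a trimmed sketch; the Vietnamese sample string is an assumed stand-in for the database value, and it further assumes a console font (for example Consolas or Lucida Console) that contains the glyphs.

    using System;
    using System.Text;

    class VietnameseConsoleSketch
    {
        static void Main()
        {
            // Emit UTF-8 to the console; the console font must also be able to draw the glyphs.
            Console.OutputEncoding = Encoding.UTF8;

            string title = "Tôi tên là Đức, cuộc sống thật vui vẻ tuyệt vời"; // assumed sample text

            // .NET strings are already UTF-16, so printing directly is fine.
            Console.WriteLine(title);

            // A UTF-8 round trip is lossless...
            byte[] utf8 = Encoding.UTF8.GetBytes(title);
            Console.WriteLine(Encoding.UTF8.GetString(utf8));

            // ...but an ASCII round trip is not: every character outside ASCII is
            // replaced with '?', which is exactly what the ASCII lines above show.
            byte[] ascii = Encoding.ASCII.GetBytes(title);
            Console.WriteLine(Encoding.ASCII.GetString(ascii));
        }
    }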

    Read the article

  • How do I get a screenshot of a given website using C#

    - by Ender
    I'm writing a specialised crawler and parser for internal use and I require the ability to take a screenshot of a web page in order to check what colours are being used throughout. The program will take in around ten web addresses and will save them as a bitmap image, from there I plan to use LockBits in order to create a list of the five most used colours within the image. To my knowledge it's the easiest way to get the colours used within a web page but if there is an easier way to do it please chime in with your suggestions. Anyway, I was going to use this program until I saw the price tag. I'm also fairly new to C#, having only used it for a few months. Can anyone provide me with a solution to my problem of taking a screenshot of a web page in order to extract the colour scheme? EDIT: Sorry for not getting back to this sooner, but I've been busy with some other things. Anyway, the code seems to work well, but the problem I am having right now is that I am running it within a form, and naturally with Application.Run() being called I cannot run two instances of the same form at once. It recommended Form.showDialog() but that broke everything. Can anyone give me a hand with this code? public static void buildScreenshotFromURL(string url) { int width = 800; int height = 600; using (WebBrowser browser = new WebBrowser()) { browser.Width = width; browser.Height = height; browser.ScrollBarsEnabled = true; // This will be called when the page finishes loading browser.DocumentCompleted += new System.Windows.Forms.WebBrowserDocumentCompletedEventHandler(OnDocumentCompleted); //browser.DocumentCompleted += OnDocumentCompleted; browser.Navigate(url); // This prevents the application from exiting until // Application.Exit is called // Application.Run() does not work as it cannot be called twice, recommended form.showDialog() // but still issues Application.Run(); } } public static void OnDocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e) { // Define size of thumbnail neded int thumbSize = 50; // Now that the page is loaded, save it to a bitmap WebBrowser browser = (WebBrowser)sender; using (Graphics graphics = browser.CreateGraphics()) { using (Bitmap bitmap = new Bitmap(browser.Width, browser.Height, graphics)) { Rectangle bounds = new Rectangle(0, 0, bitmap.Width, bitmap.Height); browser.DrawToBitmap(bitmap, bounds); Bitmap thumbBitmap = new Bitmap(bitmap.GetThumbnailImage(thumbSize, thumbSize, thumbCall, IntPtr.Zero)); thumbBitmap.Save("screenshot.png", ImageFormat.Png); handleImage(thumbBitmap); } }
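
    One common workaround for the Application.Run() restriction mentioned above is to give each capture its own STA thread and message loop, and end that loop from DocumentCompleted. The sketch below follows that pattern; the method name, sizes and output file are made up, and it is not presented as a drop-in replacement for the code above.

    using System;
    using System.Drawing;
    using System.Threading;
    using System.Windows.Forms;

    static class ScreenshotRunnerSketch
    {
        // One capture per dedicated STA thread, each with its own message loop,
        // so Application.Run() is never called twice on the same thread.
        public static void CaptureOnOwnThread(string url, string outputFile)
        {
            var worker = new Thread(() =>
            {
                using (var browser = new WebBrowser())
                {
                    browser.Width = 800;
                    browser.Height = 600;
                    browser.ScrollBarsEnabled = true;
                    browser.DocumentCompleted += (sender, e) =>
                    {
                        var b = (WebBrowser)sender;
                        using (var bitmap = new Bitmap(b.Width, b.Height))
                        {
                            b.DrawToBitmap(bitmap, new Rectangle(0, 0, bitmap.Width, bitmap.Height));
                            bitmap.Save(outputFile);
                        }
                        Application.ExitThread(); // ends only this thread's message loop
                    };
                    browser.Navigate(url);
                    Application.Run(); // message loop for this worker thread
                }
            });
            worker.SetApartmentState(ApartmentState.STA); // WebBrowser requires STA
            worker.Start();
            worker.Join(); // wait for this capture before starting the next URL
        }
    }

    Calling CaptureOnOwnThread once per address keeps the ten captures sequential without ever re-entering Application.Run() on the same thread.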

    Read the article

  • Getting a screenshot of a page using C# - Need help with code

    - by Ender
    I'm writing a specialised crawler and parser for internal use and I require the ability to take a screenshot of a web page in order to check what colours are being used throughout. The program will take in around ten web addresses and will save them as a bitmap image, from there I plan to use LockBits in order to create a list of the five most used colours within the image. To my knowledge it's the easiest way to get the colours used within a web page but if there is an easier way to do it please chime in with your suggestions. Anyway, I was going to use this program until I saw the price tag. I'm also fairly new to C#, having only used it for a few months. Can anyone provide me with a solution to my problem of taking a screenshot of a web page in order to extract the colour scheme? EDIT: Sorry for not getting back to this sooner, but I've been busy with some other things. Anyway, the code seems to work well, but the problem I am having right now is that I am running it within a form, and naturally with Application.Run() being called I cannot run two instances of the same form at once. It recommended Form.showDialog() but that broke everything. Can anyone give me a hand with this code? public static void buildScreenshotFromURL(string url) { int width = 800; int height = 600; using (WebBrowser browser = new WebBrowser()) { browser.Width = width; browser.Height = height; browser.ScrollBarsEnabled = true; // This will be called when the page finishes loading browser.DocumentCompleted += new System.Windows.Forms.WebBrowserDocumentCompletedEventHandler(OnDocumentCompleted); //browser.DocumentCompleted += OnDocumentCompleted; browser.Navigate(url); // This prevents the application from exiting until // Application.Exit is called // Application.Run() does not work as it cannot be called twice, recommended form.showDialog() // but still issues Application.Run(); } } public static void OnDocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e) { // Define size of thumbnail neded int thumbSize = 50; // Now that the page is loaded, save it to a bitmap WebBrowser browser = (WebBrowser)sender; // Code edited from example below to make smaller bitmap and save as PNG using (Graphics graphics = browser.CreateGraphics()) { using (Bitmap bitmap = new Bitmap(browser.Width, browser.Height, graphics)) { Rectangle bounds = new Rectangle(0, 0, bitmap.Width, bitmap.Height); browser.DrawToBitmap(bitmap, bounds); Bitmap thumbBitmap = new Bitmap(bitmap.GetThumbnailImage(thumbSize, thumbSize, thumbCall, IntPtr.Zero)); thumbBitmap.Save("screenshot.png", ImageFormat.Png); handleImage(thumbBitmap); } }

    Read the article
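
    Since the question explicitly invites easier alternatives, one option worth mentioning is letting a Selenium-driven browser take the screenshots, which sidesteps the WebBrowser/Application.Run() problem entirely. This is only a sketch under assumptions: it presumes the selenium package plus a matching Firefox driver are available, and the URL list and file names are placeholders.

        # Sketch: capture page screenshots with Selenium as an alternative to
        # driving a WinForms WebBrowser control.
        from selenium import webdriver

        urls = ["http://example.com", "http://example.org"]   # placeholder addresses

        driver = webdriver.Firefox()
        driver.set_window_size(800, 600)        # same 800x600 viewport as the C# code
        try:
            for i, url in enumerate(urls):
                driver.get(url)                 # returns once the page has loaded
                driver.save_screenshot("screenshot_%d.png" % i)
        finally:
            driver.quit()

    The saved PNGs can then go straight into whatever colour-counting step the crawler already uses.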

  • ASP.NET site sometimes freezing up and/or showing odd text at top of the page while loading, on load

    - by MGOwen
    I have various servers (dev, 2 x test, 2 x prod) running the same ASP.NET site. The test and prod servers are in load-balanced pairs (prod1 with prod2, and test1 with test2). The test server pair is exhibiting some kind of (super) slowdown or freezing during about one in ten page loads. Sometimes a line of text appears at the very top of the page which looks something like this (the beginning and end are cut off):

        00 OK Date: Thu, 01 Apr 2010 01:50:09 GMT Server: Microsoft-IIS/6.0 X-Powered_By: ASP.NET X-AspNet-Version:2.0.50727 Cache-Control:private Content-Type:text/html; charset=ut

    Has anyone seen anything like this before? Any idea what it means or what's causing it?

    Edit: I often see this too when clicking something; it comes up as red text on a yellow page:

        XML Parsing Error: not well-formed
        Location: http://203.111.46.211/3DSS/CompanyCompliance.aspx?cid=14
        Line Number 1, Column 24:2mMTehON9OUNKySVaJ3ROpN" / -----------------------^

    If I go back and click again, it works (I see the page I clicked on, not the above error message).

    Update: ...And, instead of the page loading, I sometimes just get a white screen with text like this in black (it looks a lot like the text above):

        HTTP/1.1 302 Found
        Date: Wed, 21 Apr 2010 04:53:39 GMT
        Server: Microsoft-IIS/6.0
        X-Powered-By: ASP.NET
        X-AspNet-Version: 2.0.50727
        Location: /3DSS/EditSections.aspx?id=3&siteId=56&sectionId=46
        Set-Cookie: .3DSS=A6CAC223D0F2517D77C7C68EEF069ABA85E9392E93417FFA4209E2621B8DCE38174AD699C9F0221D30D49E108CAB8A828408CF214549A949501DAFAF59F080375A50162361E4AA94E08874BF0945B2EF; path=/; HttpOnly
        Cache-Control: private
        Content-Type: text/html; charset=utf-8
        Content-Length: 184

        object moved here

    Where "here" is a link that points to a URL just like the one I'm requesting, except with an extra folder in it, meaning something like http://123.1.2.3/MySite//MySite/Page.aspx?option=1 instead of http://123.1.2.3/MySite/Page.aspx?option=1.

    Update: A colleague of mine found some info saying it might be because the test servers are running IIS in 64-bit (64-bit Windows 2003), while the prod servers are 32-bit Windows 2003. So we tried telling IIS to use 32-bit:

        cscript %SYSTEMDRIVE%\inetpub\adminscripts\adsutil.vbs SET W3SVC/AppPools/Enable32bitAppOnWin64 1
        %SYSTEMROOT%\Microsoft.NET\Framework\v2.0.50727\aspnet_regiis.exe -i

    (from this MS support page) But IIS stopped working altogether (we got "server unavailable" on a white page instead of the web sites). Reversing the above (see the link) didn't work at first either. The ASP.NET tab disappeared from our IIS web site properties and we had to mess around for an hour uninstalling (aspnet_regiis.exe -u) and reinstalling 32-bit ASP.NET and adding Default.aspx manually back into the default documents. We'll probably try again in a few days; if anyone has anything to add in the meantime, please do.

    Read the article
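
    Stray status lines and headers showing up in the rendered body often point at the response framing being mangled somewhere between the worker process and the client (load balancer, proxy or chunked-encoding handling), so it can help to hit each test node directly, bypassing the balancer, and flag bad responses. A rough diagnostic sketch in Python; the node URLs are placeholders, not taken from the post:

        # Sketch: poll each test node directly and flag responses whose body
        # looks like leaked HTTP headers rather than HTML.
        import urllib.request

        nodes = ["http://test1.internal/3DSS/", "http://test2.internal/3DSS/"]  # placeholders

        for attempt in range(50):
            for node in nodes:
                with urllib.request.urlopen(node) as resp:
                    head = resp.read(200).decode("latin-1", "replace")
                if head.startswith("HTTP/1.1") or "X-AspNet-Version" in head:
                    print("suspect response from", node, "on attempt", attempt)

    If only one node of the pair ever produces the suspect output, that narrows the 64-bit versus 32-bit question considerably.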

  • get_or_create generic relations in Django & python debugging in general

    - by rabidpebble
    I ran the code to create the generically related objects from this demo: http://www.djangoproject.com/documentation/models/generic_relations/ Everything is good initially:

        >>> bacon.tags.create(tag="fatty")
        <TaggedItem: fatty>
        >>> tag, newtag = bacon.tags.get_or_create(tag="fatty")
        >>> tag
        <TaggedItem: fatty>
        >>> newtag
        False

    But then the use case that I'm interested in for my app:

        >>> tag, newtag = bacon.tags.get_or_create(tag="wholesome")
        Traceback (most recent call last):
          File "<console>", line 1, in <module>
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/manager.py", line 123, in get_or_create
            return self.get_query_set().get_or_create(**kwargs)
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py", line 343, in get_or_create
            raise e
        IntegrityError: app_taggeditem.content_type_id may not be NULL

    I tried a bunch of random things after looking at other code:

        >>> tag, newtag = bacon.tags.get_or_create(tag="wholesome", content_type=TaggedItem)
        ValueError: Cannot assign "<class 'generics.app.models.TaggedItem'>": "TaggedItem.content_type" must be a "ContentType" instance.

    or:

        >>> tag, newtag = bacon.tags.get_or_create(tag="wholesome", content_type=TaggedItem.content_type)
        InterfaceError: Error binding parameter 3 - probably unsupported type.

    etc. I'm sure somebody can give me the correct syntax, but the real problem here is that I have no idea what is going on. I have developed in strongly typed languages for over ten years (x86 assembly, C++ and C#) but am new to Python, and I find it really difficult to follow what is going on when things like this break. In the languages I mentioned previously it's fairly straightforward to figure things like this out -- check the method signature and check your parameters. Looking at the Django documentation for half an hour left me just as lost, and looking at the source for get_or_create(self, **kwargs) didn't help either, since there is no explicit method signature and the code is very generic. A next step would be to debug the method and try to figure out what is happening, but this seems a bit extreme. I seem to be missing some fundamental operating principle here: what is it? How do I resolve issues like this on my own in the future?

    Read the article
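
    One workaround that is often suggested for this generic-relation quirk is to call get_or_create on TaggedItem directly and supply the content type and object id yourself. A minimal sketch, assuming the TaggedItem model and the bacon instance from the Django generic_relations example (behaviour can differ between Django versions):

        # Sketch: bypass the reverse generic relation and set content_type/object_id
        # explicitly. TaggedItem and bacon come from the generic_relations example.
        from django.contrib.contenttypes.models import ContentType

        ct = ContentType.objects.get_for_model(bacon)
        tag, created = TaggedItem.objects.get_or_create(
            content_type=ct,
            object_id=bacon.pk,
            tag="wholesome",
        )

    The (tag, created) pair then behaves like the result the bacon.tags.get_or_create call was expected to return.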

  • How to optimize my PostgreSQL DB for prefix search?

    - by asmaier
    I have a table called "nodes" with roughly 1.7 million rows in my PostgreSQL db:

        =# \d nodes
                   Table "public.nodes"
         Column |          Type          | Modifiers
        --------+------------------------+-----------
         id     | integer                | not null
         title  | character varying(256) |
         score  | double precision       |
        Indexes:
            "nodes_pkey" PRIMARY KEY, btree (id)

    I want to use information from that table for autocompletion of a search field, showing the user a list of the ten titles having the highest score fitting his input. So I used this query (here searching for all titles starting with "s"):

        =# explain analyze select title,score from nodes where title ilike 's%' order by score desc;
        QUERY PLAN
        -----------------------------------------------------------------------------------------------------------------------
        Sort  (cost=64177.92..64581.38 rows=161385 width=25) (actual time=4930.334..5047.321 rows=161264 loops=1)
          Sort Key: score
          Sort Method: external merge  Disk: 5712kB
          ->  Seq Scan on nodes  (cost=0.00..46630.50 rows=161385 width=25) (actual time=0.611..4464.413 rows=161264 loops=1)
                Filter: ((title)::text ~~* 's%'::text)
        Total runtime: 5260.791 ms
        (6 rows)

    This was much too slow for using it with autocomplete. With some information from "Using PostgreSQL in Web 2.0 Applications" I was able to improve that with a special index:

        =# create index title_idx on nodes using btree(lower(title) text_pattern_ops);
        =# explain analyze select title,score from nodes where lower(title) like lower('s%') order by score desc limit 10;
        QUERY PLAN
        ------------------------------------------------------------------------------------------------------------------------------------------
        Limit  (cost=18122.41..18122.43 rows=10 width=25) (actual time=1324.703..1324.708 rows=10 loops=1)
          ->  Sort  (cost=18122.41..18144.60 rows=8876 width=25) (actual time=1324.700..1324.702 rows=10 loops=1)
                Sort Key: score
                Sort Method: top-N heapsort  Memory: 17kB
                ->  Bitmap Heap Scan on nodes  (cost=243.53..17930.60 rows=8876 width=25) (actual time=96.124..1227.203 rows=161264 loops=1)
                      Filter: (lower((title)::text) ~~ 's%'::text)
                      ->  Bitmap Index Scan on title_idx  (cost=0.00..241.31 rows=8876 width=0) (actual time=90.059..90.059 rows=161264 loops=1)
                            Index Cond: ((lower((title)::text) ~>=~ 's'::text) AND (lower((title)::text) ~<~ 't'::text))
        Total runtime: 1325.085 ms
        (9 rows)

    So this gave me a speedup of a factor of 4. But can it be improved further? What if I want to use '%s%' instead of 's%'? Do I have any chance of getting decent performance with PostgreSQL in that case, too? Or should I try a different solution (Lucene? Sphinx?) for implementing my autocomplete feature?

    Read the article
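
    On the application side, the prefix query that benefits from the text_pattern_ops index is usually parameterised so that '%' and '_' typed by the user are treated literally. A rough Python/psycopg2 sketch, with the connection string and function name as assumptions; whether the planner still picks the index for a bound pattern is worth re-checking with EXPLAIN:

        # Sketch: run the indexed prefix query from application code.
        import psycopg2

        def autocomplete(prefix, limit=10):
            # escape LIKE wildcards in the user's input, then append the trailing %
            escaped = (prefix.replace("\\", "\\\\")
                             .replace("%", r"\%")
                             .replace("_", r"\_"))
            conn = psycopg2.connect("dbname=mydb")      # placeholder DSN
            try:
                with conn.cursor() as cur:
                    cur.execute(
                        "SELECT title, score FROM nodes "
                        "WHERE lower(title) LIKE lower(%s) "
                        "ORDER BY score DESC LIMIT %s",
                        (escaped + "%", limit),
                    )
                    return cur.fetchall()
            finally:
                conn.close()

    For the '%s%' (infix) case asked about at the end, a trigram index via the pg_trgm extension, or an external engine such as Lucene or Sphinx, is usually the next thing to evaluate, since a plain btree index cannot help with patterns that are not anchored at the start of the string.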

  • Getting a screenshot of a page using .NET - Need help with code

    - by Ender
    I'm writing a specialized crawler and parser for internal use and I require the ability to take a screenshot of a web page in order to check what colours are being used throughout. The program will take in around ten web addresses and save each one as a bitmap image; from there I plan to use LockBits to build a list of the five most-used colours within the image. To my knowledge that's the easiest way to get the colours used within a web page, but if there is an easier way to do it please chime in with your suggestions. Anyway, I was going to use this program until I saw the price tag. I'm also fairly new to C#, having only used it for a few months. Can anyone provide me with a solution to my problem of taking a screenshot of a web page in order to extract the colour scheme?

    EDIT: Sorry for not getting back to this sooner, but I've been busy with some other things. The code below seems to work well, but the problem I am having right now is that I am running it within a form, and with Application.Run() being called I cannot run two instances of the same form at once. Form.ShowDialog() was recommended instead, but that broke everything. Can anyone give me a hand with this code?

        public static void buildScreenshotFromURL(string url)
        {
            int width = 800;
            int height = 600;

            using (WebBrowser browser = new WebBrowser())
            {
                browser.Width = width;
                browser.Height = height;
                browser.ScrollBarsEnabled = true;

                // This will be called when the page finishes loading
                browser.DocumentCompleted += new System.Windows.Forms.WebBrowserDocumentCompletedEventHandler(OnDocumentCompleted);
                //browser.DocumentCompleted += OnDocumentCompleted;
                browser.Navigate(url);

                // This prevents the application from exiting until Application.Exit is called.
                // Application.Run() does not work here because it cannot be called twice;
                // Form.ShowDialog() was recommended instead, but that still has issues.
                Application.Run();
            }
        }

        public static void OnDocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e)
        {
            // Size of the thumbnail needed
            int thumbSize = 50;

            // Now that the page is loaded, save it to a bitmap
            WebBrowser browser = (WebBrowser)sender;

            // Code edited from example below to make smaller bitmap and save as PNG
            using (Graphics graphics = browser.CreateGraphics())
            {
                using (Bitmap bitmap = new Bitmap(browser.Width, browser.Height, graphics))
                {
                    Rectangle bounds = new Rectangle(0, 0, bitmap.Width, bitmap.Height);
                    browser.DrawToBitmap(bitmap, bounds);

                    Bitmap thumbBitmap = new Bitmap(bitmap.GetThumbnailImage(thumbSize, thumbSize, thumbCall, IntPtr.Zero));
                    thumbBitmap.Save("screenshot.png", ImageFormat.Png);
                    handleImage(thumbBitmap);
                }
            }
        }

    Read the article

  • Moving a unit precisely along a path in x,y coordinates

    - by Adam Eberbach
    I am playing around with a strategy game where squads move around a map. Each turn a certain amount of movement is allocated to a squad, and if the squad has a destination the points are applied each turn until the destination is reached. Actual distance is used, so if a squad moves one position in the x or y direction it uses one point, but moving diagonally takes ~1.4 points. The squad maintains its actual position as floats, which are then rounded to allow drawing the position on the map.

    The path is described by touching the squad and dragging to the end position, then lifting the pen or finger. (I'm doing this on an iPhone now, but Android/Qt/Windows Mobile would work the same.) As the pointer moves, x,y points are recorded so that the squad gains a list of intermediate destinations on the way to the final destination. I'm finding that the destinations are not evenly spaced but can be further apart depending on the speed of the pointer movement. Following the path is important because obstacles and terrain matter in this game. I'm not trying to remake Flight Control, but that's a similar mechanic.

    Here's what I've been doing, but it just seems too complicated (pseudocode):

        getDestination() {
            self.nextDestination = remove_from_array(destinations)
            self.gradient = delta y to destination / delta x to destination
            self.angle = atan(self.gradient)
            self.cosAngle = cos(self.angle)
            self.sinAngle = sin(self.angle)
        }

        move() {
            get movement allocation for this turn
            if self.nextDestination not valid
                getNextDestination()
            while (nextDestination valid) && (movement allocation remains) {
                find xStep and yStep using movement allocation and sinAngle/cosAngle calculated for current self.nextDestination
                if current position + xStep crosses the destination
                    find x movement remaining after self.nextDestination reached
                    calculate remaining direct path movement allocation (xStep remaining / cosAngle)
                    make self.position equal to self.nextDestination
                else
                    apply xStep and yStep to current position
            }
            round squad's float coordinates to integer screen coordinates
            draw squad image on map
        }

    That's simplified, of course; things like sign need to be tweaked to ensure movement is in the right direction. If trig is the best way to do it then lookup tables can be used, or maybe it doesn't matter on modern devices like it used to. Suggestions for a better way to do it?

    An update: the iPhone has zero issues with trig and tracking tens of positions and tracks implemented as described above, and it draws in floats anyway. The Bresenham method is more efficient, trig is more precise. If I were to use integer Bresenham I would want to multiply by ten or so to maintain a little more positional accuracy, to benefit collision/terrain detection.

    Read the article
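
    The per-turn stepping can be done without atan/sin/cos at all by normalising the direction vector to the current waypoint and clamping the step to the remaining movement budget. A small language-agnostic illustration in Python (the names and the list-of-waypoints shape are invented for the sketch):

        # Sketch: spend a per-turn movement budget walking a list of (x, y) waypoints.
        import math

        def advance(position, waypoints, budget):
            """position: [x, y] floats (mutated); waypoints: list of (x, y); budget: points this turn."""
            while waypoints and budget > 0:
                tx, ty = waypoints[0]
                dx, dy = tx - position[0], ty - position[1]
                dist = math.hypot(dx, dy)
                if dist <= budget:                  # the waypoint is reachable this turn
                    position[0], position[1] = tx, ty
                    waypoints.pop(0)
                    budget -= dist
                else:                               # move part-way along the segment
                    position[0] += dx / dist * budget
                    position[1] += dy / dist * budget
                    budget = 0.0
            return position, waypoints

    Because each step is clamped to the remaining budget, the uneven spacing of the recorded drag points stops mattering, and diagonal moves automatically cost ~1.4 in the same units.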

  • Optimising speeds in HDF5 using Pytables

    - by Sree Aurovindh
    The problem concerns the writing speed of the computers (10 x 32-bit machines) and PostgreSQL query performance. I will explain the scenario in detail. I have about 80 GB of data (along with appropriate database indexes in place). I am trying to read it from the PostgreSQL database and write it into HDF5 using PyTables. I have 1 table and 5 variable arrays in one HDF5 file. The implementation of HDF5 is not multithreaded or enabled for symmetric multiprocessing. I have rented about 10 computers for a day and am trying to write with them in order to speed up my data handling.

    As far as the PostgreSQL table is concerned, the overall record count is 140 million and I have 5 primary/foreign-key referring tables. I am not using joins as they are not scalable, so for a single lookup I do 6 lookups without joins and write the results into HDF5 format. For each lookup I do 6 inserts into each of the tables and their corresponding arrays. The queries are really simple:

        select * from x.train where tr_id=1      (primary key & indexed)
        select q_t from x.qt where q_id=2        (non-primary key but indexed)

    (similarly for the five queries) Each computer writes two HDF5 files, and hence the total count comes to around 20 files.

    Some calculations and statistics:

        Total number of records              : 143,700,000
        Total number of records per file     : 143,700,000 / 20 = 7,185,000
        Total number of records in each file : 7,185,000 * 5 = 35,925,000

    Current PostgreSQL database config: my current machine has 8 GB RAM with an i7 2nd-generation processor. I made the following changes to the PostgreSQL configuration file:

        shared_buffers : 2 GB
        effective_cache_size : 4 GB

    Note on current performance: I have run it for about ten hours and the performance is as follows. The total number of records written for each file is about 621,000 * 5 = 3,105,000. The bottleneck is that I can only rent the machines for 10 hours per day (overnight), and if it processes at this speed it will take about 11 days, which is too long for my experiments. Please suggest how I can improve this.

    Questions:
    1. Should I use symmetric multiprocessing on those desktops (each has 2 cores with about 2 GB of RAM)? In that case, what is suggested or preferable?
    2. If I change my PostgreSQL configuration file and increase the RAM, will it enhance my process?
    3. Should I use multithreading? In that case, any links or pointers would be of great help.

    Thanks, Sree Aurovindh V

    Read the article
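
    One common way to cut per-row overhead in this kind of job is to stream rows out of PostgreSQL with a server-side cursor and append them to the PyTables table in large blocks rather than one record at a time. A rough sketch under assumptions: the table and column names, dtype, file name and batch size below are placeholders, not taken from the actual schema.

        # Sketch: batch rows from PostgreSQL into a PyTables table.
        import numpy as np
        import psycopg2
        import tables

        BATCH = 50000
        dtype = np.dtype([("tr_id", np.int64), ("value", np.float64)])   # placeholder layout

        conn = psycopg2.connect("dbname=mydb")                           # placeholder DSN
        h5 = tables.open_file("train.h5", mode="w")
        out = h5.create_table("/", "train", description=dtype)

        with conn.cursor(name="stream") as cur:      # server-side cursor: avoids pulling everything into RAM
            cur.execute("SELECT tr_id, value FROM x.train ORDER BY tr_id")
            while True:
                rows = cur.fetchmany(BATCH)
                if not rows:
                    break
                out.append(np.array(rows, dtype=dtype))                  # one bulk append per batch
        out.flush()
        h5.close()
        conn.close()

    Whether this helps depends on where the ten hours actually go, so timing the SELECT side and the HDF5 append side separately first would show which end to attack before renting the machines again.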

  • jquery code not working in chrome

    - by Paul
    This code works in FF and IE but not in Chrome. Any help would be greatly appreciated. Thank you!

    Here is the HTML:

        Item One Item Two Item Three Item Four Item Five Item Six Item Seven Item Eight Item Nine Item ten

    Here is the CSS:

        popularsearches {
            border-bottom: 1px solid #D4D4D4;
            border-left: 1px solid #D4D4D4;
            border-right: 1px solid #D4D4D4;
            overflow: hidden;
            height: 130px;
            width: 248px;
            margin-bottom: 20px;
        }
        popularsearches ul {
            padding: 0 5px 0 0;
            margin: 0;
        }
        popularsearches ul li {
            list-style-type: none;
            list-style-position: inside;
            border-bottom: solid 1px #D4D4D4;
            font-size: 14px;
            padding: 3px 0 3px 0;
            margin: 0 0 0 10px;
            text-align: left;
        }
        popularsearches ul li a {
            text-decoration: none;
        }
        popularsearches ul li a:hover, a:link, a:visited {
            text-decoration: none;
        }
        popularsearches-inside {
            width: 500px;
        }
        popularsearches-left {
            float: left;
            width: 250px;
            height: 100px;
        }
        popularsearches-right {
            float: left;
            width: 250px;
            height: 100px;
        }

    Here is the jQuery:

        var closeinterval = 0;

        function scrollContent() {
            // Toggle left between 250 and 0
            var top = jQuery("#popularsearches").scrollLeft() == 0 ? 250 : 0;
            jQuery("#popularsearches").animate({ scrollLeft: top }, "slow");
        }

        // Call scrollContent function every 6 secs
        closeinterval = setInterval("scrollContent()", 6000);

        jQuery(document).ready(function() {
            jQuery("#popular-button-left").bind("click", function() {
                if (closeinterval) {
                    window.clearInterval(closeinterval);
                    closeinterval = null;
                }
                jQuery("#popularsearches").animate({ scrollLeft: 0 }, 1000);
            });
            jQuery("#popular-button-right").bind("click", function() {
                if (closeinterval) {
                    window.clearInterval(closeinterval);
                    closeinterval = null;
                }
                jQuery("#popularsearches").animate({ scrollLeft: 250 }, 1000);
            });
        });

    Read the article

  • HTTP Post requests using HttpClient take 2 seconds, why?

    - by pableu
    Update: You might want to hold off on this for a bit; I just noticed it could be my fault after all. I've been working on this all afternoon, and then I find a flaw ten minutes after posting here.

    Hi, I'm currently coding an Android app that submits stuff in the background using HTTP POST and AsyncTask. I use the org.apache.http.client package for this, and I based my code on this example. Basically, my code looks like this:

        public void postData() {
            // Create a new HttpClient and Post Header
            HttpClient httpclient = new DefaultHttpClient();
            HttpPost httppost = new HttpPost("http://192.168.1.137:8880/form");
            try {
                List<NameValuePair> nameValuePairs = new ArrayList<NameValuePair>(2);
                nameValuePairs.add(new BasicNameValuePair("id", "12345"));
                nameValuePairs.add(new BasicNameValuePair("stringdata", "AndDev is Cool!"));
                httppost.setEntity(new UrlEncodedFormEntity(nameValuePairs));

                // Execute HTTP Post Request
                HttpResponse response = httpclient.execute(httppost);
            } catch (ClientProtocolException e) {
                Log.e(TAG, e.toString());
            } catch (IOException e) {
                Log.e(TAG, e.toString());
            }
        }

    The problem is that the httpclient.execute(..) line takes around 1.5 to 3 seconds, and I do not understand why. Just requesting a page with HTTP GET takes around 80 ms or so, so the problem doesn't seem to be the network latency itself. The problem doesn't seem to be on the server side either; I have also tried POSTing data to http://www.disney.com/ with similarly slow results. And Firebug shows a 1 ms response time when POSTing data to my server locally.

    This happens on the emulator and with my Nexus One (both with Android 2.2). If you want to look at the complete code, I've put it on GitHub. It's just a dummy program to do HTTP POST in the background using AsyncTask on the push of a button. It's my first Android app, and my first Java code for a long time. And incidentally, also my first question on Stack Overflow ;-)

    Any ideas why httpclient.execute(httppost) takes so long?

    Read the article
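
    When a single execute() call is mysteriously slow, timing the identical POST from another client on the same network quickly shows whether the ~2 seconds live on the server or in the device's HTTP stack. A quick hedged sketch in Python; the URL and form fields mirror the question, everything else is illustrative:

        # Sketch: time the same form POST from another machine on the LAN.
        import time
        import urllib.parse
        import urllib.request

        data = urllib.parse.urlencode({"id": "12345", "stringdata": "AndDev is Cool!"}).encode()

        start = time.monotonic()
        with urllib.request.urlopen("http://192.168.1.137:8880/form", data=data) as resp:
            resp.read()
        print("POST took %.0f ms" % ((time.monotonic() - start) * 1000))

    If the POST is fast from elsewhere, the delay most likely sits in how the Android request is being set up rather than in the server or the network.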

  • Server Error Message: No File Access

    - by iMayne
    Hello. I'm having an issue but don't know where to solve it. My template works great in XAMPP but not on the host server. I get this message:

        Warning: file_get_contents() [function.file-get-contents]: URL file-access is disabled in the server configuration in homepage/......./twitter.php

    The error is on line 64, marked in the code below.

        <?php
        /* For use in the "Parse Twitter Feeds" code below */
        define("SECOND", 1);
        define("MINUTE", 60 * SECOND);
        define("HOUR", 60 * MINUTE);
        define("DAY", 24 * HOUR);
        define("MONTH", 30 * DAY);

        function relativeTime($time) {
            $delta = time() - $time;
            if ($delta < 2 * MINUTE) {
                return "1 min ago";
            }
            if ($delta < 45 * MINUTE) {
                return floor($delta / MINUTE) . " min ago";
            }
            if ($delta < 90 * MINUTE) {
                return "1 hour ago";
            }
            if ($delta < 24 * HOUR) {
                return floor($delta / HOUR) . " hours ago";
            }
            if ($delta < 48 * HOUR) {
                return "yesterday";
            }
            if ($delta < 30 * DAY) {
                return floor($delta / DAY) . " days ago";
            }
            if ($delta < 12 * MONTH) {
                $months = floor($delta / DAY / 30);
                return $months <= 1 ? "1 month ago" : $months . " months ago";
            } else {
                $years = floor($delta / DAY / 365);
                return $years <= 1 ? "1 year ago" : $years . " years ago";
            }
        }

        /* Parse Twitter Feeds */
        function parse_cache_feed($usernames, $limit, $type) {
            $username_for_feed = str_replace(" ", "+OR+from%3A", $usernames);
            $feed = "http://twitter.com/statuses/user_timeline.atom?screen_name=" . $username_for_feed . "&count=" . $limit;
            $usernames_for_file = str_replace(" ", "-", $usernames);
            $cache_file = dirname(__FILE__) . '/cache/' . $usernames_for_file . '-twitter-cache-' . $type;
            if (file_exists($cache_file)) {
                $last = filemtime($cache_file);
            }
            $now = time();
            $interval = 600; // ten minutes

            // check the cache file
            if ( !$last || (( $now - $last ) > $interval) ) {
                // cache file doesn't exist, or is old, so refresh it
                $cache_rss = file_get_contents($feed);   // <-- this is line 64

    Any help on how to give this access on my host server?

    Read the article
