Search Results

Search found 2210 results on 89 pages for 'sum'.


  • ROracle support for TimesTen In-Memory Database

    - by Sherry LaMonica
    Today's guest post comes from Jason Feldhaus, a Consulting Member of Technical Staff in the TimesTen Database organization at Oracle. He shares with us a sample session using ROracle with the TimesTen In-Memory Database. Beginning in version 1.1-4, ROracle includes support for the Oracle TimesTen In-Memory Database, version 11.2.2. TimesTen is a relational database that provides very fast response times and high throughput through its memory-centric architecture. TimesTen is designed for low latency, high-volume data, and event and transaction management. A TimesTen database resides entirely in memory, so no disk I/O is required for transactions and query operations. TimesTen is used in applications requiring very fast and predictable response time, such as real-time financial services trading applications and large web applications. TimesTen can be used as the database of record or as a relational cache database to Oracle Database. ROracle provides an interface between R and the database, providing the rich functionality of the R statistical programming environment using the SQL query language. ROracle uses the OCI libraries to handle database connections, providing much better performance than standard ODBC. The latest ROracle enhancements include: support for the Oracle TimesTen In-Memory Database; support for date-time data using R's POSIXct/POSIXlt data types; RAW, BLOB and BFILE data type support; an option to specify the number of rows per fetch operation; an option to prefetch LOB data; break support using Ctrl-C; and statement caching support. TimesTen 11.2.2 contains enhanced support for analytics workloads and complex queries: analytic functions (AVG, SUM, COUNT, MAX, MIN, DENSE_RANK, RANK, ROW_NUMBER, FIRST_VALUE and LAST_VALUE); analytic clauses (OVER PARTITION BY and OVER ORDER BY); multidimensional grouping operators, both grouping clauses (GROUP BY CUBE, GROUP BY ROLLUP, GROUP BY GROUPING SETS) and grouping functions (GROUP, GROUPING_ID, GROUP_ID); the WITH clause, which allows repeated references to a named subquery block; aggregate expressions over DISTINCT expressions; general expressions that return a character string in the source or a pattern within the LIKE predicate; and the ability to order nulls first or last in a sort result (NULLS FIRST or NULLS LAST in the ORDER BY clause). Note: some functionality is only available with Oracle Exalytics; refer to the TimesTen product licensing document for details. Connecting to TimesTen is easy with ROracle. Simply install and load the ROracle package and load the driver. > install.packages("ROracle") > library(ROracle) Loading required package: DBI > drv <- dbDriver("Oracle") Once the ROracle package is installed, create a database connection object and connect to a TimesTen direct driver DSN as the OS user. > conn <- dbConnect(drv, username ="", password="", dbname = "localhost/SampleDb_1122:timesten_direct") You have the option to report the server type - Oracle or TimesTen: > print (paste ("Server type =", dbGetInfo (conn)$serverType)) [1] "Server type = TimesTen IMDB" To create tables in the database using R data frame objects, use the function dbWriteTable. In the following example we write the built-in iris data frame to TimesTen. The iris data set is a small example data set containing 150 rows and 5 columns. We include it here not to highlight performance, but so users can easily run this example in their R session. > dbWriteTable (conn, "IRIS", iris, overwrite=TRUE, ora.number=FALSE) [1] TRUE Verify that the newly created IRIS table is available in the database.
To list the available tables and table columns in the database, use dbListTables and dbListFields, respectively. > dbListTables (conn) [1] "IRIS" > dbListFields (conn, "IRIS") [1] "SEPAL.LENGTH" "SEPAL.WIDTH" "PETAL.LENGTH" "PETAL.WIDTH" "SPECIES" To retrieve a summary of the data from the database we need to save the results to a local object. The following call saves the results of the query as a local R object, iris.summary. The ROracle function dbGetQuery is used to execute an arbitrary SQL statement against the database. When connected to TimesTen, the SQL statement is processed completely within main memory for the fastest response time. > iris.summary <- dbGetQuery(conn, 'SELECT SPECIES, AVG ("SEPAL.LENGTH") AS AVG_SLENGTH, AVG ("SEPAL.WIDTH") AS AVG_SWIDTH, AVG ("PETAL.LENGTH") AS AVG_PLENGTH, AVG ("PETAL.WIDTH") AS AVG_PWIDTH FROM IRIS GROUP BY ROLLUP (SPECIES)') > iris.summary SPECIES AVG_SLENGTH AVG_SWIDTH AVG_PLENGTH AVG_PWIDTH 1 setosa 5.006000 3.428000 1.462 0.246000 2 versicolor 5.936000 2.770000 4.260 1.326000 3 virginica 6.588000 2.974000 5.552 2.026000 4 <NA> 5.843333 3.057333 3.758 1.199333 Finally, disconnect from the TimesTen database. > dbCommit (conn) [1] TRUE > dbDisconnect (conn) [1] TRUE We encourage you to download Oracle software for evaluation from the Oracle Technology Network. See these links for our software: TimesTen In-Memory Database, ROracle. As always, we welcome comments and questions on the TimesTen and Oracle R technical forums.

    Read the article

  • A Patent for Workload Management Based on Service Level Objectives

    - by jsavit
    I'm very pleased to announce that after a tiny :-) wait of about 5 years, my patent application for a workload manager was finally approved. Background Many operating systems have a resource manager which lets you control machine resources. For example, Solaris provides controls for CPU with several options: shares for proportional CPU allocation (if you have twice as many shares as I do, and we are competing for CPU, you'll get about twice as many CPU cycles); dedicated CPU allocation, in which a number of CPUs are exclusively dedicated to an application's use (you can say that a zone or project "owns" 8 CPUs on a 32 CPU machine, for example); and capped CPU, in which you specify the upper bound, or cap, of how much CPU an application gets (for example, you can throttle an application to 0.125 of a CPU). (This isn't meant to be an exhaustive list of Solaris RM controls.) Workload management Useful as that is (and tragic that some other operating systems have little resource management and isolation, and frighten people into running only 1 app per OS instance - and wastefully size every server for the peak workload it might experience), that's not really workload management. With resource management one controls the resources, and hopes that's enough to meet application service objectives. In fact, we hold resource distribution constant, see if that was good enough, and adjust resource distribution if that didn't meet service level objectives. Here's an example of what happens today: Let's try 30% dedicated CPU. Not enough? Let's try 80%. Oh, that's too much, and we're achieving much better response time than the objective, but other workloads are starving. Let's back that off and try again. It's not the process I object to - it's that we too often do this manually. Worse, we sometimes identify and adjust the wrong resource and fiddle with that to no useful result. Back in my days as a customer managing large systems, one of my users would call me up to beg for a "CPU boost": Me: "it won't make any difference - there's plenty of spare CPU to be had, and your application is completely I/O bound." User: "Please do it anyway." Me: "oh, all right, but it won't do you any good." (I did, because he was a friend, but it didn't help.) Prior art There are some operating environments that take a stab at workload management (rather than resource management) but I find them lacking. I know of one that uses synthetic "service units" composed of the sum of CPU, I/O and memory allocations multiplied by weighting factors. A workload is set to make a target rate of service units consumed per second. But this seems to be missing a key point: what is the relationship between artificial 'service units' and actually meeting a throughput or response time objective? What if I get plenty of one of the components (so am getting enough service units), but not enough of the resource that's needed to remove the bottleneck? Actual workload management That's not really the answer either. What is needed is to specify a workload's service levels in terms of externally visible metrics that are meaningful to a business, such as response times or transactions per second, and have the workload manager figure out which resources are not being adequately provided, and then adjust them as needed. If an application is not meeting its service level objectives and the reason is that it's not getting enough CPU cycles, adjust its CPU resource accordingly.
If the reason is that the application isn't getting enough RAM to keep its working set in memory, then adjust its RAM assignment appropriately so it stops swapping. Simple idea, but that's a task we keep dumping on system administrators. In other words - don't hold the number of CPU shares constant and watch the achievement of service level vary. Instead, hold the service level constant, and dynamically adjust the number of CPU shares (or amount of other resources like RAM or I/O bandwidth) in order to meet the objective. Instrumenting non-instrumented applications There's one little problem here: how do I measure application performance in a way relating to a service level. I don't want to do it based on internal resources like number of CPU seconds it received per minute - We need to make resource decisions based on externally visible and meaningful measures of performance, not synthetic items or internal resource counters. If I have a way of marking the beginning and end of a transaction, I can then measure whether or not the application is meeting an objective based on it. If I can observe the delay factors for an application, I can see which resource shortages are slowing an application enough to keep it from meeting its objectives. I can then adjust resource allocations to relieve those shortages. Fortunately, Solaris provides facilities for both marking application progress and determining what factors cause application latency. The Solaris DTrace facility let's me introspect on application behavior: in particular I can see events like "receive a web hit" and "respond to that web hit" so I can get transaction rate and response time. DTrace (and tools like prstat) let me see where latency is being added to an application, so I know which resource to adjust. Summary After a delay of a mere few years, I am the proud creator of a patent (advice to anyone interested in going through the process: don't hold your breath!). The fundamental idea is fairly simple: instead of holding resource constant and suffering variable levels of success meeting service level objectives, properly characterise the service level objective in meaningful terms, instrument the application to see if it's meeting the objective, and then have a workload manager change resource allocations to remove delays preventing service level attainment. I've done it by hand for a long time - I think that's what a computer should do for me.
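    The feedback loop described above (hold the service level objective constant and vary the resource allocation) can be sketched in a few lines. This is a toy illustration only, not the patented mechanism; MeasureResponseTime and SetCpuShares are hypothetical stand-ins for real instrumentation (e.g. DTrace-derived latency) and real resource controls (e.g. Solaris CPU shares).

    ```csharp
    using System;
    using System.Threading;

    // Toy sketch: keep the externally meaningful objective fixed and adjust the
    // resource allocation until the objective is met. Placeholders throughout.
    class WorkloadManagerSketch
    {
        const double ObjectiveMs = 200.0;  // service level objective: response time
        static int cpuShares = 100;        // current resource allocation

        static void Main()
        {
            while (true)                   // runs forever; a real manager would be a daemon
            {
                double observedMs = MeasureResponseTime();              // from app instrumentation
                if (observedMs > ObjectiveMs * 1.1)
                    cpuShares = (int)(cpuShares * 1.25);                // missing the objective: add resources
                else if (observedMs < ObjectiveMs * 0.9)
                    cpuShares = Math.Max(10, (int)(cpuShares * 0.9));   // over-provisioned: give some back
                SetCpuShares(cpuShares);
                Thread.Sleep(5000);                                     // re-evaluate periodically
            }
        }

        static double MeasureResponseTime() { /* hypothetical: DTrace-derived latency */ return 180.0; }
        static void SetCpuShares(int shares) { Console.WriteLine("shares -> " + shares); }
    }
    ```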

    Read the article

  • Application Composer: Exposing Your Customizations in BI Analytics and Reporting

    - by Richard Bingham
    Introduction This article explains in simple terms how to ensure the customizations and extensions you have made to your Fusion Applications are available for use in reporting and analytics. It also includes four embedded demo videos from our YouTube channel (if they don't appear check the browser address bar for a blocking shield icon). If you are new to Business Intelligence consider first reviewing our getting started article, and you can read more about the topic of custom subject areas in the documentation book Extending Sales. There are essentially four sections to this post. First we look at how custom fields added to standard objects are made available for reporting. Secondly we look at creating custom subject areas on the standard objects. Next we consider reporting on custom objects, starting with simple standalone objects, then child custom objects, and finally custom objects with relationships. Finally this article reviews how flexfields are exposed for reporting. Whilst this article applies to both Cloud/SaaS and on-premises deployments, if you are an on-premises developer then you can also use the BI Administration Tool to customize your BI metadata repository (the RPD) and create new subject areas. Whilst this is not covered here you can read more in Chapter 8 of the Extensibility Guide for Developers. Custom Fields on Standard Objects If you add a custom field to your standard object then it's likely you'll want to include it in your reports. This is very simple, since all new fields are instantly available in the "[objectName] Extension" folder in existing subject areas. The following two minute video demonstrates this. Custom Subject Areas for Standard Objects You can create your own subject areas for use in analytics and reporting via Application Composer. An example use-case could be to simplify the seeded subject areas, since they sometimes contain complex data fields and internal values that could confuse business users. One thing to note is that you cannot create subject areas in a sandbox, as it is not supported by BI, so once your custom object is tested and complete you'll need to publish the sandbox before moving forwards. The subject area creation processes is essentially two-fold. Once the request is submitted the ADF artifacts are generated, then secondly the related metadata is sent to the BI presentation server API's to make the updates there. One thing to note is that this second step may take up to ten minutes to complete. Once finished the status of the custom subject area request should show as 'OK' and it is then ready for use. Within the creation processes wizard-like steps there are three concepts worth highlighting: Date Flattening - this feature permits the roll up of reports at various date levels, such as data by week, month, quarter, or year. You simply check the box to enable it for that date field. Measures - these are your own functions that you can build into the custom subject area. They are related to the field data type and include min-max for dates, and sum(), avg(), and count() for  numeric fields. Implicit Facts - used to make the BI metadata join between your object fields and the calculated measure fields. The advice is to choose the most frequently used measure to ensure consistency. This video shows a simple example, where a simplified subject area is created for the customer 'Contact' standard object, picking just a few fields upon which users can then create reports. 
    Custom Objects Custom subject areas support three types of custom objects. First is a simple standalone custom object, for which the same process mentioned above applies. The next is a custom child object created on a standard object parent, and finally a custom object that is related to a parent object - usually through a dynamic choice list. Whilst the steps in each of these last two are mostly the same, there are differences in the way you choose the objects and their fields. This is illustrated in the videos below. The first video shows the process for creating a custom subject area for a simple standalone custom object. The second video demonstrates how to create custom subject areas for custom objects that are of parent:child type, as well as those with dynamic-choice-list relationships. Flexfields Dynamic and Extensible Flexfields satisfy a similar requirement as custom fields (for Application Composer), with flexfields common across the Fusion Financials, Supply Chain and Procurement, and HCM applications. The basic principle is that when you enable and configure your flexfields, in the edit page under each segment region (for both global and context segments) there is a BI Enabled check box. Once this is checked and you've completed your configuration, you run the Scheduled Process job named 'Import Oracle Fusion Data Extensions for Transactional Business Intelligence' to generate and migrate the related BI artifacts and data. This applies for dynamic, key, and extensible flexfields. Of course there is more to consider in terms of how you wish your flexfields to be implemented and exposed in your reports, and details are given in Chapter 4 of the Extending Applications guide.

    Read the article

  • Defining scope for Record Count functoid:

    - by ArunManick
    Defining scope for Record Count functoid: Problem: One of the most common scenarios in BizTalk is calculating the record count of a repeating structure. BizTalk has come up with an advanced functoid called the Record Count functoid which will give the record count for the repeating structure; however, you cannot define the scope for a Record Count functoid, because it accepts exactly one parameter, which can be a repeating record or a field element. If somebody doesn’t know what “scope” means, I will explain with a simple example. Consider that we have a source schema with the structure Country -> State -> City. Country will have various states and each state will have different cities. Now you want to calculate the no. of cities present in each state. Here scope is defined at the parent node “State”. The traditional Record Count functoid will give the total no. of cities present in the source message and not the state-level city count.
    Source Schema:
    Destination Schema:
    Solution #1: As the title indicates, we are not going to add one more parameter to the Record Count functoid. Instead of that, we are going to achieve the solution with the help of the Scripting functoid with an Inline XSLT script. XSLT is basically the transformation language used in the mapping. “No.OfCities” indicates the destination field name to which we are going to send the value. In count(City), “count” refers to the built-in XPath function used in XSLT and “City” refers to the source schema record name. Here you can find the list of built-in functions available in XSLT. The mapping will look as follows:
    The 2 Record Count functoids used in this map will give the total number of states and the total number of cities in the input message.
    Solution #2: If someone doesn’t like XSLT code and they wish to achieve the solution using functoids alone, then here is another solution.
    Use the Logical Existence functoid to check whether “City” exists or not.
    Connect the output of the Logical Existence functoid to the Value Mapping functoid with the second parameter as the constant “1”. Hence if the first parameter is TRUE it will give the output “1”.
    Connect the output of the Value Mapping functoid to the Cumulative Sum functoid with scope “1”.
    This will calculate the city count at the state level. The mapping will look as follows:
    Let us see the sample input and the map output.
    Input:
    <?xml version="1.0" encoding="utf-8"?>
    <ns0:Country xmlns:ns0="http://RecordCount.Source">
      <State>
        <StateName>Tamilnadu</StateName>
        <City>
          <CityName>Pollachi</CityName>
        </City>
        <City>
          <CityName>Coimbatore</CityName>
        </City>
        <City>
          <CityName>Chennai</CityName>
        </City>
      </State>
      <State>
        <StateName>Kerala</StateName>
        <City>
          <CityName>Palakad</CityName>
        </City>
      </State>
      <State>
        <StateName>Karnataka</StateName>
        <City>
          <CityName>Bangalore</CityName>
        </City>
        <City>
          <CityName>Mangalore</CityName>
        </City>
      </State>
    </ns0:Country>
    Output:
    <ns0:Country xmlns:ns0="http://RecordCount.Destination">
      <No.OfStates>3</No.OfStates>
      <No.OfCities>6</No.OfCities>
      <States>
        <No.OfCities>3</No.OfCities>
      </States>
      <States>
        <No.OfCities>1</No.OfCities>
      </States>
      <States>
        <No.OfCities>2</No.OfCities>
      </States>
    </ns0:Country>
    Conclusion: This is my first post and I hope you enjoyed it. -Arun
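    For readers who think in code rather than maps, the scoped count that the Inline XSLT (and the Logical Existence / Value Mapping / Cumulative Sum chain) produces is just a count of City nodes per State. A rough LINQ to XML sketch against the sample input above; the file name is assumed and the element names are taken from the sample:

    ```csharp
    using System;
    using System.Linq;
    using System.Xml.Linq;

    class RecordCountScopeDemo
    {
        static void Main()
        {
            // Same shape as the sample input message above (file name assumed).
            XDocument doc = XDocument.Load("input.xml");
            XNamespace ns0 = "http://RecordCount.Source";

            // State and City are unqualified child elements in the sample.
            var perState = doc.Element(ns0 + "Country")
                              .Elements("State")
                              .Select(s => new
                              {
                                  Name = (string)s.Element("StateName"),
                                  Cities = s.Elements("City").Count() // count scoped to the parent State
                              });

            foreach (var s in perState)
                Console.WriteLine("{0}: {1} cities", s.Name, s.Cities);

            // Unscoped count, i.e. what the plain Record Count functoid returns.
            Console.WriteLine("Total cities: {0}", doc.Descendants("City").Count());
        }
    }
    ```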

    Read the article

  • Hybrid IT or Cloud Initiative – a Perfect Enterprise Architecture Maturation Opportunity

    - by Ted McLaughlan
    All too often in the growth and maturation of Enterprise Architecture initiatives, the effort stalls or is delayed due to lack of “applied traction”. By this, I mean the EA activities - whether targeted towards compliance, risk mitigation or value opportunity propositions – may not be attached to measurable, active, visible projects that could advance and prove the value of EA. EA doesn’t work by itself, in a vacuum, without collaborative engagement and a means of proving usefulness. A critical vehicle to this proof is successful orchestration and use of assets and investment resources to meet a high-profile business objective – i.e. a successful project. More and more organizations are now exploring and considering some degree of IT outsourcing, buying and using external services and solutions to deliver their IT and business requirements – vs. building and operating in-house, in their own data centers. The rapid growth and success of “Cloud” services makes some decisions easier and some IT projects more successful, while dramatically lowering IT risks and enabling rapid growth. This is particularly true for “Software as a Service” (SaaS) applications, which essentially are complete web applications hosted and delivered over the Internet. Whether SaaS solutions – or any kind of cloud solution - are actually, ultimately the most cost-effective approach truly depends on the organization’s business and IT investment strategy. This leads us to Enterprise Architecture, the connectivity between business strategy and investment objectives, and the capabilities purchased or created to meet them. If an EA framework already exists, the approach to selecting a cloud-based solution and integrating it with internal IT systems (i.e. a “Hybrid IT” solution) is well-served by leveraging EA methods. If an EA framework doesn’t exist, or is simply not mature enough to address complex, integrated IT objectives – a hybrid IT/cloud initiative is the perfect project to advance and prove the value of EA. Why is this? For starters, the success of any complex IT integration project - spanning multiple systems, contracts and organizations, public and private – depends on active collaboration and coordination among the project stakeholders. For a hybrid IT initiative, inclusive of one or more cloud services providers, the IT services, business workflow and data governance challenges alone can be extremely complex, requiring many diverse layers of organizational expertise and authority. Establishing subject matter expertise, authorities and strategic guidance across all the disciplines involved in a hybrid-IT or hybrid-cloud system requires top-level, comprehensive experience and collaborative leadership.
Tools and practices reflecting industry expertise and EA alignment can also be very helpful – such as Oracle’s “Cloud Candidate Selection Tool”. Using tools like this, and facilitating this critical collaboration by leading, organizing and coordinating the input and expertise into a shared, referenceable, reusable set of authority models and practices – this is where EA shines, and where Enterprise Architects can be most valuable. The “enterprise”, in this case, becomes something greater than the core organization – it includes internal systems, public cloud services, 3rd-party IT platforms and datacenters, distributed users and devices; a whole greater than the sum of its parts. Through facilitated project collaboration, leading to identification or creation of solid governance models and processes, a durable and useful Enterprise Architecture framework will usually emerge by itself, if not actually identified and managed as such. The transition from planning collaboration to actual coordination, where the program plan, schedule and resources become synchronized and aligned to other investments in the organization portfolio, is where EA methods and artifacts appear and become most useful. The actual scope and use of these artifacts, in the context of this project, can then set the stage for the most desirable, helpful and pragmatic form of the now-maturing EA framework and community of practice. Considering or starting a hybrid-IT or hybrid-cloud initiative? Running into some complex relationship challenges? This is the perfect time to take advantage of your new, growing or possibly latent Enterprise Architecture practice.

    Read the article

  • How to fix Ubuntu 12.04.3 boot to black screen full of errors in white text, after upgrading on dell inspiron 1501

    - by Ibuntu
    I am running a Dell Inspiron 1501 I use Linux only. No Microsoft or Apple operating systems (or really anything closed-source). I've only been using Linux for a little over a year but I'm starting to gain a comfortable level of familiarity with the system and terminology. I've been having some issues with Quantel Quetzal and Raring Ringtail, especially with older hardware, so I opted to install Ubuntu 12.04.3 Precise Pangolin on the Inspiron 1501. I checked my MD5 sum after downloading my ISO and all was good. I have in fact used this iso/dvd to install Precise Pangolin successfully on a few other systems (some of which are even older than this laptop). Install goes fine. The wireless card doesn't work out of the box but this is a known issue which is fairly easy to fix. So, first thing I did was open up a terminal and run sudo apt-get update && sudo apt-get upgrade which, part way through, crashed (I assume lightdm and possibly X) and took me to a black screen filled with white lines of text that were either errors or just the ouputs of commands. The reason I say that is because I was unable to gleam any useful information from the output on the screen. I did take a picture however and will post a link. After that, every time I boot the system it goes right to that black screen posting all the error messages or output in white text. I never get a purple Ubuntu splash, so from what I can tell after reading this wiki article: https://wiki.ubuntu.com/X/Troubleshooting/BlankScreen That means that after the kernel is selected, it is unable to correctly implement the settings it needs. If the purple splash never shows, the frame buffer was never set correctly right? This leads me to believe that it could be a kernel issue? The wiki suggested to try and pinpoint the issue by rolling back kernels until I find one that works. Is this my best option? I think I'm going to give it a try anyways and will let everyone know if I am able to solve the issue this way. I have since done a few reinstalls and some trouble-shooting including a couple hours scouring the net for anyone with any kind of similar issue. Most of the issues I could find involved getting a black screen after login and none of them said anything about any information output on this black screen. My reinstalls have taught me that there is no issue updating, but as soon as I run sudo apt-get upgrade my system goes to the black screen and every time I boot it up it does the same thing. The only way to fix is by reinstall. I never get any ability to log in. After a hard power off to the laptop (because I cannot use ctrl+alt+del to reboot) when it boots again it goes to the grub boot menu and I can select between regular boot, recovery mode and the two memtest options. I never tried the memtest options but the other two both lead to the same black screen. Some people having a black/blank screen issue claim to have fixed it by using 12.10 or 13.04 but I believe they were having a different issue where they got a black/blank screen after logging in. I think I will still give these images a try, but mostly figured I would just wait another day or two for 13.10. Other things I figured I would try from the following three articles: After logging in, there's a black screen and my cursor, nothing else! 
in Ubuntu 12.10 Black Screen on Login After Upgrading to 12.04 I can't get to the login screen include opening a terminal using ctrl+alt+f1 and trying a variety of reseting unity, x settings, lightdm (or switching to gdm); but I doubt this will work or that I will even be able to access a terminal. I'm pretty sure the whole system is stuck after it loads the last line on the black screen. I will try these things and post more information when I have. Hopefully someone has an idea in the meantime and I will keep checking back trying to find a solution. Thank you. Here are 3 different pictures of the error message. I had to take with my phone: http://ubuntuone.com/album/0TBBkxmVajJIQQtoN9mVdN

    Read the article

  • Styling ASP.NET MVC Error Messages

    - by MightyZot
    Originally posted on: http://geekswithblogs.net/MightyZot/archive/2013/11/11/styling-asp.net-mvc-error-messages.aspxOff the cuff, it may look like you’re stuck with the presentation of your error messages (model errors) in ASP.NET MVC. That’s not the case, though. You actually have quite a number of options with regard to styling those boogers. Like many of the helpers in MVC, the Html.ValidationMessageFor helper has multiple prototypes. One of those prototypes lets you pass a dictionary, or anonymous object, representing attribute values for the resulting markup. @Html.ValidationMessageFor( m => Model.Whatever, null, new { @class = “my-error” }) By passing the htmlAttributes parameter, which is the last parameter in the call to the prototype of Html.ValidationMessageFor shown above, I can style the resulting markup by associating styles to the my-error css class.  When you run your MVC project and view the source, you’ll notice that MVC adds the class field-validation-valid or field-validation-error to a span created by the helper. You could actually just style those classes instead of adding your own…it’s really up to you. Now, what if you wanted to move that error message around? Maybe you want to put that error message in a box or a callout. How do you do that? When I first started using MVC, it didn’t occur to me that the Html.ValidationMessageFor helper just spits out a little bit of markup. I wanted to put the error messages in boxes with white backgrounds, our site originally had a black background, and show a little nib on the side to make them look like callouts or conversation bubbles. Not realizing how much freedom there is in the styling and markup, and after reading someone else’s post, I created my own version of the ValidationMessageFor helper that took out the span and replaced it with divs. I styled the divs to produce the effect of a popup box and had a lot of trouble with sizing and such. That’s a really silly and unnecessary way to solve this problem. If you want to move your error messages around, all you have to do is move the helper. MVC doesn’t appear to care where you put it, which makes total sense when you think about it. Html.ValidationMessageFor is just spitting out a little markup using a little bit of reflection on the name you’re passing it. All you’ve got to do to style it the way you want it is to put it in whatever markup you desire. Take a look at this, for example… <div class=”my-anchor”>@Html.ValidationMessageFor( m => Model.Whatever )</div> @Html.TextBoxFor(m => Model.Whatever) Now, given that bit of HTML, consider the following CSS… <style> .my-anchor { position:relative; } .field-validation-error {    background-color:white;    border-radius:4px;    border: solid 1px #333;    display: block;    position: absolute;    top:0; right:0; left:0;    text-align:right; } </style> The my-anchor class establishes an anchor for the absolutely positioned error message. Now you can move the error message wherever you want it relative to the anchor. Using css3, there are some other tricks. For example, you can use the :not(:empty) selector to select the span and apply styles based upon whether or not the span has text in it. Keep it simple, though. Moving your elements around using absolute positioning may cause you issues on devices with screens smaller than your standard laptop or PC. While looking for something else recently, I saw someone asking how to style the output for Html.ValidationSummary.  
Html.ValidationSummary is the helper that will spit out a list of property errors, general model errors, or both. Html.ValidationSummary spits out fairly simple markup as well, so you can use the techniques described above with it also. The resulting markup is a <ul><li></li></ul> unordered list of error messages that carries the class validation-summary-errors. In the forum question, the user was asking how to hide the error summary when there are no errors. Their errors were in a red box and they didn’t want to show an empty red box when there aren’t any errors. Obviously, you can use the css3 selectors to apply different styles to the list when it’s empty and when it’s not empty; however, that’s not supported in all browsers. Well, it just so happens that the unordered list carries the style validation-summary-valid when the list is empty. While the Html.ValidationSummary helper renders a visible div containing one invisible list item, you can always just style the whole div with “display:none” when the validation-summary-valid class is applied and make it visible when the validation-summary-errors class is applied. Or, if you don’t like that solution, which I like quite well, you can also check the model state for errors with something like this… int errors = ViewData.ModelState.Sum(ms => ms.Value.Errors.Count); That’ll give you a count of the errors that have been added to ModelState. You can check that and conditionally include markup in your page if you want to. The choice is yours. Obviously, doing most everything you can with styles increases the flexibility of the presentation of your solution, so I recommend going that route when you can. That picture of the fat guy jumping has nothing to do with the article. That’s just a picture of me on the roof and I thought it was funny. Doesn’t every post need a picture?
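    If you prefer the code-based check, one way to package it is an HtmlHelper extension that emits the summary only when ModelState actually contains errors. This is a sketch, not part of MVC itself; ValidationSummaryIfErrors is a made-up name:

    ```csharp
    using System.Linq;
    using System.Web.Mvc;
    using System.Web.Mvc.Html;

    public static class ValidationHelpers
    {
        // Renders Html.ValidationSummary() only when ModelState holds at least one error;
        // otherwise emits nothing, so no empty red box appears.
        public static MvcHtmlString ValidationSummaryIfErrors(this HtmlHelper html)
        {
            int errors = html.ViewData.ModelState.Sum(ms => ms.Value.Errors.Count);
            return errors > 0 ? html.ValidationSummary() : MvcHtmlString.Empty;
        }
    }
    ```

    In a view you would then call @Html.ValidationSummaryIfErrors() instead of @Html.ValidationSummary().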

    Read the article

  • Answers to Conference Revenue Tweet Questions

    - by D'Arcy Lussier
    Originally posted on: http://geekswithblogs.net/dlussier/archive/2014/05/27/156612.aspxI tweeted this the other day… …and I had some people tweet back questioning/asking about the profit number. So here’s how I came to that figure. Total Revenue Let’s talk total revenue first. This conference has a huge list of companies/organizations paying some amount for sponsorship. Platinum ($1500) x 5 = $7500 Gold ($1000) x 3 = $3000 Silver ($500) x 9 = $4500 Bronze ($250) x 13 = $3250 There’s also a title sponsor level but there’s no mention of how much that is…more than $1500 though, so let’s just say $2500. Total Sponsorship Revenue: $20750.00 For registrations, this conference is claiming over 300 attendees. We’ll just calculate at 300 and the discounted “member rate” – $249. Total Registration Revenue: $74700.00 Booth space is also sold for a vendor area, but let’s just leave that out of the calculation. Total Event Revenue: $95450.00 Now that we know how much money we’re playing with, let’s knock out the costs for the event. Total Costs Hard Costs Audio/Visual Services $2000 Conference Rooms (4 Breakouts + Plenary) $2500 Insurance $700 Printing/Signage $1500 Travel/Hotel Rooms $2000 Keynotes $2000 So let’s talk about these hard costs first. First you may be asking about the Audio Visual. Yes those services can be that high, actually higher. But since there’s an A/V company touted as the official A/V provider, I gotta think there’s some discount for being branded as such. Conference rooms are actually an inflated amount of $500 per. Venues make money on the food they sell at events, not on room rentals. The more food, the cheaper the rooms tend to be offered at. Still, for the sake of argument, let’s set the rooms at $500 each knowing that they could be lower. For travel and hotel rooms…it appears that most of the speakers at this conference are local, meaning there’s no travel or hotel cost. But a few of them I wasn’t too sure…so let’s factor in enough to cover two outside speakers (airfare and hotel). There are two keynotes for this event and depending on the event those may be paid gigs. I’m not sure if they are or not, but considering the closing one is a comedian I’m going to add some funds here for that just in case. Total Hard Costs: $10700 Now that the hard costs are out of the way, let’s talk about the food costs. Food Costs The conference is providing a continental breakfast (YEEEESH!), some level of luncheon, and I have to assume coffee breaks in between. Let’s look at those costs. Continental Breakfast $12 per person Lunch Buffet $18 per person Coffee Breaks (2) $6 per person (or $3 a cup) Snacks (2) $10 per person (or $5 each) Note that the lunch buffet assumes a *good* lunch buffet – two entrees, starch, vegetable, salads, and bread. Not sure if there’ll be snacks during coffee breaks but let’s assume so. Total Food Cost Per Person: $46 Food Cost: $14950 Gratuity: $2691 Total Food Cost: $17641 Total food cost is based on the $46 per person cost x 325. 300 for attendance, 12 for speakers, extra 13 for volunteers/organizers. Gratuity is 18%. Grand Totals So let’s sum things up here. Total Costs Hard Costs: $10700.00 Food Costs: $17641.00 Total:          $28341.00 Taxes:         $3685.00 Grand Total  $32026.00 Total Revenue Sponsorship  $20750 Registration   $74700 Grand Total   $95450.00 Total Profit $63424.00 Now what if the registration numbers were lower and they only got 100 people to show up. In that scenario there’d still be a profit of just under $26000. 
    Closing Comments A couple of things to note:
    - I haven’t factored in anything for prizes. Not sure if any will be given out.
    - We didn’t add in the booth space revenue.
    - We’re assuming speakers aren’t getting paid, but even if they were, at the high end it’s $12000 ($1000 per session), which is probably an inflated number for local speakers.
    - Note that all registrations were set to the “member” discounted price. The non-member registration price is higher. There is also an option for those that just want to show up for the opening keynote.
    There you have it! Let me know if you have any questions. D
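    For anyone who wants to replay the arithmetic, here is a quick sketch that reproduces the figures above (the title sponsor amount is the same $2500 guess used in the post):

    ```csharp
    using System;

    class ConferenceRevenueCheck
    {
        static void Main()
        {
            // Sponsorship: 5 platinum, 3 gold, 9 silver, 13 bronze, plus the guessed title sponsor.
            double sponsorship = 5 * 1500 + 3 * 1000 + 9 * 500 + 13 * 250 + 2500;  // 20750
            double registration = 300 * 249;                                        // 74700
            double revenue = sponsorship + registration;                            // 95450

            double hardCosts = 2000 + 2500 + 700 + 1500 + 2000 + 2000;              // 10700
            double foodPerPerson = 12 + 18 + 2 * 6 + 2 * 10;                         // 46
            double food = foodPerPerson * 325;                                       // 300 attendees + 12 speakers + 13 volunteers
            double foodWithGratuity = food * 1.18;                                   // ~17641 with 18% gratuity
            double taxes = 3685;                                                     // as stated in the post
            double totalCosts = hardCosts + foodWithGratuity + taxes;                // ~32026

            Console.WriteLine("Revenue {0:C0}, costs {1:C0}, profit {2:C0}",
                              revenue, totalCosts, revenue - totalCosts);            // profit ~63424
        }
    }
    ```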

    Read the article

  • Force SSRS 2008 to use SSRS 2005 CSV rendering

    - by Kash
    We are upgrading our report server from SSRS 2005 to SSRS 2008 R2. I have an issue with CSV export rendering for SSRS 2008 where the SUM of columns are appearing on the right side of the detail values in 2008 instead of the left side like in 2005 as shown in the below blocks. 117 and 131 are the sums of Column2 and Column3 respectively. SSRS 2005 CSV Output Column2_1,Column3_1,Column2,Column3 117,131,1,2 117,131,1,2 117,131,60,23 117,131,30,15 117,131,25,89 SSRS 2008 CSV Output Column2,Column3,Column2_1,Column3_1 1,2,117,131 1,2,117,131 60,23,117,131 30,15,117,131 25,89,117,131 I understand that the CSV renderer has gone through major changes in SSRS 2008 R2 with the support for charts and gauges and more importantly it provides 2 modes: the default Excel mode and Compliant mode. But neither mode helps fix this issue. The Compliant mode was supposed to be closest to that of 2005 but apparently it is not close enough for my case. My Question: Is there a way to force SSRS 2008 fall back a report to a backward compatibility mode so that it exports into a 2005 CSV format? Solution tried: a) Using 2005-based CRIs Based on this article on ExecutionLog2, if SSRS 2008 R2 encounters a report whose auto-upgrade is not possible (e.g. reports that were built with 2005-based CustomReportItem controls), those particular reports will be processed with the old Yukon engine in a "transparent backwards-compatibility mode". It seems like it falls back to its previous version mode (2005) and attempts to render it. So I tried using a 2005-based barcode CustomReportItem and deployed to a SSRS 2008 R2 report server, but it shows the same result as before though it suppressed the barcode. This would be because SSRS 2008 R2 finds a way to suppress part of the report output and displays the rest. It would be great to find a 2005-based CRI that makes SSRS 2008 R2 process it with its old Yukon engine. Please note that quite possibly, even if it uses the "old Yukon processing engine", it might still use the new CSV renderer hence it shows the same output. If that is true, then this option is moot. b) Using XML renderer We can use a custom XML renderer and then use XSLT to convert the xml to appropriate CSV but this would mean that we need to convert all our 200 reports. Hence this is not feasible. Please note that we do not have the option of having SSRS 2005 and SSRS 2008 R2 deployed side by side.
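    For reference, option (b) above (XML renderer plus XSLT) would look roughly like the sketch below. The report file names and the stylesheet are hypothetical, and as noted it would still mean writing and maintaining an XSLT for each of the 200 report layouts:

    ```csharp
    using System.Xml.Xsl;

    class XmlExportToCsv
    {
        static void Main()
        {
            // report.xml: output of the SSRS XML rendering extension (rs:Format=XML).
            // report-to-csv.xslt: a hand-written stylesheet (xsl:output method="text")
            // that emits the columns in the 2005-style order, sums on the left.
            var xslt = new XslCompiledTransform();
            xslt.Load("report-to-csv.xslt");
            xslt.Transform("report.xml", "report.csv");
        }
    }
    ```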

    Read the article

  • Timeout Expired error Using LINQ

    - by Refracted Paladin
    I am going to sum up my problem first and then offer massive details and what I have already tried. Summary: I have an internal winform app that uses Linq 2 Sql to connect to a local SQL Express database. Each user has their own DB and the DBs stay in sync through Merge Replication with a Central DB. All DBs are SQL 2005 (SP2 or SP3). We have been using this app for over 5 months now but recently our users are getting a Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. Detailed: The strange part is they get that in two different locations (2 different LINQ methods) and only the first time they fire in a given time period (~5 mins). One LINQ method is pulling all records that match a FK ID and then manipulating them to form a hierarchy view for a TreeView. The second is pulling all records that match a FK ID and dumping them into a DataGridView. The only things I can find in common with the 2 are that the first IS an IEnumerable and the second converts itself from IQueryable - IEnumerable - DataTable... I looked at the queries in Profiler and they 'seemed' normal. They are not very complicated queries. They are only pulling back 10 - 90 records, from one table. Any thoughts, suggestions, hints whatever would be greatly appreciated. I am at my wit's end on this.... public IList<CaseNoteTreeItem> GetTreeViewDataAsList(int personID) { var myContext = MatrixDataContext.Create(); var caseNotesTree = from cn in myContext.tblCaseNotes where cn.PersonID == personID orderby cn.ContactDate descending, cn.InsertDate descending select new CaseNoteTreeItem { CaseNoteID = cn.CaseNoteID, NoteContactDate = Convert.ToDateTime(cn.ContactDate). ToShortDateString(), ParentNoteID = cn.ParentNote, InsertUser = cn.InsertUser, ContactDetailsPreview = cn.ContactDetails.Substring(0, 75) }; return caseNotesTree.ToList<CaseNoteTreeItem>(); } AND THIS ONE public static DataTable GetAllCNotes(int personID) { using (var context = MatrixDataContext.Create()) { var caseNotes = from cn in context.tblCaseNotes where cn.PersonID == personID orderby cn.ContactDate select new { cn.ContactDate, cn.ContactDetails, cn.TimeSpentUnits, cn.IsCaseLog, cn.IsPreEnrollment, cn.PresentAtContact, cn.InsertDate, cn.InsertUser, cn.CaseNoteID, cn.ParentNote }; return caseNotes.ToList().CopyLinqToDataTable(); } }
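    Not an answer to the root cause, but one knob that may be worth noting while investigating: LINQ to SQL's DataContext exposes a CommandTimeout property (in seconds, default 30), so the first slow call after an idle period can at least be given more headroom while it is profiled. A sketch against the first method above, assuming MatrixDataContext derives from System.Data.Linq.DataContext:

    ```csharp
    var myContext = MatrixDataContext.Create();
    // CommandTimeout is in seconds; raising it is a diagnostic aid while profiling,
    // not a fix for whatever is stalling the first call in each ~5 minute window.
    myContext.CommandTimeout = 120;
    ```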

    Read the article

  • Revisiting .NET, but what should I focus on?

    - by Wayne M
    After about a two-year hiatus, I'm brushing up on my .NET skills to find a .NET job (my previous two positions have very little development, or development using legacy technologies, so apart from a few very minor apps I have not touched .NET in close to two years). I'm aware of things like ASP.NET MVC, and I have previously read on things like NHibernate and DI/IOC, albeit I have yet to use them apart from very trivial "Hello World" type applications. I have a subscription to Rob Conery's Tekpub website and occasionally watch these videos when I have free time. My concern is this: I don't live in a very technical area. I would be surprised if any but the most tech-savvy companies have heard of, let alone use, ASP.NET MVC, NHibernate (or even LINQ/EF), or know about IoC. I would be willing to bet a large sum of money that 95% of the possible jobs I could obtain will use the following: Visual Source Safe, if any VCS at all ASP.NET 2.0 Webforms (3.5 if lucky) Raw ADO.NET on top of a very thin implementation of the Gateway pattern Stored Procedures in the database for most CRUD operations Gratuitous use of code-behind, with a Service layer if I'm lucky If I were extremely lucky, I might find a shop that has heard of ORMs and either uses one, or has wrote their own data abstraction. Also if I were lucky, the company would be using Model-View-Presenter. In light of this I'm not sure what I should focus on learning. Personally, I would prefer to be using the latest stuff - ASP.NET MVC, NHibernate, jQuery, WCF etc. Reality says I should go back to the basics, since it looks like most potential opportunities aren't going to be anywhere near the cutting edge, or anywhere close to it. And, as much as I would like to find a position and start to show the other developers the benefits, in my past experience this has usually resulted in my being fired for "not being a team player" and doing things the bad old way. So, I am curious how you would approach a situation like this? What should I focus on, in order to A) Reaquaint myself with .NET, and B) Prepare myself to obtain a .NET job again that is more than likely going to use techniques that I and most other knowledgeable developers will scoff at?

    Read the article

  • Jquery-UI tabs : Double loading of the default tab

    - by Stephane
    I use jqueryui-tabs to display a tabbed UI. here is how my markup looks in a MasterPage: <div id="channel-tabs" class="ui-tabs"> <ul class="ui-tabs-nav"> <li><%=Html.ActionLink("Blogs", "Index", "Blog", new { query = Model.Query, lang = Model.SelectedLanguage, fromTo = Model.FromTo, filters = Model.FilterId }, new{ title="Blog Results" }) %></li> <li><%=Html.ActionLink("Forums", "Index", "Forums", new { query = Model.Query, lang = Model.SelectedLanguage, fromTo = Model.FromTo, filters = Model.FilterId }, null) %></li> <li><%=Html.ActionLink("Twitter", "Index", "Twitter", new { query = Model.Query, lang = Model.SelectedLanguage, fromTo = Model.FromTo, filters = Model.FilterId }, null) %></li> </ul> <div id="Blog_Results"> <asp:ContentPlaceHolder ID="ResultPlaceHolder" runat="server"> </asp:ContentPlaceHolder> </div> If the content is loaded via ajax, I return a partial view with the content of the tab. If the content is loaded directly, I load a page that include the content in the ContentPlaceHolder. somewhat like this : <asp:Content ID="Content2" ContentPlaceHolderID="BlogPlaceHolder" runat="server"> <%=Html.Partial("Partial",Model) %> </asp:Content> //same goes for the other tabs. With this in place, if I access the url "/Forums" It loads the forum content in the Blog tab first, trigger the ajax load of the Blog tab and replace the content with the blog content. I tried putting a different placeholder for each tab, but that didn't fix everything either, since when loading "/Forums" it will sure load the forum tab, but the Blog tab will show up first. Furthermore, when using separate placeholders, If I load the "/Blogs" url, It will first load the content statically in the Blog contentplaceholder and then trigger an ajax call to load it a second time and replace it. If I just link the tab to the hashtag, then when loading the forum tabs, I won't get the blog content... How would you achieve the expected behaviour? I feel like I might have a deeper probelm in the organization of my views. Is putting the tabs in the masterpage the way to go? Maybe I should just hijax the links manually and not rely on jquery-ui tabs to do the work for me. I cannot load all tabs by default and display them using the hash tags, I need an ajax loading because it is a search process that can be long. So to sum up : /Forum should load the forum tab, and let the other tabs be loaded with an ajax call when clicking on it. /Twitter should load the twitter tab and let the other tabs.... the same goes for /Blogs and any tabs I would add later. Any idea to have this working properly?

    Read the article

  • Adding rows to a data-bound DataGridView [Winforms]

    - by Mishko
    I want to bind a table from a database to a DataGridView, but I want to also add one more row with a sum of the values in the columns with indexes 3,4,7,8,9... How can I do that? Thanks! DataTable table1 = new DataTable(); double brutoUkupno1 = 0; double porezUkupno1 = 0; double doprinosUkupno1 = 0; double netoUkupno1 = 0; double doprinosTeretUkupno1 = 0; double topliObrokUkupno1 = 0; double regresUkupno1 = 0; Connection con = new Connection(); table1 = con.boundTable(month, Convert.ToInt32(year)); //This is method which returns DataTable table1.Rows.Add(null, null, null, null, null, null, null, null, null, null, null, null, null, null); table1.Rows.Add(null, null, null, null, null, null, null, null, null, null, null, null, null, null); dgv2.Visible = true; dgv2.DataSource = table1; for (int i = 0; i < dgv2.RowCount - 2; i++) { topliObrokUkupno1 += Convert.ToDouble(dgv2.Rows[i].Cells[7].Value); regresUkupno1 += Convert.ToDouble(dgv2.Rows[i].Cells[8].Value); brutoUkupno1 += Convert.ToDouble(dgv2.Rows[i].Cells[9].Value); porezUkupno1 += Convert.ToDouble(dgv2.Rows[i].Cells[10].Value); doprinosUkupno1 += Convert.ToDouble(dgv2.Rows[i].Cells[11].Value); netoUkupno1 += Convert.ToDouble(dgv2.Rows[i].Cells[12].Value); doprinosTeretUkupno1 += Convert.ToDouble(dgv2.Rows[i].Cells[13].Value); //Now I am having problems with this below, putting things above to dgv2 : } dgv2.Rows[dgv2.Rows.Count - 1].Cells[0].Value = "Ukupno"; dgv2.Rows[dgv2.Rows.Count - 1].Cells[3].Value = month.ToString(); dgv2.Rows[dgv2.Rows.Count - 1].Cells[4].Value = year.ToString(); dgv2.Rows[dgv2.Rows.Count - 1].Cells[7].Value = topliObrokUkupno1.ToString(); dgv2.Rows[dgv2.Rows.Count - 1].Cells[8].Value = regresUkupno1.ToString(); dgv2.Rows[dgv2.Rows.Count - 1].Cells[9].Value = brutoUkupno1.ToString(); dgv2.Rows[dgv2.Rows.Count - 1].Cells[10].Value = porezUkupno1.ToString(); dgv2.Rows[dgv2.Rows.Count - 1].Cells[11].Value = doprinosUkupno1.ToString(); dgv2.Rows[dgv2.Rows.Count - 1].Cells[12].Value = netoUkupno1.ToString(); dgv2.Rows[dgv2.Rows.Count - 1].Cells[13].Value = doprinosTeretUkupno1.ToString(); dgv2.Rows[dgv2.RowCount - 2].Height = 3; dgv2.Rows[dgv2.RowCount - 2].DefaultCellStyle.BackColor = Color.Black;
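    One way to avoid looping over grid cells is to compute the totals on the DataTable before binding; DataTable.Compute takes an aggregate expression. This is a sketch only: the real table is addressed by column index above, so the column names passed in here are placeholders.

    ```csharp
    using System;
    using System.Data;

    class SumRowSketch
    {
        // Appends a totals row for the given columns; afterwards the table can be
        // bound to the DataGridView as usual (dgv2.DataSource = table).
        static void AddTotalsRow(DataTable table, params string[] columnsToSum)
        {
            DataRow totals = table.NewRow();
            totals[0] = "Ukupno"; // label in the first column, as in the code above
            foreach (string col in columnsToSum)
            {
                // Compute aggregates over all current rows; no row filter is applied.
                object sum = table.Compute("SUM([" + col + "])", string.Empty);
                totals[col] = (sum == DBNull.Value) ? (object)0 : sum;
            }
            table.Rows.Add(totals);
        }
    }
    ```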

    Read the article

  • MySql multiple selects batching in .net

    - by Amith George
    I have a situation in my application. For each x-axis point in my chart, I am plotting 5 y-axis values. To calculate each of these 5 values, I need to make 4 different queries. Ie, for each x-axis point I need to fire 20 sql queries. Now, I need to plot 40 such points in the my chart. Its resulting in a pathetic performance where it takes close to a minute to get all the data back from the database. Each of 4 different queries consists of a join between 2 tables. One has only 6 rows. The other close to 10,000. Each of the 4 queries has different WHERE clauses, so they are different queries. For each point in the x-axis, only the values for the where clauses change. I have tried combining each of the 4 queries into one big string. Basically batch the four selects. These are again batched for each y-axis value. So, for each x-axis point, I am now firing one big command that consists of 20 different select statements. Technically, I should be experiencing a big performance boost, right? Instead of hitting the db 40x5x4 = 800 times, I am now hitting it just 40 times. But instead of taking 60 seconds, it taking 50-55 seconds... not much of a help. I am using MySql 5.1, and the 6.1 version of its .Net connector. What can I do to improve the performance? Edit: One of the 4 queries is as follows: SELECT SUM(TIME_TO_SEC(TIMEDIFF(T1.col2, T1.col1))* T2.col1 / (3600 *1000)) AS TotalTime FROM Table T1 JOIN Table T2 ON T1.col3 = T2.col3 WHERE T1.col4 = 'i' AND T1.col1 >= '2009-12-25 00:00:00' AND T1.col2 <= '2009-12-26 00:00:00'; The other 3 queries are similar, only the where clause changes slightly. This set of 4 queries is fired 5 times. The first 3 times against the join of table T1 and T2, passing in different values for col4. And the next two times against the join of table T3 and T2 passing in different values for col4. These 5 values are the y-axis values for a particular x-axis point. The data returned by all these queries is the same format. so, we tried doing a UNION ALL on all these queries. No substantial difference. One strange thing, however, after indexing the foreign key on the table T1 [while it contained over a lakh records], the queries were using the index, but they had become slower. At times, the queries would take double the time to return the data.
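    For reference, the batched approach described above (many SELECTs in one command text) is read back with NextResult on the data reader. A stripped-down sketch with MySQL Connector/NET; the connection string and the two placeholder statements stand in for the real twenty:

    ```csharp
    using System;
    using MySql.Data.MySqlClient;

    class BatchedSelects
    {
        static void Main()
        {
            using (var conn = new MySqlConnection("server=localhost;database=test;uid=user;pwd=pass"))
            {
                conn.Open();
                // Two placeholder statements standing in for the 20 per-point queries.
                const string sql = "SELECT 1; SELECT 2;";
                using (var cmd = new MySqlCommand(sql, conn))
                using (var reader = cmd.ExecuteReader())
                {
                    do
                    {
                        while (reader.Read())
                            Console.WriteLine(reader.GetValue(0));
                    } while (reader.NextResult()); // advance to the next SELECT's result set
                }
            }
        }
    }
    ```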

    Read the article

  • Finding minimum cut-sets between bounded subgraphs

    - by Tore
    If a game map is partitioned into subgraphs, how to minimize edges between subgraphs? I have a problem: I'm trying to make A* searches through a grid-based game like Pacman or Sokoban, but I need to find "enclosures". What do I mean by enclosures? Subgraphs with as few cut edges as possible, given a maximum size and minimum size for the number of vertices in each subgraph that act as soft constraints. Alternatively you could say I am looking to find bridges between subgraphs, but it's generally the same problem. Given a game that looks like this, what I want to do is find enclosures so that I can properly find entrances to them and thus get a good heuristic for reaching vertices inside these enclosures. So what I want is to find these colored regions on any given map. My Motivation The reason for me bothering to do this and not just staying content with the performance of a simple Manhattan distance heuristic is that an enclosure heuristic can give more optimal results and I would not have to actually do the A* to get some proper distance calculations, and also for later adding competitive blocking of opponents within these enclosures when playing Sokoban-type games. Also the enclosure heuristic can be used for a minimax approach to finding goal vertices more properly. A possible solution to the problem is the Kernighan-Lin algorithm: function Kernighan-Lin(G(V,E)): determine a balanced initial partition of the nodes into sets A and B do A1 := A; B1 := B compute D values for all a in A1 and b in B1 for (i := 1 to |V|/2) find a[i] from A1 and b[i] from B1, such that g[i] = D[a[i]] + D[b[i]] - 2*c[a][b] is maximal move a[i] to B1 and b[i] to A1 remove a[i] and b[i] from further consideration in this pass update D values for the elements of A1 = A1 / a[i] and B1 = B1 / b[i] end for find k which maximizes g_max, the sum of g[1],...,g[k] if (g_max > 0) then Exchange a[1],a[2],...,a[k] with b[1],b[2],...,b[k] until (g_max <= 0) return G(V,E) My problem with this algorithm is its runtime at O(n^2 * lg(n)); I am thinking of limiting the nodes in A1 and B1 to the border of each subgraph to reduce the amount of work done. I also don't understand the c[a][b] cost in the algorithm: if a and b do not have an edge between them, is the cost assumed to be 0 or infinity, or should I create an edge based on some heuristic? Do you know what c[a][b] is supposed to be when there is no edge between a and b? Do you think my problem is suitable for a multi-level approach? Why or why not? Do you have a good idea for how to reduce the work done with the Kernighan-Lin algorithm for my problem?
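    On the c[a][b] question above: in the usual statement of Kernighan-Lin the cost of a non-existent edge is simply 0 (and 1 for every edge of an unweighted graph), so no artificial edges need to be created. Below is a compact sketch of the D-value and gain computation on an adjacency matrix, the part of the pseudocode that tends to cause confusion; the 4-vertex example is made up.

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Linq;

    class KernighanLinSketch
    {
        // cost[i, j] = edge weight between vertices i and j; 0 means "no edge",
        // which is the usual convention for c[a][b] in Kernighan-Lin.
        static double D(int v, HashSet<int> own, HashSet<int> other, double[,] cost)
        {
            double external = other.Sum(u => cost[v, u]);             // E[v]: cut cost contributed by v
            double internalCost = own.Sum(u => u == v ? 0.0 : cost[v, u]); // I[v]
            return external - internalCost;                            // D[v] = E[v] - I[v]
        }

        // Gain of swapping a (in A) with b (in B): g = D[a] + D[b] - 2*c[a][b].
        static double Gain(int a, int b, HashSet<int> A, HashSet<int> B, double[,] cost)
        {
            return D(a, A, B, cost) + D(b, B, A, cost) - 2 * cost[a, b];
        }

        static void Main()
        {
            // Tiny example: a path 0-1-2-3 with unit edge weights.
            double[,] cost = new double[4, 4];
            cost[0, 1] = cost[1, 0] = 1;
            cost[1, 2] = cost[2, 1] = 1;
            cost[2, 3] = cost[3, 2] = 1;

            var A = new HashSet<int> { 0, 2 };
            var B = new HashSet<int> { 1, 3 };

            // One step of the pseudocode's inner loop: pick the swap with maximal gain.
            var best = (from a in A from b in B select new { a, b, g = Gain(a, b, A, B, cost) })
                       .OrderByDescending(x => x.g).First();
            Console.WriteLine("swap {0} and {1}, gain {2}", best.a, best.b, best.g); // swap 2 and 1, gain 2
        }
    }
    ```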

    Read the article

  • How to configure maximum number of transport channels in WCF using basicHttpBinding?

    - by Hemant
    Consider the following code, which is essentially a WCF host: [ServiceContract (Namespace = "http://www.mightycalc.com")] interface ICalculator { [OperationContract] int Add (int aNum1, int aNum2); } [ServiceBehavior (InstanceContextMode = InstanceContextMode.PerCall)] class Calculator: ICalculator { public int Add (int aNum1, int aNum2) { Thread.Sleep (2000); //Simulate a lengthy operation return aNum1 + aNum2; } } class Program { static void Main (string[] args) { try { using (var serviceHost = new ServiceHost (typeof (Calculator))) { var httpBinding = new BasicHttpBinding (BasicHttpSecurityMode.None); serviceHost.AddServiceEndpoint (typeof (ICalculator), httpBinding, "http://172.16.9.191:2221/calc"); serviceHost.Open (); Console.WriteLine ("Service is running. ENJOY!!!"); Console.WriteLine ("Type 'stop' and hit enter to stop the service."); Console.ReadLine (); if (serviceHost.State == CommunicationState.Opened) serviceHost.Close (); } } catch (Exception e) { Console.WriteLine (e); Console.ReadLine (); } } } The WCF client program is: class Program { static int COUNT = 0; static Timer timer = null; static void Main (string[] args) { var threads = new Thread[10]; for (int i = 0; i < threads.Length; i++) { threads[i] = new Thread (Calculate); threads[i].Start (null); } timer = new Timer (o => Console.WriteLine ("Count: {0}", COUNT), null, 1000, 1000); Console.ReadLine (); timer.Dispose (); } static void Calculate (object state) { var c = new CalculatorClient ("BasicHttpBinding_ICalculator"); c.Open (); while (true) { try { var sum = c.Add (2, 3); Interlocked.Increment (ref COUNT); } catch (Exception ex) { Console.WriteLine ("Error on thread {0}: {1}", Thread.CurrentThread.Name, ex.GetType ()); break; } } c.Close (); } } Basically, I am creating 10 proxy clients and then repeatedly calling the Add service method. Now, if I run both applications and observe the opened TCP connections using netstat, I find that: If both client and server run on the same machine, the number of TCP connections equals the number of proxy objects, which means all requests are being served in parallel. Which is good. If I run the server on a separate machine, at most 2 TCP connections are opened regardless of the number of proxy objects I create; only 2 requests run in parallel, which hurts processing speed badly. If I switch to the net.tcp binding, everything works fine (a separate TCP connection for each proxy object even when they run on different machines). I am very confused and unable to make basicHttpBinding use more TCP connections. I know it is a long question, but please help!

    Read the article

  • PyQt threads and signals - how to properly retrieve values

    - by Cawas
    Using Python 2.5 and PyQt, I couldn't find any question this specific for Python, so sorry if I'm repeating the other Qt-referenced questions below, but I couldn't easily understand that C code. I've got two classes, a GUI and a thread, and I'm trying to get return values from the thread. I've used the link in here as a base to write my code, which is working just fine. To sum it up and illustrate the question in code (I don't think this code will run on its own): class MainWindow (QtGui.QWidget): # this is just a reference and not really relevant to the question def __init__ (self, parent = None): QtGui.QWidget.__init__(self, parent) self.thread = Worker() # this does not begin a thread - look at "Worker.run" for more details self.connect(self.thread, QtCore.SIGNAL('finished()'), self.unfreezeUi) self.connect(self.thread, QtCore.SIGNAL('terminated()'), self.unfreezeUi) self.connect(self.buttonDaemon, QtCore.SIGNAL('clicked()'), self.pressDaemon) # the problem begins below: I'm not using signals, or queue, or whatever, while I believe I should def pressDaemon (self): self.buttonDaemon.setEnabled(False) if self.thread.isDaemonRunning(): self.thread.setDaemonStopSignal(True) self.buttonDaemon.setText('Daemon - converts every %s sec'% args['daemonInterval']) else: self.buttonConvert.setEnabled(False) self.thread.startDaemon() self.buttonDaemon.setText('Stop Daemon') self.buttonDaemon.setEnabled(True) # this whole class is just another reference class Worker (QtCore.QThread): daemonIsRunning = False daemonStopSignal = False daemonCurrentDelay = 0 def isDaemonRunning (self): return self.daemonIsRunning def setDaemonStopSignal (self, bool): self.daemonStopSignal = bool def __init__ (self, parent = None): QtCore.QThread.__init__(self, parent) self.exiting = False self.thread_to_run = None # which def will be running def __del__ (self): self.exiting = True self.thread_to_run = None self.wait() def run (self): if self.thread_to_run != None: self.thread_to_run(mode='continue') def startDaemon (self, mode = 'run'): if mode == 'run': self.thread_to_run = self.startDaemon # I'd love to be able to just pass this as an argument on start() below return self.start() # this will begin the thread # this is where the thread actually begins self.daemonIsRunning = True self.daemonStopSignal = False sleepStep = 0.1 # don't know how to interrupt while sleeping - so the less sleepStep, the faster StopSignal will work # begins the daemon in an "infinite" loop while self.daemonStopSignal == False and not self.exiting: # here, do any kind of daemon service delay = 0 while self.daemonStopSignal == False and not self.exiting and delay < args['daemonInterval']: time.sleep(sleepStep) # delay is actually set by while, but this holds for N second delay += sleepStep # daemon stopped, resetting everything self.daemonIsRunning = False self.emit(QtCore.SIGNAL('terminated')) Though it's quite big, I hope this is pretty clear. The main point is in def pressDaemon, specifically all 3 self.thread calls. The last one, self.thread.startDaemon(), is just fine and exactly as in the example; I doubt it represents any issue. The problem is being able to set the daemon stop signal and retrieve the value if it's running. I'm not sure it's possible to set a stop signal on QtCore.QThread, because I've tried doing it the same way and it didn't work. But I'm pretty sure it's not possible to retrieve a return result from the emit. So, there it is.
    I'm using direct calls to the thread class, and I'm almost positive that's not a good design and will probably fail when running under stress. I read about that queue, but I'm not sure it's the proper solution here, or whether I should be using Qt at all, since this is Python. And just maybe there's nothing wrong with the way I'm doing it.
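    A hedged sketch of the usual pattern for getting values out of a QThread (assuming PyQt4 and the old-style signals used above): the worker emits a signal that carries the value, and the GUI connects a slot to it, so neither side pokes at the other's attributes directly. The signal name daemonTick and the "work" done in run() are invented for illustration.

    import time
    from PyQt4 import QtCore, QtGui

    class Worker(QtCore.QThread):
        def __init__(self, parent=None):
            QtCore.QThread.__init__(self, parent)
            self.daemonStopSignal = False

        def run(self):
            while not self.daemonStopSignal:
                time.sleep(0.1)                       # stand-in for real daemon work
                result = time.time()                  # hypothetical value to hand back
                # queued, cross-thread delivery of the value to whoever is listening
                self.emit(QtCore.SIGNAL('daemonTick(PyQt_PyObject)'), result)

    class MainWindow(QtGui.QWidget):
        def __init__(self, parent=None):
            QtGui.QWidget.__init__(self, parent)
            self.thread = Worker()
            self.connect(self.thread, QtCore.SIGNAL('daemonTick(PyQt_PyObject)'),
                         self.onDaemonTick)
            self.thread.start()

        def onDaemonTick(self, result):
            # runs in the GUI thread, so it is safe to touch widgets here
            self.setWindowTitle('daemon reported %s' % result)

    Stopping the daemon can stay a plain attribute write (self.thread.daemonStopSignal = True), which is generally safe enough for a boolean flag; anything richer is better passed back through a signal like the one above rather than read directly off the thread object.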

    Read the article

  • How do I create a partial function with generics in scala?

    - by Matteo Caprari
    Hello. I'm trying to write a performance-measurement library for Scala. My idea is to transparently 'mark' sections so that the execution time can be collected. Unfortunately, I wasn't able to bend the compiler to my will. An admittedly contrived example of what I have in mind: // generate a timing function val myTimer = mkTimer('myTimer) // see how the timing function returns the right type depending on the // type of the function it is passed to it val act = actor { loop { receive { case 'Int => val calc = myTimer { (1 to 100000).sum } val result = calc + 10 // calc must be Int self reply (result) case 'String => val calc = myTimer { (1 to 100000).mkString } val result = calc + " String" // calc must be String self reply (result) } Now, this is the farthest I got: trait Timing { def time[T <: Any](name: Symbol)(op: => T) :T = { val start = System.nanoTime val result = op val elapsed = System.nanoTime - start println(name + ": " + elapsed) result } def mkTimer[T <: Any](name: Symbol) : (() => T) => () => T = { type c = () => T time(name)(_ : c) } } Using the time function directly works, and the compiler correctly uses the return type of the anonymous function to type the 'time' function: val bigString = time('timerBigString) { (1 to 100000).mkString("-") } println (bigString) As promising as it seems, this pattern has a number of shortcomings: it forces the user to reuse the same symbol at each invocation, it makes it more difficult to do more advanced things like predefined project-level timers, and it does not allow the library to initialize a data structure for 'timerBigString once. So here comes mkTimer, which would allow me to partially apply the time function and reuse it. I use mkTimer like this: val myTimer = mkTimer('aTimer) val myString= myTimer { (1 to 100000).mkString("-") } println (myString) But I get a compiler error: error: type mismatch; found : String required: () => Nothing (1 to 100000).mkString("-") I get the same error if I inline the currying: val timerBigString = time('timerBigString) _ val bigString = timerBigString { (1 to 100000).mkString("-") } println (bigString) This works if I do val timerBigString = time('timerBigString) (_: String), but this is not what I want. I'd like to defer typing of the partially applied function until application. I conclude that the compiler decides the return type of the partial function when I first create it, choosing "Nothing" because it can't make a better-informed choice. So I guess what I'm looking for is a sort of late binding of the partially applied function. Is there any way to do this? Or is there a completely different path I could follow? Well, thanks for reading this far -teo

    Read the article

  • How to create a compiler in vb.net

    - by Cyclone
    Before answering this question, understand that I am not asking how to create my own programming language; I am asking how, using VB.NET code, I can create a compiler for a language like VB.NET itself. Essentially, the user inputs code and they get a .exe. By NO MEANS do I want to write my own language, as other compiler-related questions on here seem to have asked. I also do not want to use the VB.NET compiler itself, nor do I wish to duplicate the IDE. The exact purpose of what I wish to do is rather hard to explain, but all I need is a nudge in the right direction for writing a compiler (from scratch if possible) that can simply take input and create a .exe. I have opened .exe files as plain text before (my own programs) to see if I could derive some meaning from what I assumed would be human-readable text, yet I was obviously sorely disappointed to see random ASCII, though it is understandable why that is all I found. I know that a .exe file is simply lines of code being parsed by the computer it is on, but my question here really boils down to this: What makes up a .exe? How could I go about making one in a plain-text editor if I wanted to? (No, I do not want to do that, but if I understand the process my goals will be much easier to achieve.) What makes an executable file an executable file? Where does the logic of the code fit in? This is intended to be a programming question as opposed to a computer question, which is why I did not post it on SuperUser. I know plenty about the System.IO namespace, so I know how to create a file and write to it; I simply do not know what exactly I would place inside this file to get it to work as an executable. I am sorry if this question is "confusing", "dumb", or "obvious", but I have not been able to find any information regarding the actual contents of an executable file anywhere. (Two links I found: one of my Google searches, and something that looked promising.) EDIT: The second link, while it looked good, was an utter fail. I am not going to waste hours of my time hitting keys and recording the results. "Use the "Alt" and the 3-digit combinations to create symbols that do not appear on the keyboard but that you need in the program." (step 4) How the heck do I know what symbols I need??? Thank you very much for your help, and my apologies if this question is a newbie or "bad" one. To sum this up simply: I want to create a program in VB.NET that can compile code in a particular language into a single executable file. What methods exist that allow me to do this, and if there are none, how can I go about writing my own from scratch?
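    For context on "what makes an executable file an executable file": a Windows .exe is not text at all but a binary container in the PE (Portable Executable) format, and the loader recognises it by fixed signatures, not by any source code inside. The sketch below (Python, purely illustrative; 'example.exe' is a placeholder path) reads the two signatures the loader checks first.

    import struct

    def describe_exe(path):
        with open(path, 'rb') as f:
            data = f.read(4096)          # the headers of a typical .exe fit in the first 4 KB
        if data[:2] != b'MZ':
            return 'not a PE file (missing MZ DOS signature)'
        # offset 0x3C of the DOS header stores where the PE header begins
        pe_offset = struct.unpack_from('<I', data, 0x3C)[0]
        if data[pe_offset:pe_offset + 4] != b'PE\x00\x00':
            return 'MZ stub found, but no PE signature'
        machine, num_sections = struct.unpack_from('<HH', data, pe_offset + 4)
        return 'PE file: machine=0x%04x, %d sections' % (machine, num_sections)

    print(describe_exe('example.exe'))

    The "logic of the code" lives in the sections that follow these headers (machine code or, for .NET programs, CIL plus metadata), which is why a compiler emits binary structures rather than characters typed in a text editor.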

    Read the article

  • Is it normal for a programmer with 2 years experience to take a long time to code simple programs?

    - by ajax81
    Hi all, I'm a relatively new programmer (18 months on the scene), and I'm finally getting to the point where I'm comfortable accepting projects and developing solutions under minimal supervision. Unfortunately, this also means that I've become acutely aware of my performance shortfalls, the most prevalent of which is the amount of time it takes me to develop, test, and submit algorithms for review. A great example of what I'm talking about occurred this week when I was tasked with developing a simple XML web service (ASP.NET 3.5) callable via client-side JavaScript that accepts a single parameter and returns a dataset output to a modal window (please note this is the first time I've had to develop a web service and have had ZERO experience creating or consuming them... let alone calling them client-side from JavaScript). To keep a long story short, I worked on it for 4 days straight, all day each day, for a grand total of 36 hours, not including the time I spent dwelling on the problem in the shower, on the morning commute, and lying awake in bed at night. I learned a great deal about web services and XML/JSON/JavaScript... but was called in for a management review to discuss the length of time it took me to develop the solution. In the meeting, I was praised for the quality of my work and was in fact told that my effort was commendable. However, they (senior leads and PMs) weren't impressed with the amount of time it took me to develop the solution and expressed that they would have liked to see the solution in roughly 1/3 of the time it took me. I guess what concerns me the most is that I've identified this pattern as common for myself. Between online videos, book research, and trial-and-error coding... if it's something I haven't seen before, I can spend up to two weeks on a problem that seems to take the pros in the videos only moments to code up. And of course, knowing that management isn't happy with this pattern has shaken me up a bit. To sum up, I have some very specific questions I'd like to ask, and would greatly appreciate your objective professional feedback. Is my experience as a junior programmer common among new developers? Or is it possible that I'm just not cut out for the work? If you suspect that my experience is not common and that there may be an aptitude issue, do you have any suggestions/solutions that I could propose to management to help bring me up to speed? Do seasoned, professional programmers ever encounter knowledge barriers that considerably delay deliverables? When you started out in the industry, did you know how to "do it all"? If not, how long did it take you to be perceived as "proficient"? Was it a natural progression of trial and error, or was there a particular zen moment when you knew you had achieved Super Saiyan power level? Anyway, thanks for taking the time to read my question(s). I don't know if this is the right place to ask for professional career guidance, but I greatly appreciate your willingness to help me out. Cheers, Daniel

    Read the article

  • R Install/Update on Mac OS X (Snow Leopard): where does R put files during install/config?

    - by doug
    In sum, there's a stray preference-like file or two (probably just one) that I can't find. Here's the whole story: I recently attempted to update my R install from 2.10 to 2.11. As I have done before, I installed from source. I know that all of the dependencies are correctly installed and made available to R, because my prior install worked fine. Since upgrading to 2.11, I am unable to install packages (no exception is thrown; the install just doesn't appear to complete and R becomes unresponsive, so I have to quit and restart it). Given that I install from source, there are any number of points in the process where I could have messed up. What I need to do now is "start over", which requires that I clear out my prior install. I am having trouble doing that. There is still at least one preference file or something like that I can't find, and one of these is causing the problem, so I need to find it and terminate it with extreme prejudice before I do a fresh install. Although I set a number of flags during the install, I have never opted out of the default install locations during the config step. There has to be one or more preference files still in my file structure (accessible to the new install of R), because after I follow all of the steps below and then do a fresh install, when I start R for the first time some of my preferences have persisted (e.g., quartz settings, GUI background color, editor selection, etc.). Again, the problem is that I just cannot locate those files. Finally, the problem can't be that during my last install from source I inadvertently caused a preference file to be sent to an off-spec location, because again, a fresh R install (whether from source or from the OS X binaries) is finding those files. Here's what I've done prior to attempting a clean install of R: Removed files from these locations: ~/.RData ~/.RHistory /Applications/R64.app /Applications/R.app /Library/Frameworks/R.framework (I also removed all symlinks to these). Cleared all RAM and disk caches, in particular the directory where I know R caches: ~/Library/Caches/R* (in fact I've cleared this entire directory). Checked for all 'hidden' files in the OS X directories where login/startup files are often placed: /etc/ ~/ In addition, I've checked R-help, and I've also read through the relevant portions of 'R Installation and Administration'--no luck. I've also searched my file structure using the various bash utilities, which nearly always solves problems of this sort quite easily, but in this case searching by name or even pattern is obviously more problematic.
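    A hedged sketch of a quick sweep over the usual per-user spots where R and the OS X R.app GUI keep settings. The paths listed are likely candidates rather than a guaranteed complete list; in particular, GUI preferences (background colour, editor choice, quartz settings) normally live in a plist under ~/Library/Preferences rather than next to .RData, which could explain why they survive the cleanup above.

    import glob
    import os

    candidates = [
        '~/.Rprofile', '~/.Renviron', '~/.Rapp.history',
        '~/Library/Preferences/org.R-project.R.plist',   # assumed name of the R.app prefs plist
        '~/Library/R', '~/Library/Caches/org.R-project.R',
        '/Library/Frameworks/R.framework',
    ]

    for pattern in candidates:
        for path in glob.glob(os.path.expanduser(pattern)):
            print(path)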

    Read the article

  • Can't create file in Ada 95

    - by duder
    Hello, I'm trying to follow a standard reference for opening files, but I'm running into a Constraint_Error at the line where I call Ada.Text_IO.Create(). It says "range check failed". Any help appreciated; here's the code: WITH Ada.Text_IO; WITH Ada.Integer_Text_IO; USE Ada.Text_IO; USE Ada.Integer_Text_IO; PROCEDURE FileManip IS --Variables Start_Int : Integer; Stop_Int : Integer; Max_Length : Integer; --Output File MaxName : CONSTANT Positive := 80; SUBTYPE NameRange IS Positive RANGE 1..MaxName; OutFileName : String(NameRange) := (OTHERS => '#'); OutNameLength : NameRange; OutData : File_Type; --Array TYPE Chain_Array IS ARRAY(1..500) OF Integer; Sum : Integer := 1; BEGIN --Inputs Ada.Text_IO.Put(Item => "Enter a starting Integer: "); Ada.Integer_Text_IO.Get(Item => Start_Int); Ada.Text_IO.New_Line; Ada.Text_IO.Put(Item => "Enter a stopping Integer: "); Ada.Integer_Text_IO.Get(Item => Stop_Int); Ada.Text_IO.New_Line; Ada.Text_IO.Put(Item => "Enter a Maximum Length to search: "); Ada.Integer_Text_IO.Get(Item => Max_Length); Ada.Text_IO.New_Line; Ada.Text_IO.Put(Item => "Enter a output file name > "); Ada.Text_IO.Get_Line( Item => OutFileName, Last => OutNameLength); Ada.Text_IO.Create( File => OutData, Mode => Ada.Text_IO.Out_File, Name => OutFileName(1..OutNameLength)); Ada.Text_IO.New_Line;

    Read the article

  • Data aggregation mongodb vs mysql

    - by Dimitris Stefanidis
    I am currently researching a backend to use for a project with demanding data-aggregation requirements. The main project requirements are the following. Store millions of records for each user: users might have more than 1 million entries per year, so even with 100 users we are talking about 100 million entries per year. Data aggregation on those entries must be performed on the fly: the users need to be able to filter the entries by a ton of available filters and then see summaries (totals, averages, etc.) and graphs of the results. Obviously I cannot precalculate any of the aggregation results, because the filter combinations (and thus the result sets) are huge. Users will have access to their own data only, but it would be nice if anonymous stats could be calculated across all the data. Most of the time the data will arrive in batches; e.g., the user will upload data every day, perhaps around 3,000 records at a time. In some later version there could be automated programs that upload every few minutes in smaller batches of, say, 100 items. I ran a simple test: I created a table with 1 million rows and performed a simple sum of one column, both in MongoDB and in MySQL, and the performance difference was huge. I do not remember the exact numbers, but it was something like MySQL = 200 ms, MongoDB = 20 sec. I also ran the test with CouchDB and had much worse results. What seems promising speed-wise is Cassandra, which I was very enthusiastic about when I first discovered it. However, the documentation is scarce and I haven't found any solid examples of how to perform sums and other aggregate functions on the data. Is that possible? As it seems from my test (maybe I did something wrong), with the current performance it is impossible to use MongoDB for such a project, although the automatic sharding functionality seems like a perfect fit for it. Does anybody have experience with data aggregation in MongoDB, or have any insights that might help with the implementation of the project? Thanks, Dimitris
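    For what it's worth, a hedged sketch of the "sum one column over a million rows" test using pymongo and MongoDB's aggregation framework ($group/$sum). Note the aggregation framework only exists in MongoDB 2.2 and later; older versions had to fall back on map/reduce or client-side iteration, which is a large part of why ad hoc aggregates compared so poorly with MySQL's SUM() at the time. The database, collection, and field names (testdb, entries, amount, user_id) are invented for illustration.

    from pymongo import MongoClient

    client = MongoClient('localhost', 27017)
    entries = client.testdb.entries

    pipeline = [
        {'$match': {'user_id': 42}},                              # per-user filtering
        {'$group': {'_id': None, 'total': {'$sum': '$amount'}}},  # server-side sum
    ]
    result = list(entries.aggregate(pipeline))
    print(result[0]['total'] if result else 0)

    Whether this is fast enough still depends on indexes supporting the $match stage and on how many documents survive the filters, so it is worth benchmarking against the equivalent MySQL query on realistic data rather than trusting a single micro-test.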

    Read the article

  • How to optimize this mysql query - explain output included

    - by Sandeepan Nath
    This is the query (basically a search query, based on tags): select SUM(DISTINCT(ttagrels.id_tag in (2105,2120,2151,2026,2046) )) as key_1_total_matches, td.*, u.* from Tutors_Tag_Relations AS ttagrels Join Tutor_Details AS td ON td.id_tutor = ttagrels.id_tutor JOIN Users as u on u.id_user = td.id_user where (ttagrels.id_tag in (2105,2120,2151,2026,2046)) group by td.id_tutor HAVING key_1_total_matches = 1 And the following is the database dump needed to execute this query: CREATE TABLE IF NOT EXISTS `Users` ( `id_user` int(10) unsigned NOT NULL auto_increment, `id_group` int(11) NOT NULL default '0', PRIMARY KEY (`id_user`), KEY `Users_FKIndex1` (`id_group`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=730 ; INSERT INTO `Users` (`id_user`, `id_group`) VALUES (303, 1); CREATE TABLE IF NOT EXISTS `Tutor_Details` ( `id_tutor` int(10) unsigned NOT NULL auto_increment, `id_user` int(10) NOT NULL default '0', PRIMARY KEY (`id_tutor`), KEY `Users_FKIndex1` (`id_user`), KEY `id_user` (`id_user`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=58 ; INSERT INTO `Tutor_Details` (`id_tutor`, `id_user`) VALUES (26, 303); CREATE TABLE IF NOT EXISTS `Tags` ( `id_tag` int(10) unsigned NOT NULL auto_increment, `tag` varchar(255) default NULL, PRIMARY KEY (`id_tag`), UNIQUE KEY `tag` (`tag`), KEY `id_tag` (`id_tag`), KEY `tag_2` (`tag`), KEY `tag_3` (`tag`), KEY `tag_4` (`tag`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=2957 ; INSERT INTO `Tags` (`id_tag`, `tag`) VALUES (2026, 'Brendan.\nIn'), (2046, 'Brendan.'), (2105, 'Brendan'), (2120, 'Brendan''s'), (2151, 'Brendan)'); CREATE TABLE IF NOT EXISTS `Tutors_Tag_Relations` ( `id_tag` int(10) unsigned NOT NULL default '0', `id_tutor` int(10) unsigned default NULL, `tutor_field` varchar(255) default NULL, `cdate` timestamp NOT NULL default CURRENT_TIMESTAMP, `udate` timestamp NULL default NULL, KEY `Tutors_Tag_Relations` (`id_tag`), KEY `id_tutor` (`id_tutor`), KEY `id_tag` (`id_tag`), KEY `id_tutor_2` (`id_tutor`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; INSERT INTO `Tutors_Tag_Relations` (`id_tag`, `id_tutor`, `tutor_field`, `cdate`, `udate`) VALUES (2105, 26, 'firstname', '2010-06-17 17:08:45', NULL); ALTER TABLE `Tutors_Tag_Relations` ADD CONSTRAINT `Tutors_Tag_Relations_ibfk_2` FOREIGN KEY (`id_tutor`) REFERENCES `Tutor_Details` (`id_tutor`) ON DELETE NO ACTION ON UPDATE NO ACTION, ADD CONSTRAINT `Tutors_Tag_Relations_ibfk_1` FOREIGN KEY (`id_tag`) REFERENCES `Tags` (`id_tag`) ON DELETE NO ACTION ON UPDATE NO ACTION; What does the query do? It searches for tutors that match "Brendan" (in their name or biography or similar). The id_tags 2105, 2120, 2151, 2026, 2046 are simply the tags that are LIKE "%Brendan%". My questions are: 1. In the EXPLAIN of this query, the ref column shows NULL for ttagrels, but there are possible keys (Tutors_Tag_Relations, id_tutor, id_tag, id_tutor_2). So why is no key being used, and how can I make the query use one? Is it possible at all? 2. The other two tables, td and u, are using references. Is any indexing needed on those? I think not. Check the EXPLAIN output here: http://www.test.examvillage.com/explain.png

    Read the article

  • Error using `loess.smooth` but not `loess` or `lowess`

    - by Sandy
    I need to smooth some simulated data, but occasionally run into problems when the simulated ordinates to be smoothed are mostly the same value. Here is a small reproducible example of the simplest case. > x <- 0:50 > y <- rep(0,51) > loess.smooth(x,y) Error in simpleLoess(y, x, w, span, degree, FALSE, FALSE, normalize = FALSE, : NA/NaN/Inf in foreign function call (arg 1) loess(y~x), lowess(x,y), and their analogue in MATLAB produce the expected results without error on this example. I am using loess.smooth here because I need the estimates evaluated at a set number of points. According to the documentation, I believe loess.smooth and loess use the same estimation functions, but the former is an "auxiliary function" to handle the evaluation points. The error seems to come from a C function: > traceback() 3: .C(R_loess_raw, as.double(pseudovalues), as.double(x), as.double(weights), as.double(weights), as.integer(D), as.integer(N), as.double(span), as.integer(degree), as.integer(nonparametric), as.integer(order.drop.sqr), as.integer(sum.drop.sqr), as.double(span * cell), as.character(surf.stat), temp = double(N), parameter = integer(7), a = integer(max.kd), xi = double(max.kd), vert = double(2 * D), vval = double((D + 1) * max.kd), diagonal = double(N), trL = double(1), delta1 = double(1), delta2 = double(1), as.integer(0L)) 2: simpleLoess(y, x, w, span, degree, FALSE, FALSE, normalize = FALSE, "none", "interpolate", control$cell, iterations, control$trace.hat) 1: loess.smooth(x, y) loess also calls simpleLoess, but with what appear to be different arguments. Of course, if you vary enough of the y values to be nonzero, loess.smooth runs without error, but I need the program to run even in the most extreme case. Hopefully someone can help me with one or all of the following: Understand why only loess.smooth, and not the other functions, produces this error, and find a solution for this problem. Find a work-around using loess that still evaluates the estimate at a specified number of points that can differ from the vector x. For example, I might want to use only x <- seq(0,50,10) in the smoothing, but evaluate the estimate at x <- 0:50. As far as I know, using predict with a new data frame will not properly handle this situation, but please let me know if I am missing something there. Handle the error in a way that doesn't stop the program from moving on to the next simulated data set. Thanks in advance for any help on this problem.

    Read the article
