Search Results

Search found 21392 results on 856 pages for 'order of operations'.

  • UI-Automation cmdlet not finding the control

    - by sc_ray
    Hi, I am trying to test a WPF application using the UI Automation framework that Microsoft provides. A few PowerShell scripts invoke cmdlets we wrote to manipulate the application's visual controls. The application has a drop-down containing an entry 'DropDownEntry', and in my cmdlet I am trying to do something like the following:

        AutomationElement getItem = DropDown.FindFirst(TreeScope.Descendants,
            new AndCondition(
                new PropertyCondition(AutomationElement.ControlTypeProperty, ControlType.ListItem),
                new PropertyCondition(AutomationElement.NameProperty, "DropDownEntry",
                                      PropertyConditionFlags.IgnoreCase)));

    The snippet above returns null, meaning it could not find my drop-down entry. Can somebody tell me why this might be happening? I have checked the control's name and property values and everything seems to be in order, so I am not sure why this fails. Any help would be much appreciated. Thanks
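
    A likely culprit: WPF combo-box items are often virtualized and simply absent from the automation tree until the popup has been opened. A minimal sketch of working around that, assuming DropDown supports ExpandCollapsePattern (the Sleep is a crude stand-in for a proper wait):

        // Expand the combo box so its ListItem children actually exist, then search.
        var expander = (ExpandCollapsePattern)DropDown.GetCurrentPattern(ExpandCollapsePattern.Pattern);
        expander.Expand();
        System.Threading.Thread.Sleep(250); // give the popup time to populate
        AutomationElement getItem = DropDown.FindFirst(TreeScope.Descendants,
            new AndCondition(
                new PropertyCondition(AutomationElement.ControlTypeProperty, ControlType.ListItem),
                new PropertyCondition(AutomationElement.NameProperty, "DropDownEntry",
                                      PropertyConditionFlags.IgnoreCase)));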

  • Eclipse - no Java (JRE) / (JDK) ... no virtual machine...

    - by Wallter
    I am trying to get Eclipse Galileo to run again on my computer - I have run it before with no problems, but now I keep getting this error: "A Java Runtime Environment (JRE) or Java Development Kit (JDK) must be available in order to run Eclipse. No Java virtual machine was found after searching the following locations: C:\eclipse\jre\javaw.exe; javaw.exe in your current PATH." I have just done a fresh install of both the JDK and the SDK, and I am on Windows 7 (x64). What is going on, and how do I fix it? UPDATE: I can't run ipconfig / tracert / ping either.
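
    The UPDATE is a strong hint that the system PATH is broken (ipconfig and friends live in C:\Windows\System32, which should always be on it), so checking the PATH environment variable is the first step. Independently of that, Eclipse can be pointed at a JVM explicitly in eclipse.ini; a sketch, assuming a JDK installed at the path shown (adjust to the real one) - the -vm entry must be two separate lines and appear before -vmargs:

        -vm
        C:\Program Files\Java\jdk1.6.0_20\bin\javaw.exe

    Note that a 64-bit Eclipse needs a 64-bit JVM, and a 32-bit Eclipse a 32-bit one.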

  • Catching an exception when trying to launch a URL service from within Safari on an iPhone

    - by Ariel
    Hey guys, I was wondering if anyone knows - if I have an iPhone app that is registered with a URL scheme (e.g. alocola://), which means another app can invoke it by calling its URL - is there any way to embed this URL in an HTML page and catch the failure if the app is not installed on the user's iPhone? In other words: I would like to write an app that responds to a URL invocation (like alocola:// does), but have it launched from a web page viewed in mobile Safari. However, if the app is not installed on the user's iPhone, I would like to display a note saying "you must have the alocola app installed on your iPhone, please download it from the App Store by clicking here". Is there a way (perhaps using JavaScript) to have Safari indicate that the URL cannot be invoked, and catch this in order to display an intelligent message? Thanks :-) Ariel
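
    There is no real exception to catch in Safari; the usual workaround is a timer heuristic - navigate to the custom scheme, and if the page is still executing shortly afterwards, assume the app is missing. A sketch (the alocola:// scheme from the question; the element id is hypothetical):

        function launchOrExplain() {
            window.location = "alocola://open";
            setTimeout(function () {
                // If the app launched, Safari was suspended and this timer fires
                // late or not at all; if we get here promptly, the scheme failed.
                document.getElementById("install-note").style.display = "block";
            }, 2000);
        }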

  • J2me Blackberry Numeric Input

    - by Paul
    Hello, I am developing a BlackBerry application using J2ME and LWUIT (the BlackBerry port). Everything works great except for the TextField in numeric mode. Basically, when the TextField has focus you first have to switch into "NUMERIC" mode (by pressing alt + aA) before you can type digits, which is not user-friendly and is a problem. The proposed solution is to use a TextArea instead, which lets you open a NATIVE-type input box. The problem there is that the user needs to focus the field and then press the fire button, which again is unfriendly. Does anyone know of any simple solutions? The few solutions I have in mind (but am not sure how to implement) are: 1) capture any keypress on the TextArea and go into NATIVE mode, instead of just the fire key; 2) put the BlackBerry input mode into numeric through code for the whole form. Any advice will be appreciated. Many thanks, Paul
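
    For option 1, a sketch of the idea - assuming the BlackBerry port exposes Display.editString() the way the standard LWUIT distribution does (worth verifying against the port's sources): subclass TextArea and open the native editor on any key, not just fire.

        import com.sun.lwuit.Display;
        import com.sun.lwuit.TextArea;

        public class NumericArea extends TextArea {
            public NumericArea() {
                setConstraint(TextArea.NUMERIC);
            }

            public void keyPressed(int keyCode) {
                // Jump straight into the native input box on any keypress
                // instead of waiting for the fire button.
                Display.getInstance().editString(this, getMaxSize(),
                                                 TextArea.NUMERIC, getText());
            }
        }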

  • ruby on rails named scope implementation

    - by Engwan
    From the book Agile Web Development With Rails:

        class Order < ActiveRecord::Base
          named_scope :last_n_days, lambda { |days|
            {:conditions => ['updated < ?', days]}
          }
          named_scope :checks, :conditions => {:pay_type => :check}
        end

    The statement orders = Orders.checks.last_n_days(7) results in only one query to the database. How does Rails implement this? I'm new to Ruby and I'm wondering if there's a special construct that allows this to happen. To be able to chain methods like that, the functions generated by named_scope must return themselves or an object that can be scoped further. But how does Ruby know that it is the last function call and that it should query the database now? I ask this because the statement above actually queries the database, rather than just returning an SQL statement built up from the chain.
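
    The trick is that named_scope never runs SQL at all: each call returns a proxy that merges conditions, and the query only fires when the records are enumerated. A stripped-down sketch of the idea (hypothetical classes, not the real Rails internals):

        class OrderScope
          def initialize(conditions = [])
            @conditions = conditions
          end

          def checks
            OrderScope.new(@conditions + ["pay_type = 'check'"])  # no query yet
          end

          def last_n_days(days)
            OrderScope.new(@conditions + ["updated < #{days}"])   # still no query
          end

          def each(&block)
            # Only now is the SQL built and executed.
            sql = "SELECT * FROM orders WHERE #{@conditions.join(' AND ')}"
            execute(sql).each(&block)
          end

          private

          def execute(sql)
            puts sql   # stand-in for the database adapter call
            []
          end
        end

    So Ruby never knows a call is "last" - the chain just accumulates state. It only looks eager in the console because IRB calls inspect on the result, which itself triggers the load.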

  • Subsonic 3.0 Query limit with MySQL c#.net LinQ

    - by omegawkd
    Hello, a quick question which may or may not be easily answered. Currently, in order to return a limited result set to my calling reference using SubSonic, I use a query like the one below and select the first item from the data set:

        _DataSet = from CatSet in t2_aspnet_shopping_item_category.All()
                   join CatProdAssignedLink in t2_aspnet_shopping_link_categoryproduct.All()
                       on CatSet.CategoryID equals CatProdAssignedLink.CategoryID
                   join ProdSet in t2_aspnet_shopping_item_product.All()
                       on CatProdAssignedLink.ProductID equals ProdSet.ProductID
                   where ProdSet.ProductID == __ProductID
                   orderby CatProdAssignedLink.LinkID ascending
                   select CatSet;

    Is there a way to limit the lookup to a certain number of rows up front? I'm using MySQL as the base database.
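
    A sketch of one way to do it, reusing the query from the question: SubSonic 3's LINQ provider exposes IQueryable, so the standard Take operator should cap the rows fetched - for MySQL it ought to be rendered as a LIMIT clause, which is worth confirming against the generated SQL.

        _DataSet = (from CatSet in t2_aspnet_shopping_item_category.All()
                    join CatProdAssignedLink in t2_aspnet_shopping_link_categoryproduct.All()
                        on CatSet.CategoryID equals CatProdAssignedLink.CategoryID
                    join ProdSet in t2_aspnet_shopping_item_product.All()
                        on CatProdAssignedLink.ProductID equals ProdSet.ProductID
                    where ProdSet.ProductID == __ProductID
                    orderby CatProdAssignedLink.LinkID ascending
                    select CatSet).Take(1);   // only fetch the first row

    (.FirstOrDefault() on the same query achieves the same thing for the single-item case.)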

  • JPA + Hibernate + Named Query + how to JOIN a subquery result

    - by Srihari
    Can anybody help me in converting the following native query into a named query?

        SELECT usr1.user_id, urr1.role_id, usr2.user_id, urr2.role_id,
               usr1.school_id, term.term_name,
               COUNT(material.material_id) AS "Total Book Count",
               fpc.FOLLETT_PENDING_COUNT AS "Follett Pending Count",
               rrc.RESOLUTION_REQUIRED_COUNT AS "Resolution Required Count"
        FROM va_school sch
        JOIN va_user_school_rel usr1 ON sch.school_id = usr1.school_id
        JOIN va_user_role_rel urr1 ON usr1.user_id = urr1.user_id AND urr1.role_id = 1001
        JOIN va_user_school_rel usr2 ON sch.school_id = usr2.school_id
        JOIN va_user_role_rel urr2 ON usr2.user_id = urr2.user_id AND urr2.role_id = 1002
        JOIN va_term term ON term.school_id = usr1.school_id
        JOIN va_class course ON course.term_id = term.term_id
        JOIN va_material material ON material.class_id = course.class_id
        LEFT JOIN (SELECT VA_CLASS.TERM_ID AS "TERM_ID",
                          COUNT(*) AS "FOLLETT_PENDING_COUNT"
                   FROM VA_CLASS
                   JOIN VA_MATERIAL ON VA_MATERIAL.CLASS_ID = VA_CLASS.CLASS_ID
                   WHERE VA_CLASS.reference_flag = 'A'
                     AND trunc(VA_MATERIAL.FOLLETT_STATUS) = 0
                   GROUP BY VA_CLASS.TERM_ID) fpc ON term.term_id = fpc.term_id
        LEFT JOIN (SELECT VA_CLASS.TERM_ID AS "TERM_ID",
                          COUNT(*) AS "RESOLUTION_REQUIRED_COUNT"
                   FROM VA_CLASS
                   JOIN VA_MATERIAL ON VA_MATERIAL.CLASS_ID = VA_CLASS.CLASS_ID
                   WHERE VA_CLASS.reference_flag = 'A'
                     AND trunc(VA_MATERIAL.FOLLETT_STATUS) = 1
                   GROUP BY VA_CLASS.TERM_ID) rrc ON term.term_id = rrc.term_id
        WHERE course.reference_flag = 'A'
        GROUP BY usr1.user_id, urr1.role_id, usr2.user_id, urr2.role_id,
                 usr1.school_id, term.term_name,
                 fpc.FOLLETT_PENDING_COUNT, rrc.RESOLUTION_REQUIRED_COUNT
        ORDER BY usr1.school_id, term.term_name;

    Thanks in advance. Srihari
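
    One thing to note before translating: JPQL has no equivalent of a subquery in the FROM clause, so this statement cannot be rewritten as a JPQL named query as-is. A common approach (a sketch; the query and mapping names are hypothetical) is to register the SQL verbatim as a named native query on an entity and map the columns:

        @NamedNativeQuery(
            name = "SchoolBookCounts.findAll",
            query = "SELECT usr1.user_id, ... ORDER BY usr1.school_id, term.term_name", // the full SQL above
            resultSetMapping = "schoolBookCountsMapping")

        // usage:
        List results = em.createNamedQuery("SchoolBookCounts.findAll").getResultList();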

  • Python: Getting INVALID response from PayPal's Sandbox IPN, slowly going insane...

    - by thepeanut
    Hi All, I am trying to implement a simple online payment system using PayPal, however I have tried everything I know and am still getting an INVALID response. I know it's nothing too basic, because I get a VERIFIED response when using the IPN simulator. I have tried putting the items into a dict first, I have tried fixing the encoding, and still nothing. PayPal says the reasons for an INVALID response could be:

    - Sending the wrong items, or in the wrong order (pretty sure it's not this)
    - Sending to the wrong address (definitely not this)
    - Encoding items incorrectly (I don't think it's this; encoding is set to UTF-8 on both the PayPal side and in my script)

    The following is the snippet concerned:

        f = cgi.FieldStorage()
        newparams = 'cmd=_notify-validate'
        for key in f.keys():
            val = f[key].value
            newparams += '&' + urlencode({key: val.encode('utf-8')})
        req = urllib2.Request(PP_URL, newparams)
        req.add_header("Content-type", "application/x-www-form-urlencoded")
        http = urllib2.urlopen(req)
        ret = http.read()
        fi.write(ret + '\n')
        if ret == 'VERIFIED':
            # *do stuff*

    Can anyone suggest anything I can do to fix this? Cheers, Sam
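
    One suspect the list above glosses over: PayPal compares the echoed message against the exact bytes it posted, and rebuilding the body from cgi.FieldStorage can reorder or re-encode fields. A sketch that sidesteps that by echoing the raw POST body untouched (Python 2, to match the urllib2 code above; the endpoint URL is the usual sandbox one, worth verifying for your account):

        # NOTE: read stdin *before* constructing any cgi.FieldStorage -
        # parsing consumes the stream.
        import os, sys, urllib2

        PP_URL = "https://www.sandbox.paypal.com/cgi-bin/webscr"

        length = int(os.environ.get("CONTENT_LENGTH", 0))
        raw_body = sys.stdin.read(length)        # the IPN exactly as PayPal sent it

        req = urllib2.Request(PP_URL, "cmd=_notify-validate&" + raw_body)
        req.add_header("Content-type", "application/x-www-form-urlencoded")
        ret = urllib2.urlopen(req).read()
        if ret == "VERIFIED":
            pass  # do stuff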

  • Tomcat JAXB 1 and 2 linkage error

    - by Alex
    I'm running a Tomcat 6 / Spring / Apache CXF web service, and now I must add a third-party library to my webapp to fulfill an order. I have jaxb-impl-2.1.12.jar for Apache CXF in the WEB-INF/lib folder, and the new library contains the JAXB 1.0 runtime. JAXB 2 is used by Apache CXF for dynamic clients (I need them). So, is there a possibility to run the webapp with both libraries? Error:

        Caused by: java.lang.LinkageError: You are trying to run JAXB 2.0 runtime but
        you have old JAXB 1.0 runtime earlier in the classpath. Please remove the JAXB
        1.0 runtime for 2.0 runtime to work correctly.

  • Adding a sortcomparefunction to Dynamic Data Grid in flex

    - by Tom
    Hi, I am trying to create a dynamic datagrid in Flex 3. I have a list of columns and a list of objects which correspond to datapoints for those columns, which I fetch from a URL. While the grid works perfectly fine, the problem is that sorting on the columns is done in lexical order. I am aware that this can be fixed by adding a sortCompareFunction to a column, which is not easy in this case. I have tried:

        var dgc:DataGridColumn = new DataGridColumn(dtf);
        f1[dtf] = function(obj1:Object, obj2:Object):int {
            return Comparators.sortNumeric(obj1[dtf], obj2[dtf]);
        };
        dgc.sortCompareFunction = f1[dtf];

    But the problem is that the function object I am creating here is overwritten on every iteration (as I am adding columns), and eventually all the columns sort on the last column added. Suggestions please.
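
    The closures all share the single dtf binding - var is function-scoped in ActionScript 3, so by the time sorting runs, every function sees the last value. A factory function gives each column its own captured copy (a sketch, using Comparators.sortNumeric from the question):

        private function makeSortCompare(field:String):Function {
            return function(obj1:Object, obj2:Object):int {
                return Comparators.sortNumeric(obj1[field], obj2[field]);
            };
        }

        // inside the column-building loop:
        var dgc:DataGridColumn = new DataGridColumn(dtf);
        dgc.sortCompareFunction = makeSortCompare(dtf);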

  • Debugging problems in Visual Studio 2005 - No source code available for the current location

    - by espais
    Hi all, I've searched up and down Google for others with a similar problem, and while I can find the error I don't think other people have the same underlying problem that I do. Basically, I had to create a project for a unit-testing environment in order to run this test suite. First I add my original C file and compile, and then a test file (C++) is generated. I then exclude my original source from the project, include this test script (which includes the original source at the top), and run. I can debug the test file fine, but when it jumps to the original C file I get the dreaded 'no source code available for the current location' error. Both files are located in the same place, and I compiled the original file without any issue. Anybody have any thoughts about this? It's driving me crazy!

  • SQL SERVER – SSMS: Database Consistency History Report

    - by Pinal Dave
    Doctor and Database

    The last place I like to visit is always a hospital. With the monsoon season starting and intermittent rains, it has become sort of a routine to get a cycle of fever every other year (seriously, I hate it). So when I visit my doctor, it is always interesting how he quizzes me. The routine questions - "How many days have you had this?", "Is there any pattern?", "Did you get drenched in rain?", "Do you have any other symptom?" - and so on. The idea here is that the doctor wants to find an anomaly or a pattern that will guide him to a viral or bacterial type. Most of the time they get it based on experience, and sometimes after a battery of tests. So if there is consistent behavior to your problem, there is always a solution out there. SQL Server has its own way to find whether the server data and files are in a consistent state, using the DBCC commands.

    Back to SQL Server

    In real life, a database consistency check is one of the critical operations a DBA generally doesn't give much priority. Many readers of my blogs have asked many times: how do we know if the database is consistent? How do I read the output of DBCC CHECKDB and find out whether everything is right or not? My common answer to all of them is to look at the bottom of the checkdb (or checktable) output and look for the line below:

        CHECKDB found 0 allocation errors and 0 consistency errors in database 'DatabaseName'.

    The above is a "good sign" because we are seeing zero allocation and zero consistency errors. If you are seeing non-zero errors then there is some problem with the database. Sample output is shown below:

        CHECKDB found 0 allocation errors and 2 consistency errors in database 'DatabaseName'.
        repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (DatabaseName).

    If we see non-zero errors then most of the time (not always) we get repair options, depending on the level of corruption. There is risk involved with the above option (repair_allow_data_loss): we would lose data. Sometimes the option would be repair_rebuild, which is a little safer. Though these options are available, it is important to find the root cause of the problem.

    Among the standard reports there is one which can show the history of checkdb executions for the selected database. Since this is a database-level report, we need to right-click on the database, click Reports, click Standard Reports and then choose "Database Consistency History". The information in this report is picked from the default trace. If the default trace is disabled, or there has been no checkdb run, or the information is no longer in the default trace (because it rolled over), we would get a report that says it very clearly:

        Currently, no execution history of CHECKDB is available or default trace is not enabled.

    To demonstrate, I caused corruption in one of my databases and did the steps below:

    - Run CheckDB so that errors are reported.
    - Fix the corruption by losing the data, using the repair option.
    - Run CheckDB again to check whether the corruption is cleared.

    After that I launched the report to see the history. If you are lazy like me and don't want to run the report manually for each database, the query below is handy for producing the same report for all databases. This query runs behind the scenes when the report is generated; all I have done is remove the filter on database name (the commented-out filter at the end).

        DECLARE @curr_tracefilename VARCHAR(500);
        DECLARE @base_tracefilename VARCHAR(500);
        DECLARE @indx INT;

        SELECT @curr_tracefilename = path FROM sys.traces WHERE is_default = 1;
        SET @curr_tracefilename = REVERSE(@curr_tracefilename);
        SELECT @indx = PATINDEX('%\%', @curr_tracefilename);
        SET @curr_tracefilename = REVERSE(@curr_tracefilename);
        SET @base_tracefilename = LEFT(@curr_tracefilename, LEN(@curr_tracefilename) - @indx) + '\log.trc';

        SELECT SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), 36,
                   PATINDEX('%executed%', TEXTData) - 36) AS command,
               LoginName,
               StartTime,
               CONVERT(INT, SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData),
                   PATINDEX('%found%', TEXTData) + 6,
                   PATINDEX('%errors %', TEXTData) - PATINDEX('%found%', TEXTData) - 6)) AS errors,
               CONVERT(INT, SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData),
                   PATINDEX('%repaired%', TEXTData) + 9,
                   PATINDEX('%errors.%', TEXTData) - PATINDEX('%repaired%', TEXTData) - 9)) AS repaired,
               SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData),
                   PATINDEX('%time:%', TEXTData) + 6,
                   PATINDEX('%hours%', TEXTData) - PATINDEX('%time:%', TEXTData) - 6) + ':' +
               SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData),
                   PATINDEX('%hours%', TEXTData) + 6,
                   PATINDEX('%minutes%', TEXTData) - PATINDEX('%hours%', TEXTData) - 6) + ':' +
               SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData),
                   PATINDEX('%minutes%', TEXTData) + 8,
                   PATINDEX('%seconds.%', TEXTData) - PATINDEX('%minutes%', TEXTData) - 8) AS time
        FROM ::fn_trace_gettable(@base_tracefilename, DEFAULT)
        WHERE EventClass = 22
          AND SUBSTRING(TEXTData, 36, 12) = 'DBCC CHECKDB'
        -- AND DatabaseName = @DatabaseName;

    Don't get worried about the logic above. All it is doing is reading the trace files and parsing the entry below, pulling out the highlighted pieces of information (who ran it, errors found, errors repaired, elapsed time):

        DBCC CHECKDB (CorruptedDatabase) executed by sa found 2 errors and repaired 0 errors.
        Elapsed time: 0 hours 0 minutes 0 seconds. Internal database snapshot has split point
        LSN = 00000029:00000030:0001 and first LSN = 00000029:00000020:0001.

    Hopefully from now on you will run checkdb and understand the importance of it. As responsible DBAs I am sure you are already doing it - let me know how often you actually run it in your production environment.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

  • .NET Security Part 2

    - by Simon Cooper
    So, how do you create partial-trust appdomains? Where do you come across them? There are two main situations in which your assembly runs as partially-trusted using the Microsoft .NET stack:

    - Creating a CLR assembly in SQL Server with anything other than the UNSAFE permission set. The permissions available in each permission set are given here.
    - Loading an assembly in ASP.NET in any trust level other than Full. Information on ASP.NET trust levels can be found here, and you can configure the specific permissions available to assemblies using ASP.NET policy files.

    Alternatively, you can create your own partially-trusted appdomain in code and directly control the permissions and the full-trust API available to the assemblies you load into the appdomain. This is the scenario I'll be concentrating on in this post.

    Creating a partially-trusted appdomain

    There is a single overload of AppDomain.CreateDomain that allows you to specify the permissions granted to assemblies in that appdomain - this one. It is the only call that allows you to specify a PermissionSet for the domain; all the other calls simply use the permissions of the calling code. If the permissions are restricted, the resulting appdomain is referred to as a sandboxed domain. There are three things you need to create a sandboxed domain:

    - The specific permissions granted to all assemblies in the domain.
    - The application base (aka working directory) of the domain.
    - The list of assemblies that have full-trust if they are loaded into the sandboxed domain.

    The third item is what allows us to have a fully-trusted API that is callable by partially-trusted code. I'll be looking at the details of this in a later post.

    Granting permissions to the appdomain

    Firstly, the permissions granted to the appdomain. This is encapsulated in a PermissionSet object, initialized either with no permissions or full-trust permissions. For sandboxed appdomains, the PermissionSet is initialized with no permissions, then you add the permissions you want assemblies loaded into that appdomain to have by default:

        PermissionSet restrictedPerms = new PermissionSet(PermissionState.None);

        // all assemblies need Execution permission to run at all
        restrictedPerms.AddPermission(
            new SecurityPermission(SecurityPermissionFlag.Execution));

        // grant general read access to C:\config.xml
        restrictedPerms.AddPermission(
            new FileIOPermission(FileIOPermissionAccess.Read, @"C:\config.xml"));

        // grant permission to perform DNS lookups
        restrictedPerms.AddPermission(
            new DnsPermission(PermissionState.Unrestricted));

    It's important to point out that the permissions granted to an appdomain, and so to all assemblies loaded into that appdomain, are usable without needing to go through any SafeCritical code (see my last post if you're unsure what SafeCritical code is). That is, partially-trusted code loaded into an appdomain with the above permissions (and so running under the Transparent security level) is able to create and manipulate a FileStream object to read from C:\config.xml directly. It is only for operations requiring permissions that are not granted to the appdomain that partially-trusted code is required to call a SafeCritical method that then asserts the missing permissions and performs the operation safely on behalf of the partially-trusted code.

    The application base of the domain

    This is simply set as a property on an AppDomainSetup object, and is used as the default directory assemblies are loaded from:

        AppDomainSetup appDomainSetup = new AppDomainSetup {
            ApplicationBase = @"C:\temp\sandbox",
        };

    If you've read the documentation around sandboxed appdomains, you'll notice that it mentions a security hole if this parameter is set incorrectly. I'll be looking at this, and other pitfalls that will break the sandbox, in a later post.

    Full-trust assemblies in the appdomain

    Finally, we need the strong names of the assemblies that, when loaded into the appdomain, will be run as full-trust, regardless of the permissions specified on the appdomain. These assemblies will contain methods and classes decorated with SafeCritical and Critical attributes. I'll be covering the details of creating full-trust APIs for partial-trust appdomains in a later post. This is how you get the strong name of an assembly to be executed as full-trust in the sandbox:

        // get the Assembly object for the assembly
        Assembly assemblyWithApi = ...

        // get the StrongName from the assembly's collection of evidence
        StrongName apiStrongName = assemblyWithApi.Evidence.GetHostEvidence<StrongName>();

    Creating the sandboxed appdomain

    So, putting these three together, you create the appdomain like so:

        AppDomain sandbox = AppDomain.CreateDomain(
            "Sandbox",
            null,
            appDomainSetup,
            restrictedPerms,
            apiStrongName);

    You can then load and execute assemblies in this appdomain like any other. For example, to load an assembly into the appdomain and get an instance of the Sandboxed.Entrypoint class, implementing IEntrypoint, you do this:

        IEntrypoint o = (IEntrypoint)sandbox.CreateInstanceFromAndUnwrap(
            @"C:\temp\sandbox\SandboxedAssembly.dll",
            "Sandboxed.Entrypoint");

        // call the Execute method on this object within the sandbox
        o.Execute();

    The second parameter to CreateDomain is for security evidence used in the appdomain. This was a feature of the .NET 2 security model, and has been (mostly) obsoleted in the .NET 4 model. Unless the evidence is needed elsewhere (e.g. isolated storage), you can pass in null for this parameter.

    Conclusion

    That's the basics of sandboxed appdomains. The most important object is the PermissionSet that defines the permissions available to assemblies running in the appdomain; it is this object that defines the appdomain as full or partial-trust. The appdomain also needs a default directory used for assembly lookups as the ApplicationBase parameter, and you can specify an optional list of the strong names of assemblies that will be given full-trust permissions if they are loaded into the sandboxed appdomain. Next time, I'll be looking closer at full-trust assemblies running in a sandboxed appdomain, and what you need to do to make an API available to partial-trust code.

  • creating a custom centered cyclic horizontal manager

    - by Hezi
    I am trying to create a custom cyclic horizontal manager which works as follows: it controls several button fields, positioned so that the focused button is always in the middle of the screen. As it is a cyclic manager, once the focus moves to the button on the right or left, that button moves to the center of the screen and all the other buttons shift accordingly (and the last button becomes the first, to give a cyclic, endless-list feeling). Any idea how to address this? I tried implementing a custom manager which aligns the buttons according to the required layout: each time moveFocus() is called I remove all fields (deleteAll()) and add them again in the right order. Unfortunately this does not work.
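
    A sketch of an alternative layout approach (net.rim.device.api.ui; class name hypothetical, details unverified): rather than deleteAll()/re-adding on every focus move, position the children directly in sublayout(), centering the focused field and wrapping its neighbours with modular arithmetic.

        import net.rim.device.api.ui.Field;
        import net.rim.device.api.ui.Manager;

        public class CyclicCenterManager extends Manager {
            public CyclicCenterManager() {
                super(Manager.HORIZONTAL_SCROLL);
            }

            protected void sublayout(int width, int height) {
                int count = getFieldCount();
                int focused = Math.max(getFieldWithFocusIndex(), 0);
                for (int offset = 0; offset < count; offset++) {
                    // map offsets 0,1,2,3,4... onto slots 0,+1,-1,+2,-2...
                    int slot = ((offset + 1) / 2) * (offset % 2 == 1 ? 1 : -1);
                    Field f = getField((focused + offset) % count);  // cyclic index
                    layoutChild(f, width, height);
                    setPositionChild(f, width / 2 - f.getWidth() / 2 + slot * f.getWidth(), 0);
                }
                setExtent(width, height);
            }
        }

    Calling invalidate() on focus change then re-runs the layout without any field churn.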

  • Trying to access App.config file for mail settings but fails to work.

    - by mw
    Hello, we have a Business Logic Layer which has an Email Services class. In this class we have a method which creates an email (this part works and compiles fine). However, when we try to access the App.config file in order to test the method, we get an error saying it can't retrieve the App.config mail settings, and all values come back null when they are not. Here is the App.config section for our code:

        <mailSettings>
          <smtp deliveryMethod="Network" from="[email protected]">
            <network host="localhost" port="25" defaultCredentials="true"/>
          </smtp>
        </mailSettings>

    Here is the code we use to connect to the App.config:

        private System.Net.Configuration.MailSettingsSectionGroup mailSettings;

        SmtpClient client = new SmtpClient(mailSettings.Smtp.Network.Host,
                                           mailSettings.Smtp.Network.Port);

    What are we doing wrong here?
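
    The mailSettings field above is declared but never assigned, so every member access on it fails. A sketch of populating it explicitly via System.Net.Configuration:

        using System.Configuration;
        using System.Net.Configuration;
        using System.Net.Mail;

        // Load the host application's config and pull out the mailSettings group.
        Configuration config =
            ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);
        MailSettingsSectionGroup mailSettings =
            (MailSettingsSectionGroup)config.GetSectionGroup("system.net/mailSettings");
        SmtpClient client = new SmtpClient(mailSettings.Smtp.Network.Host,
                                           mailSettings.Smtp.Network.Port);

    (A parameterless new SmtpClient() also reads that section automatically.) Note too that a class library reads the host application's .config at runtime, not its own App.config - a frequent surprise when testing a BLL from another project.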

  • OpenPGP Signing

    - by singpolyma
    I'm reading RFC 4880 in an attempt to produce an implementation of a subset of OpenPGP (RSA signatures) using http://phpseclib.sourceforge.net/. I have the public-key, compression, literal and signature packets parsed out. I can extract n and e and feed them to Crypt_RSA to construct a verifier, and I tell it I'm using SHA-256. It then needs a "message" and a "signature" parameter. I get the signature data out of the signature packet, no problem. The question I have is: what is "message"? According to section 5.2.4 it's some combination of the literal data packet(s?) (their bodies or the whole packet?) and the "hashed" subpackets. Do I just concat all the data packets and the hashed packets together in the order they appear?
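
    On my reading of RFC 4880 section 5.2.4, for a v4 signature over binary data what gets hashed is: the literal data itself (the packet body, not the packet header), then the signature packet from its version byte through the end of the hashed subpacket data (the unhashed subpackets are excluded), then a six-byte trailer. A PHP sketch of building the "message" under that reading (variable names are mine):

        // $literal = body of the literal data packet (the raw document bytes)
        // $sigBody = signature packet body from the version byte through the
        //            end of the hashed subpacket data (unhashed ones excluded)
        $trailer = "\x04\xff" . pack('N', strlen($sigBody));
        $message = $literal . $sigBody . $trailer;
        $hash    = hash('sha256', $message, true);

    Since Crypt_RSA's verify() hashes internally, $message (not $hash) is what you hand it alongside the signature.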

  • Windows Azure End to End Examples

    - by BuckWoody
    I’m fascinated by the way people learn. I’m told there are several methods people use to understand new information, from reading to watching, from experiencing to exploring. Personally, I use multiple methods of learning when I encounter a new topic, usually starting with reading a bit about the concepts. I quickly want to put those into practice, however, especially in the technical realm. I immediately look for examples where I can start trying out the concepts. But I often want a “real” example – not just something that represents the concept, but something that is real-world, showing some feature I could actually use. And it’s no different with the Windows Azure platform – I like finding things I can do now, and actually use. So when I started learning Windows Azure, I of course began with the Windows Azure Training Kit – which has lots of examples and labs, presentations and so on. But from there, I wanted more examples I could learn from, and eventually teach others with. I was asked if I would write a few of those up, so here are the ones I use.

    CodePlex

    CodePlex is Microsoft’s version of an “Open Source” repository. Anyone can start a project, add code, documentation and more to it and make it available to the world, free of charge, using various licenses as they wish. Microsoft also uses this location for most of the examples we publish, and sample databases for SQL Server. If you search in CodePlex for “Azure”, you’ll come back with a list of projects that folks have posted, including those of us at Microsoft. The source code and documentation are there, so you can learn using actual examples of code that will do what you need. There’s everything from a simple table query to a full project that is sort of a “Corporate Dropbox” that uses Windows Azure Storage. The advantage is that this code is immediately usable. It’s searchable, and you can often find a complete solution to meet your needs. The disadvantage is that the code is pretty specific – it may not cover a huge project like you’re looking for. Also, depending on the author(s), you might not find the documentation level you want.

    Link: http://azureexamples.codeplex.com/site/search?query=Azure&ac=8

    Tailspin

    Microsoft Patterns and Practices is a group here that does an amazing job at sharing standard ways of doing IT – from operations to coding. If you’re not familiar with this resource, make sure you read up on it. Long before I joined Microsoft I used their work in my daily job – saved a ton of time. It has resources not only for Windows Azure but other Microsoft software as well. The Patterns and Practices group also publishes full books – you can buy these, but many are also online for free. There’s an end-to-end example for Windows Azure using a company called “Tailspin”, and the work covers not only the code but the design of the full solution. If you really want to understand the thought that goes into a Platform-as-a-Service solution, this is an excellent resource. The advantages are that this is a book, it’s complete, and it includes a discussion of design decisions. The disadvantage is that it’s a little over a year old – and in “Cloud” years that’s a lot. So many things have changed, improved, and have been added that you need to treat this as a resource, but not the only one. Still, highly recommended.

    Link: http://msdn.microsoft.com/en-us/library/ff728592.aspx

    Azure Stock Trader

    Sometimes you need a mix of a CodePlex-style application, and a little more detail on how it was put together. And it would be great if you could actually play with the completed application, to see how it really functions on the actual platform. That’s the Azure Stock Trader application. There’s a place where you can read about the application, and then it’s been published to Windows Azure – the production platform – and you can use it, explore, and see how it performs. I use this application all the time to demonstrate Windows Azure, or a particular part of Windows Azure. The advantage is that this is an end-to-end application, and online as well. The disadvantage is that it takes a bit of self-learning to work through.

    Links:
    Learn it: http://msdn.microsoft.com/en-us/netframework/bb499684
    Use it: https://azurestocktrader.cloudapp.net/

  • Unique ID for WORD2007 paragraph

    - by Ganish
    Hello, I am writing large Word 2007 documents which are often being changed. I have to number paragraphs with stable unique numbers that will not change as the documents are edited. The numbers should be unique and must not change even if previous numbers are deleted. The order of the list is not mandatory, and the addition of a new number before existing numbers is possible (for instance, the sequence 1, 4, 3 means that paragraphs 1-3 were written, then #2 was deleted, then #4 was added; #3 was not affected by the later editing). The mechanism should be internal to the document, as I work both online and offline. The numbers are allocated to every document individually. Since I don't know how to program Word, I'd appreciate a complete solution. Regards, Ganish
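
    One common approach is a small VBA macro that keeps a never-decreasing counter in a document variable, so a deleted number is never reused. A sketch (macro and variable names are hypothetical; the ID is stamped at the cursor position):

        Sub InsertStableParagraphID()
            Dim v As Variable, n As Long
            n = 0
            For Each v In ActiveDocument.Variables      ' look up the counter safely
                If v.Name = "NextParaID" Then n = CLng(v.Value)
            Next v
            If n = 0 Then
                n = 1
                ActiveDocument.Variables.Add Name:="NextParaID", Value:="1"
            End If
            Selection.TypeText Text:="[" & n & "] "     ' stamp the next free number
            ActiveDocument.Variables("NextParaID").Value = CStr(n + 1)
        End Sub

    Because the counter travels inside the .docx itself, it works offline and is allocated per document.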

  • ACORD LOMA Session Highlights Policy Administration Trends

    - by [email protected]
    Helen Pitts, senior product marketing manager for Oracle Insurance, attended and is blogging from the ACORD LOMA Insurance Forum this week.

    Above: Paul Vancheri, Chief Information Officer, Fidelity Investments Life Insurance Company. Vancheri gave a presentation during the ACORD LOMA Insurance Systems Forum about the key elements of modern policy administration systems and how insurers can mitigate risk during legacy system migrations to safely introduce new technologies.

    When I had a few particularly challenging honors courses in college my father, a long-time technology industry veteran, used to say, "If you don't know how to do something go ask the experts. Find someone who has been there and done that, don't be afraid to ask the tough questions, and apply and build upon what you learn." (Actually he still offers this same advice today.) That's probably why my favorite sessions at industry events, like the ACORD LOMA Insurance Forum this week, are those that include insight on industry trends and case studies from carriers who share their experiences and offer best practices based upon their own lessons learned. I had the opportunity to attend a particularly insightful session Wednesday as Craig Weber, senior vice president of Celent's Insurance practice, and Paul Vancheri, CIO of Fidelity Life Investments, presented "Managing the Dynamic Insurance Landscape: Enabling Growth and Profitability with a Modern Policy Administration System."

    Policy Administration Trends

    Growing the business is the top issue when it comes to IT among both life and annuity and property and casualty carriers, according to Weber. To drive growth and capture market share from competitors, carriers are looking to modernize their core insurance systems, with 65 percent of the CIOs participating in recent Celent research citing plans to replace their policy administration systems. Weber noted that there has been continued focus and investment, particularly in the last three years, by software and technology vendors to offer modern, rules-based, configurable policy administration solutions. He added that these solutions are continuing to evolve with the ongoing aim of helping carriers rapidly meet shifting business needs - whether it is to launch new products to market faster than the competition, adapt existing products to meet shifting consumer and/or regulatory demands, or to exit unprofitable markets. He closed by noting the top four trends for policy administration, either being adopted today or on the not-so-distant horizon:

    - Underwriting and service desktops
    - New business automation
    - Convergence of ultra-configurable and domain content-rich systems
    - Better usability and screen design

    Mitigating the Risk When Making the Decision to Modernize

    Third-party analyst research from advisory firms like Celent was a key part of the due diligence process for Fidelity as it sought a replacement for its legacy policy administration system back in 2005, according to Vancheri. The company's business opportunities were outrunning system capability. Its legacy system had not been upgraded in several years and was deficient from a functionality and currency standpoint. This was constraining the carrier's ability to rapidly configure and bring new and complex products to market. The company sought a new, modern policy administration system, one that would enable it to keep pace with rapid and often unexpected industry changes and stay ahead of the competition. A cross-functional team that included representatives from finance, actuarial, operations, client services and IT conducted an extensive selection process. This process included deep documentation review, pilot evaluations, demonstrations of required functionality and complex problem-solving, infrastructure integration capability, and the ability to meet the company's desired cost model. The company ultimately selected an adaptive policy administration system that met its requirements to:

    - Deliver ease of use - eliminating paper and rework, while easing the burden on representatives to sell and service annuities
    - Provide customer parity - offering Web-based capabilities in alignment with the company's focus on delivering a consistent customer experience across its business
    - Deliver scalability and efficiency - enabling automation, while simplifying and standardizing systems across its technology stack
    - Offer desired functionality - supporting Fidelity's product configuration / rules management philosophy, focus on customer service and technology upgrade requirements
    - Meet cost requirements - including implementation, professional services and license fees and ongoing maintenance
    - Deliver upon business requirements - enabling the ability to drive time to market for new products and flexibility to make changes

    Best Practices for Addressing Implementation Challenges

    Based upon lessons learned during the company's implementation, Vancheri advised carriers to evaluate staffing capabilities and cultural impacts, review business requirements to avoid rebuilding legacy processes, factor in dependent systems, and review policies and practices to secure customer data. His formula for success: upfront planning + clear requirements = precision execution.

    Achieving a Return on Investment

    Vancheri said the decision to replace their legacy policy administration system and deploy a modern, rules-based system - before the economic downturn occurred - has been integral in helping the company adapt to shifting market conditions, while enabling growth in its direct channel sales of variable annuities. Since deploying its new policy admin system, the company has reduced its average time to market for new products from 12-15 months to 4.5 months. The company has since migrated its other products to the new system and retired its legacy system, significantly decreasing its overall product development cycle. From a processing standpoint, Vancheri noted the company has achieved gains in automation, information, and ease of use, resulting in improved real-time data edits, controls for better quality, and tax handling capability. Plus, by having only one platform to manage, the company has simplified its IT environment and is well positioned to deliver system enhancements for greater efficiencies.

    Commitment to Continuing the Investment

    In the short and longer term, Vancheri said the company plans to enhance business functionality to support money movement, wire automation, divorce processing on payout contracts and cost-based tracking improvements. It also plans to continue system upgrades to remain current, as well as focus on further reducing cycle time, driving down maintenance costs, and integrating with other products.

    Helen Pitts is senior product marketing manager for Oracle Insurance focused on life/annuities and enterprise document automation.

  • Microsoft T-SQL Counting Consecutive Records

    - by JeffW
    Problem: From the most current day per person, count the number of consecutive days that each person has received 0 points for being good.

    Sample data to work from:

        Date        Name  Points
        2010-05-07  Jane  0
        2010-05-06  Jane  1
        2010-05-07  John  0
        2010-05-06  John  0
        2010-05-05  John  0
        2010-05-04  John  0
        2010-05-03  John  1
        2010-05-02  John  1
        2010-05-01  John  0

    Expected answer: Jane was bad on 5/7 but good the day before that, so Jane was only bad 1 day in a row most recently. John was bad on 5/7, and again on 5/6, 5/5 and 5/4; he was good on 5/3. So John was bad the last 4 days in a row.

    Code to create sample data:

        IF OBJECT_ID('tempdb..#z') IS NOT NULL
        BEGIN
            DROP TABLE #z
        END
        SELECT GETDATE() AS Date, 'John' AS Name, 0 AS Points INTO #z
        INSERT INTO #z VALUES (GETDATE() - 1, 'John', 0)
        INSERT INTO #z VALUES (GETDATE() - 2, 'John', 0)
        INSERT INTO #z VALUES (GETDATE() - 3, 'John', 0)
        INSERT INTO #z VALUES (GETDATE() - 4, 'John', 1)
        INSERT INTO #z VALUES (GETDATE(), 'Jane', 0)
        INSERT INTO #z VALUES (GETDATE() - 1, 'Jane', 1)
        SELECT * FROM #z ORDER BY Name, Date DESC
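
    One way to attack it, as a sketch against the #z table above: the current streak is exactly the set of zero-point rows newer than the person's latest non-zero day (or all of their rows, if they were never good). Against the sample data this yields Jane = 1 and John = 4:

        SELECT z.Name, COUNT(*) AS ConsecutiveBadDays
        FROM #z z
        LEFT JOIN (SELECT Name, MAX(Date) AS LastGoodDate
                   FROM #z
                   WHERE Points > 0
                   GROUP BY Name) g ON g.Name = z.Name
        WHERE z.Points = 0
          AND (g.LastGoodDate IS NULL OR z.Date > g.LastGoodDate)
        GROUP BY z.Name;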

  • Diophantine Equation

    - by lakshmi
    Show that it is possible to buy exactly 50, 51, 52, 53, 54, and 55 McNuggets by finding solutions to the Diophantine equation. You can solve this in your head, using paper and pencil, or by writing a program. However you choose to solve this problem, list the combinations of 6, 9 and 20 packs of McNuggets you need to buy in order to get each of the exact amounts. Given that it is possible to buy sets of 50, 51, 52, 53, 54 or 55 McNuggets by combinations of 6, 9 and 20 packs, show that it is possible to buy 56, 57, ..., 65 McNuggets. In other words, show how, given solutions for 50-55, one can derive solutions for 56-65. (Python)
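
    A brute-force sketch for the first part: enumerate pack counts (a, b, c) satisfying 6a + 9b + 20c = n for each target n.

        for n in range(50, 56):
            sols = [(a, b, c)
                    for a in range(n // 6 + 1)
                    for b in range(n // 9 + 1)
                    for c in range(n // 20 + 1)
                    if 6 * a + 9 * b + 20 * c == n]
            print(n, sols[0] if sols else "no solution")

    For the second part no search is needed: if n is purchasable, so is n + 6 (add one 6-pack), so the solutions for 50-55 immediately give 56-61, those give 62-65, and by induction everything above.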

  • Talend Enterprise Data Integration overperforms on Oracle SPARC T4

    - by Amir Javanshir
    The SPARC T microprocessor, released in 2005 by Sun Microsystems, and now continued at Oracle, has a good track record in parallel execution and multi-threaded performance. However, it was less suited for pure single-threaded workloads. The new SPARC T4 processor is now filling that gap by offering 5x better single-thread performance over previous generations. Following our long-term relationship with Talend, a fast-growing ISV positioned by Gartner in the "Visionaries" quadrant of the "Magic Quadrant for Data Integration Tools", we decided to test some of their integration components with the T4 chip, more precisely on a T4-1 system, in order to verify first hand whether this new processor stands up to its promises. Several tests were performed, mainly focused on:

    - Single-thread performance of the new SPARC T4 processor compared to an older SPARC T2+ processor
    - Overall throughput of the SPARC T4-1 server using multiple threads

    The tests consisted in reading large amounts of data (tens of gigabytes), processing them and writing them back to a file or an Oracle 11gR2 database table. They are CPU, memory and IO bound tests. Given the main focus of this project, CPU performance, bottlenecks were removed as much as possible on the memory and IO sub-systems. When possible, the data to process was put into the ZFS filesystem cache, for instance. Also, two external storage devices were directly attached to the servers under test, each one divided in two ZFS pools for read and write operations.

    Multi-thread: Testing throughput on the Oracle T4-1

    The tests were performed with different numbers of simultaneous threads (1, 2, 4, 8, 12, 16, 32, 48 and 64) and using different storage devices: Flash, Fibre Channel storage, two striped internal disks and one single internal disk. All storage devices used ZFS for filesystem and volume management. Each thread read a dedicated 1GB-large file containing 12.5M lines with the following structure:

        customerID;FirstName;LastName;StreetAddress;City;State;Zip;Cust_Status;Since_DT;Status_DT
        1;Ronald;Reagan;South Highway;Santa Fe;Montana;98756;A;04-06-2006;09-08-2008
        2;Theodore;Roosevelt;Timberlane Drive;Columbus;Louisiana;75677;A;10-05-2009;27-05-2008
        3;Andrew;Madison;S Rustle St;Santa Fe;Arkansas;75677;A;29-04-2005;09-02-2008
        4;Dwight;Adams;South Roosevelt Drive;Baton Rouge;Vermont;75677;A;15-02-2004;26-01-2007
        [...]

    The following graphs present the results of our tests. Unsurprisingly, up to 16 threads all files fit in the ZFS cache (a.k.a. L2ARC): once the cache is hot, there is no performance difference depending on the underlying storage. From 16 threads upwards, however, it is clear that IO becomes a bottleneck, so having a good IO subsystem is key. Single-disk performance collapses, whereas the Sun F5100 and ST6180 arrays allow the T4-1 to scale quite seamlessly. From 32 to 64 threads, the performance is almost constant, with just a slow decline. For the database load tests, only the best IO configuration (using external storage devices) was used, hosting the Oracle tablespaces and redo log files. Using the Sun Storage F5100 array allows the T4-1 server to scale up to 48 parallel JVM processes before saturating the CPU. The final result is a staggering 646K lines per second inserted into an Oracle table using 48 parallel threads.

    Single-thread: Testing the single-thread performance

    Seven different tests were performed on both servers. Given that only one thread, and thus one file, was read, no IO bottleneck was involved, all data being served from the ZFS cache.

    - Read File -> Filter -> Write File: read a file, filter the data, write the filtered data to a new file. The filter is set on the "Status" column: only lines with status set to "A" are selected. This limits each output file to about 500 MB.
    - Read File -> Load Database Table: read a file, insert into a single Oracle table.
    - Average: read a file, compute the average of a numeric column, write the result to a new file.
    - Division & Square Root: read a file, perform a division and square root on a numeric column, write the resulting data to a new file.
    - Oracle DB Dump: dump the content of an Oracle table (12.5M rows) into a CSV file.
    - Transform: read a file, transform, write the resulting data to a new file. The transformations applied are: set the address column to upper case and add an extra column at the end, which is the concatenation of two columns.
    - Sort: read a file, sort a numeric and an alphanumeric column, write the resulting data to a new file.

    The following table presents the final results of the tests. The throughput unit is thousand lines per second processed (K lines/second); "Improvement" is the T5140 time divided by the T4-1 time, expressed as a percentage.

        Test                     T4-1 (s)   T5140 (s)   Improvement   T4-1 (K lines/s)   T5140 (K lines/s)
        Read/Filter/Write        125        806         645%          100                16
        Read/Load Database       195        1111        570%          64                 11
        Average                  96         557         580%          130                22
        Division & Square Root   161        1054        655%          78                 12
        Oracle DB Dump           164        945         576%          76                 13
        Transform                159        1124        707%          79                 11
        Sort                     251        1336        532%          50                 9

    The improvement in single-thread performance is quite dramatic: depending on the test, the T4 is between 5.4 and 7 times faster than the T2+. It seems clear that the SPARC T4 processor has gone a long way towards filling the gap in single-thread performance, without sacrificing multi-threaded capability, as it still shows very impressive scaling on heavy-duty multi-threaded jobs. Finally, as always at Oracle ISV Engineering, we are happy to help our ISV partners test their own applications on our platforms, so don't hesitate to contact us and let's see what the SPARC T4-based systems can do for your application!

    "As described in this benchmark, Talend Enterprise Data Integration has overperformed on T4. I was generally happy to see that the T4 gave scaling opportunities for many scenarios like complex aggregations. Row by row insertion in Oracle DB is faster with more than 650,000 rows per second without using any bulk Oracle capabilities!" Cedric Carbone, Talend CTO.

  • CALayer drawInContext vs addSublayer

    - by Michael
    How can I use both of these in the same UIView correctly? I have one custom subclassed CALayer in which I draw a pattern within drawInContext. I have another in which I set an overlay PNG image as the contents. I have a third which is just a background. How do I overlay all three of these items?

        [self.layer addSublayer:bottomLayer]; // this line is the problem
        [squaresLayer drawInContext:viewContext];
        [self.layer addSublayer:imgLayer];

    The other two draw correctly by themselves if I do them in that order. No matter where I try to put bottomLayer, it always prevents squaresLayer from drawing. The reason I need three layers is that I intend to animate the colors in the background and custom layers; the top layer is just a graphical overlay.
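
    The middle line is the underlying issue: drawInContext: is invoked by Core Animation when a layer is marked dirty, and isn't meant to be called by hand. A sketch of the usual pattern - add all three as sublayers bottom-up (sublayers render in the order added) and ask the custom layer to redraw itself:

        [self.layer addSublayer:bottomLayer];    // background
        [self.layer addSublayer:squaresLayer];   // custom CALayer subclass
        [self.layer addSublayer:imgLayer];       // PNG overlay on top
        [squaresLayer setNeedsDisplay];          // CA then calls -drawInContext: for you

    The squares layer also needs a non-zero frame before it will draw anything.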

  • Rails: periodically use HEAD to check for page changes

    - by Chris
    I have a Rails app implementing a game, so it's expected that a player will leave a browser open at the game page. When player Alan takes an action himself, I use AJAX requests to update the game page Alan is viewing to reflect the new state. However, when another player (Bob) takes an action, I don't have (or want) a mechanism to push the change to Alan's view. I would like Alan's page to periodically poll the Rails server to find out whether there have been any changes since the last reload, and to reload the page (either via a whole-page GET or an AJAX call) if so. In order to play nicely with caches and proxies, I'd like to do this by issuing a periodic HTTP HEAD request, have Rails work out the timestamp of the last change (trivially available from my DB) and respond with that in the Last-Modified header, then have the client side act on that timestamp. How can I go about doing this?
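
    On the Rails side, the stale?/fresh_when conditional-GET helpers can do the Last-Modified bookkeeping (available from around Rails 2.2 - worth checking against your version). A sketch with a hypothetical Game model:

        class GamesController < ApplicationController
          def show
            @game = Game.find(params[:id])
            # Replies 304 Not Modified (empty body) when the timestamp hasn't
            # moved; answers HEAD requests as well as GET.
            if stale?(:last_modified => @game.updated_at.utc)
              respond_to { |format| format.html }
            end
          end
        end

    The client can then issue a HEAD via XMLHttpRequest on a timer, compare the returned Last-Modified with the one it saw at render time, and trigger the full reload when they differ.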

  • Best Allocation Algorithm

    - by shaju
    Hi, I am looking for the best allocation algorithm for the scenario below. Say we have a requirement for 18 pieces, and the stock on my shelf is as follows:

        Bin A - 10
        Bin B - 6
        Bin C - 3
        Bin D - 4

    The algorithm should propose the bins in the following order: Bin A (10), Bin D (4), Bin C (3). In the real scenario we have n bins with different quantities, and we need to find the optimal combination. The objective is to maximize the allocated quantity. Can you please help? Regards, Shaju
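
    Note that a greedy largest-bin-first pass gives A (10) + B (6) = 16 here, which is why the expected answer skips Bin B: maximizing the total without exceeding the requirement is the subset-sum / 0-1 knapsack problem. A small dynamic-programming sketch (Python, with the bins from the question):

        def best_allocation(bins, need):
            # best[s] = list of (bin, qty) reaching total s exactly
            best = {0: []}
            for name, qty in bins:
                for s in sorted(best.keys(), reverse=True):
                    t = s + qty
                    if t <= need and t not in best:
                        best[t] = best[s] + [(name, qty)]
            top = max(best)                     # largest reachable total <= need
            return top, best[top]

        print(best_allocation([("A", 10), ("B", 6), ("C", 3), ("D", 4)], 18))
        # -> (17, [('A', 10), ('C', 3), ('D', 4)])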
