Search Results

Search found 25535 results on 1022 pages for 'automatic update'.


  • Session Update from IASA 2010

    - by [email protected]
    Below: Tom Kristensen, senior vice president at Marsh US Consumer, and Roger Soppe, CLU, LUTCF, senior director of insurance strategy, Oracle Insurance. Tom and Roger participated in a panel discussion on policy administration systems this week at IASA 2010.

    This week was the 82nd Annual IASA Educational Conference & Business Show, held in Grapevine, Texas. While attending the conference, I had the pleasure of serving as a panelist in one of the many outstanding sessions conducted this year. The session - entitled "Achieving Business Agility and Promoting Growth with a Modern Policy Administration System" - included industry experts Steve Forte from OneShield, Mike Sciole of IFG Companies, and Tom Kristensen, senior vice president at Marsh US Consumer. The session was conducted as a panel discussion and focused on how insurers can leverage best practices to mitigate risk while enabling rapid product innovation through a modern policy administration system. The panelists offered insight into business and technical challenges for both Life & Annuity and Property & Casualty carriers. The session had three primary learning objectives: identifying how replacing a legacy system with a more modern policy administration solution can deliver agility and growth; identifying how processes and systems should be re-engineered or replaced in order to improve speed-to-market and product support; and uncovering how to leverage best practices to mitigate risk during a migration to a new platform.

    Tom Kristensen, who is an industry veteran with over 20 years of experience, was able to offer a unique perspective as a business process outsourcer (BPO). Marsh US Consumer is currently implementing both the Oracle Insurance Policy Administration solution and the Oracle Revenue Management and Billing platform while at the same time implementing a new BPO customer. Tom offered insight on the need to replace their aging systems and Marsh's ability to drive new products and processes with a modern solution. As a best practice, their current project has empowered their business users to play a major role in both the requirements gathering and configuration phases. Tom stated that working with a modern solution has also enabled his organization to use a more agile implementation methodology and get hands-on experience with the software earlier in the project. He also indicated that Marsh was encouraged by how quickly it will be able to implement new products, which is another major advantage of a modern rules-based system.

    One of the more interesting issues was raised by an audience member who asked, "With all the vendor solutions available in North America and across Europe, what is going to make some of them more successful than others and help ensure their long-term success?" Panelist Mike Sciole of IFG Companies suggested that carriers do their due diligence and follow a structured evaluation process, focusing on vendors who demonstrate they have the "cash to invest in long term R&D", and evaluate audited annual statements for verification. Other panelists suggested that the vendor space will continue to evolve, and that those with a strong strategy focused on the insurance industry and a solid roadmap will likely separate themselves from the rest.

    The session concluded with the panelists offering advice about not being afraid to evaluate new modern systems. While migrating to a new platform can be challenging and is typically undertaken only every 15+ years by carriers, the ability to rapidly deploy and manage new products, create consistent processes to better service customers, and manage their business more effectively, transparently and securely is well worth the effort.

    Roger A. Soppe, CLU, LUTCF, is the Senior Director of Insurance Strategy, Oracle Insurance.

    Read the article

  • Favorite Visual Studio 2010 Extensions, Update

    - by Scott Dorman
    With the release of the Visual Studio Pro Power Tools (and many other new extensions having been released), my list of favorite Visual Studio extensions has changed. All of these extensions are available in the Visual Studio Gallery. Here is the list of extensions that I currently have installed and find useful: Bing Start Page, CodeCompare, Collapse Selection In Solution Explorer, Collapse Solution, Color Picker Completion, Extension Analyzer, Find Results Highlighter, Find Results Tweak (available from CodePlex), Format Document, HelpViewerKeywordIndex, HighlightMultiWord, Image Insertion, Indentation Matcher Extension, ItalicComments, MoveToRegionVSX, Numbered Bookmarks, PowerCommands for Visual Studio 2010, Regular Expressions Margin, Search Work Items for TFS 2010, Source Outliner, Spell Checker, Structure Adornment (which also installs BlockTagger, BlockTaggerImpl, SettingsStore, and SettingsStoreImpl), StyleCop, Team Foundation Server Power Tools, TFS Auto Shelve, Visual Studio Color Theme Editor, Visual Studio Pro Power Tools, VS10x Code Map, VS10x Code Marker, VS10x Collapse All Projects, VS10x Editor View Enhancer, VS10x Insert Debug Names, VS10x Selection Popup, VS10x Super Copy Paste, VSCommands 2010, and Word Wrap with Auto-Indent. Technorati Tags: Visual Studio, Extensions

    Read the article

  • Asp.Net MVC does automatic model validation for DateTime but no others

    - by MattSlay
    I'm using MVC 2 with some Models from a LinqToSql project that I built. I see that when I post back to a Controller Action after editing a form that has a DateTime field from the Model, the MVC Html.ValidationMessageFor() helper will nicely display an error beside the date text box. This seems to happen automatically when you test ModelState.IsValid() in the Controller Action, as if the MVC model binding automatically knows that the DateTime field cannot be empty. My question is... I have some other string fields in these LinqToSql-generated classes that are not nullable (marked as NOT NULL in SQL Server, which passes through to the LinqToSql-generated classes), so why doesn't Mr. MVC pick up on those as well and display a "Required" message in the ValidationMessageFor() placeholders I have added for those fields? Sure, I have successfully added the MetadataType(typeof) buddy classes to cover these non-nullable string fields, but it sure does seem redundant to add all this metadata in buddy classes when the LinqToSql-generated classes already contain enough info that MVC could sniff out. If MVC validation works with DateTime automatically, why not these non-nullable fields too?
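
    For reference, the buddy-class workaround mentioned above looks roughly like this; a minimal sketch, assuming a LINQ to SQL entity named Employee with a non-nullable Name column (both names are hypothetical):

        using System.ComponentModel.DataAnnotations;

        // The LINQ to SQL designer emits partial classes, so validation
        // metadata can be attached without touching generated code.
        [MetadataType(typeof(EmployeeMetadata))]
        public partial class Employee { }

        public class EmployeeMetadata
        {
            // Mirrors the NOT NULL column; MVC 2's DataAnnotations model
            // binder surfaces this through Html.ValidationMessageFor().
            [Required(ErrorMessage = "Name is required.")]
            public string Name { get; set; }
        }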

    Read the article

  • Sesame update du jour: SL 4, OOB, Azure, and proxy support

    - by Fabrice Marguerie
    I've just published a new version of Sesame Data Browser. Here's what's new this time:
    - Upgraded to Silverlight 4
    - Can run out-of-browser (OOB), with elevated permissions. This gives you an icon on your desktop and enables new scenarios. Note: the application is unsigned for the moment.
    - Support for Windows Azure authentication
    - Support for SQL Azure authentication
    - If you are behind a proxy that requires authentication, just give Sesame a new try after clicking on "If you are behind a proxy that requires authentication, please click here"
    - An icon and a button for closing connections are now displayed on connection tabs
    - Some less visible improvements

    If you want to access Windows Azure tables as OData, all you have to do is use your table storage endpoint as the URL and provide your access key. A Windows Azure table storage address looks like this: http://<your account>.table.core.windows.net/

    If you want to browse your SQL Azure databases with Sesame, you have to enable OData support for them at https://www.sqlazurelabs.com/ConfigOData.aspx. I won't show how it works because it's already been done in several places over the Web. Here are pointers: OData.org: Got SQL Azure? Then you've got OData; OakLeaf Systems: Enabling and Using the OData Protocol with SQL Azure; Patrick Verbruggen: Creating an OData feed for your Azure databases; Shawn Wildermuth: SQL Azure's OData Support; Jack Greenfield: How to Use OData for SQL Azure with AppFabric Access Control.

    You can choose to enable anonymous access or not. When you don't enable anonymous access, you have to provide an Issuer name and a Secret key, and optionally a Security Token Service (STS) endpoint. Excerpt from Jack Greenfield's blog: To enable OData access to the currently selected database, check the box labeled "Enable OData". When OData access is enabled, database user mapping information is displayed at the bottom of the form. Use the drop down list labeled "Anonymous Access User" to select an anonymous access user. If an anonymous access user is selected, then all queries against the database presented without credentials will execute by impersonating that user. You can access the database as the anonymous user by clicking on the link provided at the bottom of the page. If no anonymous access user is selected, then the OData Service will not allow anonymous access to the database. Click the link labeled "Add User" to add a user for authenticated access. In the pop up panel, select the user from the drop down list. Leave the issuer name empty for simple authentication, or provide the name of a trusted Security Token Service (STS) for federated authentication. For example, to federate with another ACS based STS, provide the base URI for the STS endpoint displayed by the Windows Azure AppFabric Portal for the STS. Click the "OK" button to complete the configuration process and dismiss the pop up panel. When one or more authenticated access users are added, the OData Service will impersonate them when appropriate credentials are presented. You can designate as many authenticated access users as you like. The OData Service will decide which one to impersonate for each query by inspecting the credentials presented with the query.

    Next time I'll give an overview of how Sesame Data Browser is built. In the meantime, happy data browsing!

    Read the article

  • Automatic (keyless) styles not applied when in a MergedDictionaries in another assembly

    - by Catalin DICU
    I moved the styles and templates XAML files from my application (.exe) project to a library project (.dll) because I want to use them in multiple applications. In App.xaml I have:

        <Application.Resources>
            <ResourceDictionary>
                <ResourceDictionary.MergedDictionaries>
                    <ResourceDictionary Source="pack://application:,,,/MyApplication.Common;component/Resources/All.xaml" />
                </ResourceDictionary.MergedDictionaries>
            </ResourceDictionary>
        </Application.Resources>

    In All.xaml (in the Common assembly):

        <ResourceDictionary.MergedDictionaries>
            <ResourceDictionary Source="Styles.xaml" />
            <ResourceDictionary Source="Templates.xaml" />
            <ResourceDictionary Source="Converters.xaml" />
        </ResourceDictionary.MergedDictionaries>

    With this code, keyless styles in Styles.xaml aren't applied. Instead, if I reference the dictionaries directly in App.xaml, it works:

        <Application.Resources>
            <ResourceDictionary>
                <ResourceDictionary.MergedDictionaries>
                    <ResourceDictionary Source="pack://application:,,,/MyApplication.Common;component/Resources/Styles.xaml" />
                    <ResourceDictionary Source="pack://application:,,,/MyApplication.Common;component/Resources/Templates.xaml" />
                    <ResourceDictionary Source="pack://application:,,,/MyApplication.Common;component/Resources/Converters.xaml" />
                </ResourceDictionary.MergedDictionaries>
            </ResourceDictionary>
        </Application.Resources>

    Can anyone explain why this happens?
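
    A workaround often suggested for exactly this symptom (hedged: it leans on a known WPF implicit-style lookup optimization, and the dummy style below is an illustration, not something from the original post) is to give the application dictionary at least one direct entry alongside the merged dictionaries:

        <Application.Resources>
            <ResourceDictionary>
                <ResourceDictionary.MergedDictionaries>
                    <ResourceDictionary Source="pack://application:,,,/MyApplication.Common;component/Resources/All.xaml" />
                </ResourceDictionary.MergedDictionaries>
                <!-- Dummy keyed style: a direct entry prompts WPF to search
                     the merged dictionaries when resolving implicit styles. -->
                <Style x:Key="UnusedWorkaroundStyle" TargetType="{x:Type FrameworkElement}" />
            </ResourceDictionary>
        </Application.Resources>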

    Read the article

  • Nested forms and automatic creation of parent, children

    - by Karen
    Guys, I was wondering if it was possible to create a new parent and children in a has_many relationship using Rails nested forms. The Rails documentation clearly says that this works in a one-to-one relationship; I'm not sure if it's the same in a has_many relationship. For example, if

        params = { :employee => {
          :name => "Tester",
          :account_attributes => { :login => 'tester' }
        } }

    works as a one-to-one relationship, then Employee.new(params) works fine: a new employee and account are created. But supposing I had

        params = { :employee => {
          :name => "Tester",
          :account_attributes => {
            "0" => { :login => 'tester' },
            "1" => { :login => 'tester2' }
          }
        } }

    Employee.new(params) doesn't work. It fails on the child validations, saying the parent can't be blank. Any help is appreciated. Thanks, Karen
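
    For what it's worth, nested attributes do work for has_many associations, but the expected params key is the plural accounts_attributes rather than account_attributes, and the parent model must opt in. A minimal sketch, with model and attribute names assumed from the params above:

        class Employee < ActiveRecord::Base
          has_many :accounts
          # Enables mass assignment of children via :accounts_attributes.
          accepts_nested_attributes_for :accounts
        end

        # The indexed-hash form then builds both children:
        Employee.new(
          :name => "Tester",
          :accounts_attributes => {
            "0" => { :login => "tester" },
            "1" => { :login => "tester2" }
          }
        )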

    Read the article

  • EF 4 Pluralization Update

    - by Ken Cox [MVP]
    I previously wrote about playing with EF 4’s PluralizationService class. Now that OrcsWeb is running ASP.NET 4, you can play with my little pluralization page and its WCF service online. The source code (such as it is!) can be downloaded from the MSDN Code Gallery here: http://code.msdn.microsoft.com/PluralizationService BTW, one annoyance is that the WSDL still includes the default namespace: namespace="http://tempuri.org/" I swatted a couple of these instances, but if you know...(read more)

    Read the article

  • Thread placement policies on NUMA systems - update

    - by Dave
    In a prior blog entry I noted that Solaris used a "maximum dispersal" placement policy to assign nascent threads to their initial processors. The general idea is that threads should be placed as far away from each other as possible in the resource topology in order to reduce resource contention between concurrently running threads. This policy assumes that resource contention -- pipelines, memory channel contention, destructive interference in the shared caches, etc. -- will likely outweigh (a) any potential communication benefits we might achieve by packing our threads more densely onto a subset of the NUMA nodes, and (b) benefits of NUMA affinity between memory allocated by one thread and accessed by other threads. We want our threads spread widely over the system and not packed together. Conceptually, when placing a new thread, the kernel picks the least loaded NUMA node (the node with the lowest aggregate load average), and then the least loaded core on that node, etc. Furthermore, the kernel places threads onto resources -- sockets, cores, pipelines, etc. -- without regard to the thread's process membership. That is, initial placement is process-agnostic. Keep reading, though. This description is incorrect.

    On Solaris 10 on a SPARC T5440 with 4 x T2+ NUMA nodes, if the system is otherwise unloaded and we launch a process that creates 20 compute-bound concurrent threads, then typically we'll see a perfect balance with 5 threads on each node. We see similar behavior on an 8-node x86 x4800 system, where each node has 8 cores and each core is 2-way hyperthreaded. So far so good; this behavior seems in agreement with the policy I described in the first paragraph.

    I recently tried the same experiment on a 4-node T4-4 running Solaris 11. Both the T5440 and T4-4 are 4-node systems that expose 256 logical thread contexts. To my surprise, all 20 threads were placed onto just one NUMA node while the other 3 nodes remained completely idle. I checked the usual suspects such as processor sets inadvertently left around by colleagues, processors left offline, and power management policies, but the system was configured normally. I then launched multiple concurrent instances of the process and, interestingly, all the threads from the 1st process landed on one node, all the threads from the 2nd process landed on another node, and so on. This happened even if I interleaved thread creation between the processes, so I was relatively sure the effect wasn't related to thread creation time, but rather that placement was a function of process membership.

    At this point I consulted the Solaris sources and talked with folks in the Solaris group. The new Solaris 11 behavior is intentional. The kernel is no longer using a simple maximum dispersal policy, and thread placement is process membership-aware. Now, even if other nodes are completely unloaded, the kernel will still try to pack new threads onto the home lgroup (socket) of the primordial thread until the load average of that node reaches 50%, after which it will pick the next least loaded node as the process's new favorite node for placement. On the T4-4 we have 64 logical thread contexts (strands) per socket (lgroup), so if we launch 48 concurrent threads we will find 32 placed on one node and 16 on some other node. If we launch 64 threads we'll find 32 and 32. That means we can end up with our threads clustered on a small subset of the nodes in a way that's quite different from what we've seen on Solaris 10.
    So we have a policy that allows process-aware packing but reverts to spreading threads onto other nodes if a node becomes too saturated. It turns out this policy was enabled in Solaris 10, but certain bugs suppressed the mixed packing/spreading behavior. There are configuration variables in /etc/system that allow us to dial the affinity between nascent threads and their primordial thread up and down: see lgrp_expand_proc_thresh, specifically. In the OpenSolaris source code the key routine is mpo_update_tunables(). This method reads the /etc/system variables and sets up some global variables that will subsequently be used by the dispatcher, which calls lgrp_choose() in lgrp.c to place nascent threads. lgrp_expand_proc_thresh controls how loaded an lgroup must be before we'll consider homing a process's threads to another lgroup. Tune this value lower to have it spread your process's threads out more.

    To recap, the 'new' policy is as follows. Threads from the same process are packed onto a subset of the strands of a socket (50% for T-series). Once that socket reaches the 50% threshold the kernel then picks another preferred socket for that process. Threads from unrelated processes are spread across sockets. More precisely, different processes may have different preferred sockets (lgroups). Beware that I've simplified and elided details for the purposes of explication. The truth is in the code.

    Remarks: It's worth noting that initial thread placement is just that. If there's a gross imbalance between the load on different nodes then the kernel will migrate threads to achieve a better and more even distribution over the set of available nodes. Once a thread runs and gains some affinity for a node, however, it becomes "stickier" under the assumption that the thread has residual cache residency on that node, and that memory allocated by that thread resides on that node given the default "first-touch" page-level NUMA allocation policy. Exactly how the various policies interact and which have precedence under what circumstances could be the topic of a future blog entry. The scheduler is work-conserving.

    The x4800 mentioned above is an interesting system. Each of the 8 sockets houses an Intel 7500-series processor. Each processor has 3 coherent QPI links and the system is arranged as a glueless 8-socket twisted ladder "mobius" topology. Nodes are either 1 or 2 hops distant over the QPI links. As an aside, the mapping of logical CPUIDs to physical resources is rather interesting on Solaris/x4800. On SPARC/Solaris the CPUID layout is strictly geographic, with the highest-order bits identifying the socket, the next lower bits identifying the core within that socket, followed by the pipeline (if present) and finally the logical thread context ("strand") on the core. But on Solaris on the x4800 the CPUID layout is as follows: bit [6:6] identifies the hyperthread on a core; bits [5:3] identify the socket, or package in Intel terminology; bits [2:0] identify the core within a socket. Such low-level details should be of interest only if you're binding threads -- a bad idea, the kernel typically handles placement best -- or if you're writing NUMA-aware code that's aware of the ambient placement and makes decisions accordingly.

    Solaris introduced the so-called critical-threads mechanism, which is expressed by putting a thread into the FX scheduling class at priority 60. The critical-threads mechanism applies to placement on cores, not on sockets, however. That is, it's an intra-socket policy, not an inter-socket policy. Solaris 11 introduces the Power Aware Dispatcher (PAD), which packs threads instead of spreading them out in an attempt to keep sockets or cores at lower power levels. Maximum dispersal may be good for performance but is anathema to power management. PAD is off by default, but power management policies constitute yet another confounding factor with respect to scheduling and dispatching.

    If your threads communicate heavily -- one thread reads cache lines last written by some other thread -- then the new dense packing policy may improve performance by reducing traffic on the coherent interconnect. On the other hand, if the threads in your process communicate rarely, then it's possible the new packing policy might result in contention on shared computing resources. Unfortunately there's no simple litmus test that says whether packing or spreading is optimal in a given situation. The answer varies by system load, application, number of threads, and platform hardware characteristics. Currently we don't have the necessary tools and sensoria to decide at runtime, so we're reduced to an empirical approach where we run trials and try to decide on a placement policy. The situation is quite frustrating. Relatedly, it's often hard to determine just the right level of concurrency to optimize throughput. (Understanding constructive vs destructive interference in the shared caches would be a good start. We could augment the lines with a small tag field indicating which strand last installed or accessed a line. Given that, we could augment the CPU with performance counters for misses where a thread evicts a line it installed vs misses where a thread displaces a line installed by some other thread.)
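
    For readers who want to reproduce the placement observations above, here is a minimal sketch of the kind of compute-bound load generator used in such experiments (the thread count comes from the command line; where the kernel homed each thread can then be inspected with a tool such as plgrp(1)):

        /* spin.c -- spawn N busy threads so initial thread placement can
           be observed. Build with: cc -o spin spin.c -lpthread */
        #include <pthread.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>

        static void *spin(void *arg)
        {
            volatile unsigned long x = 0;
            for (;;)
                x++;                 /* keep this strand busy */
            return arg;              /* never reached */
        }

        int main(int argc, char **argv)
        {
            int i, n = (argc > 1) ? atoi(argv[1]) : 20;
            for (i = 0; i < n; i++) {
                pthread_t t;
                if (pthread_create(&t, NULL, spin, NULL) != 0) {
                    perror("pthread_create");
                    return 1;
                }
            }
            pause();                 /* park the primordial thread */
            return 0;
        }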

    Read the article

  • Update pizza orders list [on hold]

    - by tirengarfio
    I have to create a website to order pizzas using PHP, MySQL, JavaScript, etc. I also have to create an Android app for the owner of the restaurant, so that when someone orders a pizza, the Android app shows the new order in the list of orders. How do I do this? Should I use push notifications? If yes, what happens when the connection su**s and the device is not connected at the moment of the push? Or should the Android device poll the server for new orders, say every 10 seconds?

    Read the article

  • Update of SAE Benchmark Presentation to M6/T5/ZFS

    - by uwes
    Strategic Applications Engineering (SAE) published an updated benchmark presentation in March showing the performance of Oracle systems, software and virtualization: SPARC M6/T5/ZFS Benchmarks, March 2014. The presentation is available via our eSTEP portal. You will need to provide your email address and the PIN below to access the downloads. URL: http://launch.oracle.com/ PIN: eSTEP_2011 The material can be found under the eSTEP Download tab, under Recent Updates and Miscellaneous.

    Read the article

  • Automatic Adjusting Range Table

    - by Bradford
    I have a table with a start date range, an end date range, and a few other additional columns. On input of a new record, I want to automatically adjust any overlapping date ranges (shrinking them to allow for the new input). I also want to ensure that no overlapping records can accidentally be inserted into this table. I'm using Oracle and Java for my application code. How should I enforce the prevention of overlapping date ranges and also allow for automatically adjusting overlapping ranges? Should I create an AFTER INSERT trigger, with a dbms_lock to serialize access, to prevent the overlapping data, and then apply the logic to auto-adjust everything in Java? Or should that part be in a PL/SQL stored procedure call? This is something we need for a couple of other tables, so it'd be nice to abstract. If anyone has something like this already written, please share :) I did find this reference: http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:474221407101

    Here's an example of how each of the 4 overlapping cases should be handled for adjustment on insert (rows marked ** are affected):

        Example 1:
        In DB (Start, End, Value): (0, 10, 'X')  **(30, 100, 'Z')  (200, 500, 'Y')
        Input: (20, 50, 'A')
        Gives: (0, 10, 'X')  **(20, 50, 'A')  **(51, 100, 'Z')  (200, 500, 'Y')

        Example 2:
        In DB: (0, 10, 'X')  **(30, 100, 'Z')  (200, 500, 'Y')
        Input: (40, 80, 'A')
        Gives: (0, 10, 'X')  **(30, 39, 'Z')  **(40, 80, 'A')  **(81, 100, 'Z')  (200, 500, 'Y')

        Example 3:
        In DB: (0, 10, 'X')  **(30, 100, 'Z')  (200, 500, 'Y')
        Input: (50, 120, 'A')
        Gives: (0, 10, 'X')  **(30, 49, 'Z')  **(50, 120, 'A')  (200, 500, 'Y')

        Example 4:
        In DB: (0, 10, 'X')  **(30, 100, 'Z')  (200, 500, 'Y')
        Input: (20, 120, 'A')
        Gives: (0, 10, 'X')  **(20, 120, 'A')  (200, 500, 'Y')

    The algorithm is as follows (given range = g; input range = i; output range set = o):

        if i.start <= g.start
            if i.end >= g.end
                o_1 = i
            else
                o_1 = i
                o_2 = (i.end + 1, g.end)
        else
            if i.end >= g.end
                o_1 = (g.start, i.start - 1)
                o_2 = i
            else
                o_1 = (g.start, i.start - 1)
                o_2 = i
                o_3 = (i.end + 1, g.end)
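
    A compact Java rendering of that splitting rule may make it easier to port; this is a sketch only (the class and field names are invented, and the Oracle-side locking/trigger question is untouched):

        import java.util.ArrayList;
        import java.util.List;

        final class Range {
            final long start, end;      // inclusive bounds
            final String value;
            Range(long start, long end, String value) {
                this.start = start;
                this.end = end;
                this.value = value;
            }
        }

        final class RangeSplitter {
            /** Shrinks an existing range g around a new input range i,
                covering all four overlap cases from the examples above. */
            static List<Range> split(Range g, Range i) {
                List<Range> out = new ArrayList<Range>();
                if (i.start > g.start) {
                    out.add(new Range(g.start, i.start - 1, g.value)); // left remainder
                }
                out.add(i);                                            // the new range wins
                if (i.end < g.end) {
                    out.add(new Range(i.end + 1, g.end, g.value));     // right remainder
                }
                return out;
            }
        }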

    Read the article

  • Get id when inserting new row using TableAdapter.Update on a file based database

    - by phq
    I have a database table with one field, called ID, being an auto-increment integer. Using a TableAdapter I can read and modify existing rows as well as create new ones. However, if I try to modify a newly inserted row I get a DBConcurrencyException:

        OleDbConnection conn = new OleDbConnection(
            @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=Shift.mdb;Persist Security Info=True");
        ShiftDataSetTableAdapters.ShiftTableAdapter shiftTA = new ShiftDataSetTableAdapters.ShiftTableAdapter();
        shiftTA.Connection = conn;

        ShiftDataSet.ShiftDataTable table = new ShiftDataSet.ShiftDataTable();
        ShiftDataSet.ShiftRow row = table.NewShiftRow();
        row.Name = "life";
        table.Rows.Add(row);
        shiftTA.Update(row);       // row.ID == -1

        row.Name = "answer";       // <-- all fine up to here
        shiftTA.Update(row);       // DBConcurrencyException: 0 rows affected

    Separate question: is there any static type of the NewShiftRow() method I can use so that I don't have to create the table every time I want to insert a new row? I guess the problem in the code comes from row.ID, which is still -1 after the first Update() call. The insert is successful, and in the database the row has a valid value of ID. How can I get that ID so that I can continue with the second Update call?

    Update: It looks like this could have been done automatically using this setting. However, according to the answer on MSDN Social, OLEDB drivers do not support this feature. Not sure where to go from here; use something other than OLEDB?

    Update: Tried SQL Compact but discovered that it has the same limitation: it does not support multiple statements. Final question: is there any simple (single-file-based) database that would allow you to get the values of an inserted row?
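
    For the Jet provider specifically, the classic workaround is to fetch the new autonumber in the adapter's RowUpdated event. A sketch, not verified against this exact project: it relies on Jet 4.0 answering SELECT @@IDENTITY as a second command on the same open connection, and on extending the generated partial TableAdapter class (member names here follow the generated code):

        using System.Data;
        using System.Data.OleDb;

        namespace ShiftDataSetTableAdapters
        {
            public partial class ShiftTableAdapter
            {
                // Call once after constructing the TableAdapter.
                public void EnableIdentityFetch()
                {
                    Adapter.RowUpdated += (sender, e) =>
                    {
                        if (e.StatementType == StatementType.Insert &&
                            e.Status == UpdateStatus.Continue)
                        {
                            using (OleDbCommand cmd = new OleDbCommand(
                                "SELECT @@IDENTITY", Connection))
                            {
                                // Write the generated ID back into the row so a
                                // later Update() targets the right database row.
                                e.Row["ID"] = (int)cmd.ExecuteScalar();
                                e.Row.AcceptChanges();
                            }
                        }
                    };
                }
            }
        }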

    Read the article

  • SQLAuthority News – #SQLPASS 2012 Seattle Update – Memorylane 2009, 2010, 2011

    - by pinaldave
    Today is the first day of SQLPASS 2012, and I will soon be posting my SQL Server 2012 experience over here. When I landed in Seattle today, I got a nostalgic feeling. I used to live in the USA – I stayed for more than 7 years; I studied there and I worked there. I had lots of friends in Seattle when I lived in the USA. I always wanted to visit Seattle because it is THE place. I remember once I purchased a ticket to travel to Seattle through Priceline (well, it was the cheapest option and I was a student) but could not fly because of an interesting issue: I used to be a Teaching Assistant for an advanced course, and the professor asked me to build a pop quiz for the course, so I unfortunately had to cancel the trip. Before I returned to India, I had pretty much covered every city on my must-visit list except one – Seattle. It was so interesting that I never made it to Seattle even though I wanted to visit while I was in the USA. After that I never got a chance to travel to Seattle, and a few years later I returned to India for good. Once I saw the movie "Sleepless in Seattle" playing on television, and I immediately changed the channel, as it reminded me that I had never made it to Seattle.

    However, destiny has its own way of handling decisions. After I returned to India, I have visited Seattle a total of 5 times, and this is my 6th visit in less than 3 years. I was here for 3 previous SQLPASS events – 2009, 2010, and 2011 – as well as two Microsoft Most Valuable Professional Summits in 2009 and 2010. During these trips I tried to catch up with all of my old friends, but I realized that time has its own way of doing things. Many had moved out of Seattle and many were too busy to revive the old friendship, but there were a few who always make a point to meet me when I travel to the city. Over the course of my visits I have made a few fantastic new friends – Rick Morelan (Joes 2 Pros) and Greg Lynch. Every time I meet them I feel that I have known them for years. I think the city of Seattle has played a very important part in my gaining these fantastic friends.

    SQLPASS is the event where I find all of my SQL friends, and I look forward to it the entire year. This year my goal is to meet as many new friends as I can. If you are going to be at SQLPASS – FIND ME. I want to have a photo with you. I want to remember each name, as I believe this is a very important part of our life – making new friends and sustaining those friendships. Here are a few of the places where you can find me:

    All Keynotes – Blogger's Table
    Exhibition Booth – Joes 2 Pros Booth #117 – Do not forget to stop by the booth – I might have goodies for you – limited editions
    Book Signing Events – Check details in tomorrow's blog or stop by Booth #117
    Evening Parties, 6th Nov – Welcome Reception
    Evening Parties, 7th Nov – Exhibitor Reception – Do not miss Booth #117
    Evening Parties, 8th Nov – Community Appreciation Party
    Additionally at a few other locations – Embarcadero Booth
    In coffee shops in the Convention Center

    If you are at SQLPASS, make sure I find an opportunity to meet you at the event. Reserve a little time and let's have a coffee together. I will be continuously tweeting about my whereabouts, so let us stay connected on Twitter. Here is my experience from earlier SQLPASS events:

    SQLAuthority News – Book Signing Event – SQLPASS 2011 Event Log
    SQLAuthority News – Meeting SQL Friends – SQLPASS 2011 Event Log
    SQLAuthority News – Story of Seattle – SQLPASS 2011 Event Log
    SQLAuthority News – SQLPASS Nov 8-11, 2010, Seattle – An Alternative Look at Experience
    SQLAuthority News – Notes of Excellent Experience at SQL PASS 2009 Summit, Seattle

    Let us meet! Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL PASS, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • An Update on JD Edwards EnterpriseOne Products

    Gary Grieshaber, Director of Product Strategy for EnterpriseOne, speaks with Cliff about the new Lifetime Support Option that was announced at OOW, the future of EnterpriseOne, and what he recommends customers who are running the EnterpriseOne Xe or 8 releases do today. Gary also chats with Cliff about the highlights of the 8.95 release and what the certification for Oracle Fusion Middleware means to customers using EnterpriseOne Tools.

    Read the article

  • Autotools automatic invocation of lcov after 'make check'

    - by disown
    I have successfully set up an autotools project where the tests compile with instrumentation so I can get a test coverage report. I can get the report by running lcov in the source dir after a successful 'make check'. I now face the problem that I want to automate this step. I would like to add this to 'make check' or to make it a separate goal, 'make check-coverage'. Ideally I would like to parse the result and fail if the coverage falls below a certain percentage. The problem is that I cannot figure out how to add a custom target at all. The closest I got was finding this example autotools config, but I can't see where in that project the goal 'make lcov' is added. I can only see some configure flags in m4/auxdevel.m4. Any tips?
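
    In case it's useful as a starting point: automake passes hand-written rules in Makefile.am through verbatim, so a custom goal can be added directly. A sketch under assumptions (sources configured with --coverage, lcov and genhtml on the PATH, recipe lines indented with a literal tab); parsing the summary to fail below a threshold is left out, since that part is site-specific:

        # Makefile.am excerpt: "make check-coverage" runs the test suite,
        # then captures and reports coverage from the build tree.
        check-coverage: check
        	lcov --capture --directory . --output-file coverage.info
        	lcov --summary coverage.info
        	genhtml coverage.info --output-directory coverage-html

        .PHONY: check-coverage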

    Read the article

  • Bluetooth refuses to connect since update to Ubuntu Gnome 13.10

    - by Niklas Berg
    I can no longer connect to my bluetooth speakers since upgrading to Ubuntu Gnome 13.10 and then Gnome Shell to 3.10, which was never a problem with Ubuntu Gnome 13.04. My bluetooth dongle seems to be working fine and I can even detect and add the speakers (Creative D100), but when I try to slide the button from off to on in the bluetooth settings, it just slides back to off. The "bluetooth-B" icon in the upper right corner is also gone. I actually managed to connect after I added "Enable=Socket" under "[general]" in /etc/bluetooth/audio.conf, and the indicator on the speakers confirms the connection, but even then I cannot find the speakers in the audio settings. I've tried to solve this for several days, reading tons of other possibly related questions here on Ask Ubuntu and elsewhere, but am unable to find a solution. Any ideas?
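
    For reference, the edit described above looks like this (a sketch; note that stock BlueZ spells the section header [General]):

        # /etc/bluetooth/audio.conf
        [General]
        Enable=Socket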

    Read the article

  • Update to 13.10: blank screen and repeated suspend on resume from suspend

    - by user208026
    On my Asus x201e, after updating from 13.04 to 13.10, intermittently when waking from suspend, my screen blinks to black a few times, offers a lock screen, and then goes back to suspend unexpectedly (during this it is possible to quickly log in to the desktop before the system suspends again). This repeats each time I subsequently wake it from suspend; only a restart escapes the suspend loop. The problem only occurs when suspending by closing the lid, not by manual (pm-suspend) or menu suspend. Opening the lid after a manual or menu-initiated suspend works as expected. When an external monitor is attached, the black screen and second lock screen still appear twice upon wake, but the system does not fall back into suspend (though networking is disabled and needs to be restarted). This issue arose in tandem with the already raised issue regarding networking not restarting on wake from suspend, though it appears to be distinct from that issue and persists after a fix is applied for the networking issue. This issue only occurs on waking from suspend, never upon booting, and only from the second suspend cycle after reboot onward. This question duplicates the second, unanswered question included in this closed question and seems to correspond with this reported bug. Any ideas for a workaround?

    Read the article

  • Count a row VS Save the Row count after each update

    - by SAFAD
    I want to know whether saving the row count in a table is better than counting the rows on each request. Quick example: a visitor goes to a group clan page, which displays clan information and the members who have joined the group. Should the page look up all the users who joined the clan and count them, or just display a member count already saved in a table? I think the first approach cannot get out of sync or be manipulated, but it might cost performance. Your ideas?
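
    To make the trade-off concrete, here is a sketch of the two options in SQL (table and column names are invented for illustration):

        -- Option 1: count live on every page view. Always accurate, but
        -- costs a scan or index lookup per request.
        SELECT COUNT(*) FROM clan_members WHERE clan_id = 42;

        -- Option 2: keep a denormalized counter, maintained in the same
        -- transaction that adds or removes a member.
        UPDATE clans SET member_count = member_count + 1 WHERE id = 42;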

    Read the article
