Search Results

Search found 8128 results on 326 pages for 'daily tips'.


  • Summit Time!

    - by Ajarn Mark Caldwell
    Boy, how time flies!  I can hardly believe that the 2011 PASS Summit is just one week away.  Maybe it snuck up on me because it’s a few weeks earlier than last year.  Whatever the cause, I am really looking forward to next week.  The PASS Summit is the largest SQL Server conference in the world and a fantastic networking opportunity thrown in for no additional charge.  Here are a few thoughts to help you maximize the week. Networking As Karen Lopez (blog | @DataChick) mentioned in her presentation for the Professional Development Virtual Chapter just a couple of weeks ago, “Don’t wait until you need a new job to start networking.”  You should always be working on your professional network.  Some people, especially technical-minded people, get confused by the term networking.  The first image that used to pop into my head was the image of some guy standing, awkwardly, off to the side of a cocktail party, trying to shmooze those around him.  That’s not what I’m talking about.  If you’re good at that sort of thing, and you can strike up a conversation with some stranger and learn all about them in 5 minutes, and walk away with your next business deal all but approved by the lawyers, then congratulations.  But if you’re not, and most of us are not, I have two suggestions for you.  First, register for Don Gabor’s 2-hour session on Tuesday at the Summit called Networking to Build Business Contacts.  Don is a master at small talk, and at teaching others, and in just those two short hours will help you with important tips about breaking the ice, remembering names, and smooth transitions into and out of conversations.  Then go put that great training to work right away at the Tuesday night Welcome Reception and meet some new people; which is really my second suggestion…just meet a few new people.  You see, “networking” is about meeting new people and being friendly without trying to “work it” to get something out of the relationship at this point.  In fact, Don will tell you that a better way to build the connection with someone is to look for some way that you can help them, not how they can help you. There are a ton of opportunities as long as you follow this one key point: Don’t stay in your hotel!  At the least, get out and go to the free events such as the Tuesday night Welcome Reception, the Wednesday night Exhibitor Reception, and the Thursday night Community Appreciation Party.  All three of these are perfect opportunities to meet other professionals with a similar job or interest as you, and you never know how that may help you out in the future.  Maybe you just meet someone to say HI to at breakfast the next day instead of eating alone.  Or maybe you cross paths several times throughout the Summit and compare notes on different sessions you attended.  And you just might make new friends that you look forward to seeing year after year at the Summit.  Who knows, it might even turn out that you have some specific experience that will help out that other person a few months’ from now when they run into the same challenge that you just overcame, or vice-versa.  But the point is, if you don’t get out and meet people, you’ll never have the chance for anything else to happen in the future. 
One more tip for shy attendees of the Summit…if you can’t bring yourself to strike up a conversation with strangers at these events, then at the least, after you sit through a good session that helps you out, go up to the speaker and introduce yourself and thank them for taking the time and effort to put together their presentation.  Ideally, when you do this, tell them WHY it was beneficial to you (e.g. “Now I have a new idea of how to tackle a problem back at the office.”)  I know you think the speakers are all full of confidence and are always receiving a ton of accolades and applause, but you’re wrong.  Most of them will be very happy to hear first-hand that all the work they put into getting ready for their presentation is paying off for somebody. Training With over 170 technical sessions at the Summit, training is what it’s all about, and the training is fantastic!  Of course there are the big-name trainers like Paul Randal, Kimberly Tripp, Kalen Delaney, Itzik Ben-Gan and several others, but I am always impressed by the quality of the training put on by so many other “regular” members of the SQL Server community.  It is amazing how you don’t have to be a published author or otherwise recognized as an “expert” in an area in order to make a big impact on others just by sharing your personal experience and lessons learned.  I would rather hear the story of, and lessons learned from, “some guy or gal” who has actually been through an issue and came out the other side, than I would a trained professor who is speaking just from theory or an intellectual understanding of a topic. In addition to the three full days of regular sessions, there are also two days of pre-conference intensive training available.  There is an extra cost to this, but it is a fantastic opportunity.  Think about it…you’re already coming to this area for training, so why not extend your stay a little bit and get some in-depth training on a particular topic or two?  I did this for the first time last year.  I attended one day of extra training and it was well worth the time and money.  One of the best reasons for it is that I am so busy at home with my regular job and family that it was hard to carve out the time to learn about the topic on my own.  It worked out so well last year that I am doubling up and doing two days of “pre-cons” this year. And then there are the DVDs.  I think these are another great option.  I used the online schedule builder to get ready and have an idea of which sessions I want to attend and when they are (much better than trying to figure this out at the last minute every day).  But the problem that I have run into (seems this happens every year) is that nearly every session block has two different sessions that I would like to attend.  And some of them have three!  ACK!  That won’t work!  What is a guy supposed to do?  Well, one option is to purchase the DVDs, which are recordings of the audio and projected images from each session, so you can continue to attend sessions long after the Summit is officially over.  Yes, many (possibly all) of these also get posted online and attendees can access those for no extra charge, but those are not necessarily all available as quickly as the DVD recordings are, and the DVDs are often more convenient than downloading, especially if you want to share the training with someone who was not able to attend in person. Remember, I don’t make any money or get any other benefit if you buy the DVDs or from anything else that I have recommended here.  
These are just my own thoughts, trying to help out based on my experiences from the 8 or so Summits I have attended.  There is nothing like the Summit.  It is an awesome experience, fantastic training, and a whole lot of fun which is just compounded if you’ll take advantage of the first part of this article and make some new friends along the way.

    Read the article

  • Get Application Title from Windows Phone

    - by psheriff
In a Windows Phone application that I am currently developing I needed to be able to retrieve the Application Title of the phone application. You can set the Deployment Title in the Properties of your Windows Phone Application; however, getting to this value programmatically can be a little tricky. This article assumes that you have Visual Studio 2010 and the Windows Phone tools installed along with it. The Windows Phone tools must be downloaded separately and installed with Visual Studio 2010. You may also download the free Visual Studio 2010 Express for Windows Phone developer environment. The WMAppManifest.xml File First off you need to understand that when you set the Deployment Title in the Properties window of your Windows Phone application, this title actually gets stored into an XML file located under the \Properties folder of your application. This XML file is named WMAppManifest.xml. A portion of this file is shown in the following listing. <?xml version="1.0" encoding="utf-8"?><Deployment xmlns="http://schemas.microsoft.com/windowsphone/2009/deployment" AppPlatformVersion="7.0">  <App xmlns=""       ProductID="{71d20842-9acc-4f2f-b0e0-8ef79842ea53}"       Title="Mobile Time Track"       RuntimeType="Silverlight"       Version="1.0.0.0"       Genre="apps.normal"       Author="PDSA, Inc."       Description="Mobile Time Track"       Publisher="PDSA, Inc."> ... ...  </App></Deployment> Notice the “Title” attribute in the <App> element in the above XML document. This is the value that gets set when you modify the Deployment Title in the Properties Window of your Phone project. The only value you can set from the Properties Window is the Title. All of the other attributes you see here must be set by going into the XML file and modifying them directly. Note that this information duplicates some of the information that you can also set from the Assembly Information… button in the Properties Window. Why Microsoft did not just use that information, I don’t know. Reading Attributes from WMAppManifest I searched all over the namespaces and classes within the Windows Phone DLLs and could not find a way to read the attributes within the <App> element. Thus, I had to resort to good old-fashioned XML processing. First off I created a WinPhoneCommon class and added two static methods as shown in the snippet below: public class WinPhoneCommon{  /// <summary>  /// Returns the Application Title   /// from the WMAppManifest.xml file  /// </summary>  /// <returns>The application title</returns>  public static string GetApplicationTitle()  {    return GetWinPhoneAttribute("Title");  }   /// <summary>  /// Returns the Application Description   /// from the WMAppManifest.xml file  /// </summary>  /// <returns>The application description</returns>  public static string GetApplicationDescription()  {    return GetWinPhoneAttribute("Description");  }   ... GetWinPhoneAttribute method here ...} In your Windows Phone application you can now simply call WinPhoneCommon.GetApplicationTitle() or WinPhoneCommon.GetApplicationDescription() to retrieve the Title or Description properties from the WMAppManifest.xml file respectively. You will notice that each of these methods makes a call to the GetWinPhoneAttribute method. 
This method is shown in the following code snippet: /// <summary>/// Gets an attribute from the Windows Phone WMAppManifest.xml file/// To use this method, add a reference to the System.Xml.Linq DLL/// </summary>/// <param name="attributeName">The attribute to read</param>/// <returns>The Attribute's Value</returns>private static string GetWinPhoneAttribute(string attributeName){  string ret = string.Empty;   try  {    XElement xe = XElement.Load("WMAppManifest.xml");    var attr = (from manifest in xe.Descendants("App")                select manifest).SingleOrDefault();    if (attr != null)      ret = attr.Attribute(attributeName).Value;  }  catch  {    // Ignore errors in case this method is called    // from design time in VS.NET  }   return ret;} I love using the new LINQ to XML classes contained in the System.Xml.Linq.dll. When I did a Bing search the only samples I found for reading attribute information from WMAppManifest.xml used either an XmlReader or XmlReaderSettings objects. These are fine and work, but involve a little extra code. Instead of using these, I added a reference to the System.Xml.Linq.dll, then added two using statements to the top of the WinPhoneCommon class: using System.Linq;using System.Xml.Linq; Now, with just a few lines of LINQ to XML code you can read to the App element and extract the appropriate attribute that you pass into the GetWinPhoneAttribute method. Notice that I added a little bit of exception handling code in this method. I ignore the exception in case you call this method in the Loaded event of a user control. In design-time you cannot access the WMAppManifest file and thus an exception would be thrown. Summary In this article you learned how to retrieve the attributes from the WMAppManifest.xml file. I use this technique to grab information that I would otherwise have to hard-code in my application. Getting the Title or Description for your Windows Phone application is easy with just a little bit of LINQ to XML code. NOTE: You can download the complete sample code at my website. http://www.pdsa.com/downloads. Choose Tips & Tricks, then "Get Application Title from Windows Phone" from the drop-down. Good Luck with your Coding,Paul Sheriff ** SPECIAL OFFER FOR MY BLOG READERS **Visit http://www.pdsa.com/Event/Blog for a free video on Silverlight entitled Silverlight XAML for the Complete Novice - Part 1.  
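As a quick illustration of how these helpers might be consumed (a sketch only, not from the original article – the page and control names are made up), you could set the values from a page's Loaded event:

    // Hypothetical usage sketch. Assumes a PhoneApplicationPage whose XAML
    // contains TextBlocks named ApplicationTitle and ApplicationDescription.
    using System.Windows;
    using Microsoft.Phone.Controls;

    public partial class MainPage : PhoneApplicationPage
    {
        public MainPage()
        {
            InitializeComponent();
            Loaded += MainPage_Loaded;
        }

        private void MainPage_Loaded(object sender, RoutedEventArgs e)
        {
            // At design time GetWinPhoneAttribute swallows the exception and
            // returns an empty string, so these calls are safe here.
            ApplicationTitle.Text = WinPhoneCommon.GetApplicationTitle();
            ApplicationDescription.Text = WinPhoneCommon.GetApplicationDescription();
        }
    }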

    Read the article

  • Automating deployments with the SQL Compare command line

    - by Jonathan Hickford
    In my previous article, “Five Tips to Get Your Organisation Releasing Software Frequently” I looked at how teams can automate processes to speed up release frequency. In this post, I’m looking specifically at automating deployments using the SQL Compare command line. SQL Compare compares SQL Server schemas and deploys the differences. It works very effectively in scenarios where only one deployment target is required – source and target databases are specified, compared, and a change script is automatically generated and applied. But if multiple targets exist, and pressure to increase the frequency of releases builds, this solution quickly becomes unwieldy.   This is where SQL Compare’s command line comes into its own. I’ve put together a PowerShell script that loops through the Servers table and pulls out the server and database, these are then passed to sqlcompare.exe to be used as target parameters. In the example the source database is a scripts folder, a folder structure of scripted-out database objects used by both SQL Source Control and SQL Compare. The script can easily be adapted to use schema snapshots.     -- Create a DeploymentTargets database and a Servers table CREATE DATABASE DeploymentTargets GO USE DeploymentTargets GO CREATE TABLE [dbo].[Servers]( [id] [int] IDENTITY(1,1) NOT NULL, [serverName] [nvarchar](50) NULL, [environment] [nvarchar](50) NULL, [databaseName] [nvarchar](50) NULL, CONSTRAINT [PK_Servers] PRIMARY KEY CLUSTERED ([id] ASC) ) GO -- Now insert your target server and database details INSERT INTO dbo.Servers ( serverName , environment , databaseName) VALUES ( N'myserverinstance' , N'myenvironment1' , N'mydb1') INSERT INTO dbo.Servers ( serverName , environment , databaseName) VALUES ( N'myserverinstance' , N'myenvironment2' , N'mydb2') Here’s the PowerShell script you can adapt for yourself as well. # We're holding the server names and database names that we want to deploy to in a database table. # We need to connect to that server to read these details $serverName = "" $databaseName = "DeploymentTargets" $authentication = "Integrated Security=SSPI" #$authentication = "User Id=xxx;PWD=xxx" # If you are using database authentication instead of Windows authentication. # Path to the scripts folder we want to deploy to the databases $scriptsPath = "SimpleTalk" # Path to SQLCompare.exe $SQLComparePath = "C:\Program Files (x86)\Red Gate\SQL Compare 10\sqlcompare.exe" # Create SQL connection string, and connection $ServerConnectionString = "Data Source=$serverName;Initial Catalog=$databaseName;$authentication" $ServerConnection = new-object system.data.SqlClient.SqlConnection($ServerConnectionString); # Create a Dataset to hold the DataTable $dataSet = new-object "System.Data.DataSet" "ServerList" # Create a query $query = "SET NOCOUNT ON;" $query += "SELECT serverName, environment, databaseName " $query += "FROM dbo.Servers; " # Create a DataAdapter to populate the DataSet with the results $dataAdapter = new-object "System.Data.SqlClient.SqlDataAdapter" ($query, $ServerConnection) $dataAdapter.Fill($dataSet) | Out-Null # Close the connection $ServerConnection.Close() # Populate the DataTable $dataTable = new-object "System.Data.DataTable" "Servers" $dataTable = $dataSet.Tables[0] #For every row in the DataTable $dataTable | FOREACH-OBJECT { "Server Name: $($_.serverName)" "Database Name: $($_.databaseName)" "Environment: $($_.environment)" # Compare the scripts folder to the database and synchronize the database to match # NB. 
Have set SQL Compare to abort on medium level warnings. $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/AbortOnWarnings:Medium") # + @("/sync" ) # Commented out the 'sync' parameter for safety, write-host $arguments & $SQLComparePath $arguments "Exit Code: $LASTEXITCODE" # Some interesting variations # Check that every database matches a folder. # For example this might be a pre-deployment step to validate everything is at the same baseline state. # Or a post deployment script to validate the deployment worked. # An exit code of 0 means the databases are identical. # # $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/Assertidentical") # Generate a report of the difference between the folder and each database. Generate a SQL update script for each database. # For example use this after the above to generate upgrade scripts for each database # Examine the warnings and the HTML diff report to understand how the script will change objects # #$arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/ScriptFile:update_$($_.environment+"_"+$_.databaseName).sql", "/report:update_$($_.environment+"_"+$_.databaseName).html" , "/reportType:Interactive", "/showWarnings", "/include:Identical") } It’s worth noting that the above example generates the deployment scripts dynamically. This approach should be problem-free for the vast majority of changes, but it is still good practice to review and test a pre-generated deployment script prior to deployment. An alternative approach would be to pre-generate a single deployment script using SQL Compare, and run this en masse to multiple targets programmatically using sqlcmd, or using a tool like SQL Multi Script.  You can use the /ScriptFile, /report, and /showWarnings flags to generate change scripts, difference reports and any warnings.  See the commented out example in the PowerShell: #$arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/ScriptFile:update_$($_.environment+"_"+$_.databaseName).sql", "/report:update_$($_.environment+"_"+$_.databaseName).html" , "/reportType:Interactive", "/showWarnings", "/include:Identical") There is a drawback to running a pre-generated deployment script; it assumes that a given database target hasn’t drifted from its expected state. Often there are (rightly or wrongly) many individuals within an organization who have permissions to alter the production database, and changes can therefore be made outside of the prescribed development processes. The consequence is that at deployment time, the applied script has been validated against a target that no longer represents reality. The solution here would be to add a check for drift prior to running the deployment script. This is achieved by using sqlcompare.exe to compare the target against the expected schema snapshot using the /Assertidentical flag. Should this return any differences (sqlcompare.exe Exit Code 79), a drift report is output instead of executing the deployment script.  See the commented out example. # $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/Assertidentical") Any checks and processes that should be undertaken prior to a manual deployment should also happen during an automated deployment. 
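To make that drift check concrete, here is a rough PowerShell sketch (illustrative only, reusing the variables from the script above) of how the /Assertidentical result could gate the deployment:

    # Illustrative sketch: run the drift check first, then only deploy if the
    # target still matches the expected state. Reuses $SQLComparePath,
    # $scriptsPath and the current $_ row from the loop above.
    $checkArgs = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/Assertidentical")
    & $SQLComparePath $checkArgs
    if ($LASTEXITCODE -eq 0) {
        # Exit code 0 - no drift, safe to apply the pre-generated script
        write-host "No drift detected for $($_.databaseName); deploying..."
    }
    elseif ($LASTEXITCODE -eq 79) {
        # Exit code 79 - differences found, report instead of deploying
        write-host "Drift detected for $($_.databaseName); skipping deployment."
    }
    else {
        write-host "Unexpected SQL Compare exit code $LASTEXITCODE; investigate before deploying."
    }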
You might think about triggering backups prior to deployment – even better, automate the verification of the backup too.   You can use SQL Compare’s command line interface along with PowerShell to automate multiple actions and checks that you need in your deployment process. Automation is a practical solution where multiple targets and a higher release cadence come into play. As we know, with great power comes great responsibility – responsibility to ensure that the necessary checks are made so deployments remain trouble-free.  (The code sample supplied in this post automates the simple dynamic deployment case – if you are considering more advanced automation, e.g. the drift checks, script generation, deploying to large numbers of targets and backup/verification, please email me at [email protected] for further script samples or if you have further questions)

    Read the article

  • Inventory Management concepts in XNA game

    - by user1332755
    I am trying to code the inventory system in my first real game so I have very little experience in both c# and game engine development. Basically, I need some general guidance and tips with how to structure and organize these sorts of systems. Please tell me if I am on the right track or not before I get too deep into making some badly structured system. It's fine if you don't feel like looking through my code, suggestions about general structure would also be appreciated. What I am aiming to end up with is some sort of system like Minecraft or Terraria. It must include: main inventory GUI (items can be dragged and placed in whatever slot desired Itembar outside of the main inventory which can be assigned to certain items the ability to use items from either location So far, I have 4 main classes: Inventory holds the general info and methods, inventoryslot holds info for individual slots, Itembar holds all info and methods for itself, and finally, ItemManager to manage interactions between the two and hold a master list of items. So far, my itembar works perfectly and interacts well with mousedragging items into and out of it as well as activating the item effect. Here is the code I have so far: (there is a lot but I will try to keep it relevant) This is the code for the itembar on the main screen: class Itembar { public Texture2D itembarfull, iSelected; public static Rectangle itembar = new Rectangle(5, 218, 40, 391); public Rectangle box1 = new Rectangle(itembar.X, 218, 40, 40); //up to 10 Rectangles for each slot public int Selected = 0; private ItemManager manager; public Itembar(Texture2D texture, Texture2D texture3, ItemManager mann) { itembarfull = texture; iSelected = texture3; manager = mann; } public void Update(GameTime gametime) { } public void Draw(SpriteBatch spriteBatch) { spriteBatch.Draw( itembarfull, new Vector2 (itembar.X, itembar.Y), null, Color.White, 0.0f, Vector2.Zero, 1.0f, SpriteEffects.None, 1.0f); if (Selected == 1) spriteBatch.Draw(iSelected, new Rectangle(box1.X-3, box1.Y-3, box1.Width+6, box1.Height+6), Color.White); //goes up to 10 slots } public int Box1Query() { foreach (Item item in manager.items) { if(box1.Contains(item.BoundingBox)) return manager.items.IndexOf(item); } return 999; } //10 different box queries It is working fine right now. I just put an Item in there and the box will query things like the item's effects, stack number, consumable or not etc...This one is basically almost complete. Here is the main inventory class: class Inventory { public bool isActive; public List<Rectangle> mainSlots = new List<Rectangle>(24); public List<InventorySlot> mainSlotscheck = new List<InventorySlot>(24); public static Rectangle inv = new Rectangle(841, 469, 156, 231); public Rectangle invfull = new Rectangle(inv.X, inv.Y, inv.Width, inv.Height); public Rectangle inv1 = new Rectangle(inv.X + 4, inv.Y +3, 32, 32); //goes up to inv24 resulting in a 6x4 grid of Rectangles public Inventory() { mainSlots.Add(inv1); mainSlots.Add(inv2); mainSlots.Add(inv3); mainSlots.Add(inv4); //goes up to 24 foreach (Rectangle slot in mainSlots) mainSlotscheck.Add(new InventorySlot(slot)); } //update and draw methods are empty because im not too sure what to put there public int LookforfreeSlot() { int slotnumber = 999; for (int x = 0; x < mainSlots.Count; x++) { if (mainSlotscheck[x].isFree) { slotnumber = x; break; } } return slotnumber; } } } LookforFreeSlot() method is meant to be called when I do AddtoInventory(). 
I'm kinda stumped about what other things I need to put in this class. Here is the inventorySlot class: (its main purpose is to check the bool "isFree" to see whether or not something already occupies the slot. But i guess it can also do other stuff like get item info.) class InventorySlot { public int X, Y; public int Width = 32, Height = 32; public Vector2 Position; public int slotnumber; public bool free = true; public int? content = null; public bool isFree { get { return free; } set { free = value; } } public InventorySlot(Rectangle slot) { slot = new Rectangle(X, Y, Width, Height); } } } Finally, here is the ItemManager (I am omitting the master list because it is too long) class ItemManager { public List<Item> items = new List<Item>(20); public List<Item> inventory1 = new List<Item>(24); public List<Item> inventory2 = new List<Item>(24); public List<Item> inventory3 = new List<Item>(24); public List<Item> inventory4 = new List<Item>(24); public Texture2D icon, filta; private Rectangle msRect; MouseState mouseState; public int ISelectedIndex; Inventory inventory; SpriteFont font; public void GenerateItems() { items.Add(new Item(new Rectangle(0, 0, 32, 32), icon, font)); items[0].name = "Grass Chip"; items[0].itemID = 0; items[0].consumable = true; items[0].stackable = true; items[0].maxStack = 99; items.Add(new Item(new Rectangle(32, 0, 32, 32), icon, font)); //master list continues. it will generate all items in the game; } public ItemManager(Inventory inv, Texture2D itemsheet, Rectangle mouseRectt, MouseState ms, Texture2D fil, SpriteFont f) { icon = itemsheet; msRect = mouseRectt; filta = fil; mouseState = ms; inventory = inv; font = f; } //once again, no update or draw public void mousedrag() { items[0].DestinationRect = new Rectangle (msRect.X, msRect.Y, 32, 32); items[0].dragging = true; } public void AddtoInventory(Item item) { int index = inventory.LookforfreeSlot(); if (index == 999) return; item.DestinationRect = inventory.mainSlots[index]; inventory.mainSlotscheck[index].content = item.itemID; inventory.mainSlotscheck[index].isFree = false; item.IsActive = true; } } } The mousedrag works pretty well. AddtoInventory doesn't work because LookforfreeSlot doesn't work. Relevant code from the main program: When I want to add something to the main inventory, I do something like this: foreach (Particle ether in ether1.ethers) { if (ether.isCollected) itemmanager.AddtoInventory(itemmanager.items[14]); } This turned out to be much longer than I had expected :( But I hope someone is interested enough to comment.
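One rough sketch (not from the original post – names reused from the question, untested) of how InventorySlot could hold on to the rectangle it is given, so that LookforfreeSlot() and AddtoInventory() have real data to work with:

    // Illustrative sketch only: the constructor stores the rectangle it
    // receives instead of discarding it, and isFree is derived from content.
    using Microsoft.Xna.Framework;

    class InventorySlot
    {
        public Rectangle Bounds;        // on-screen area of this slot
        public int? content = null;     // itemID currently stored, if any

        public bool isFree
        {
            get { return content == null; }
        }

        public InventorySlot(Rectangle slot)
        {
            Bounds = slot;              // keep the slot rectangle for later hit-testing
        }
    }

With something like that in place, Inventory.LookforfreeSlot() can simply return the index of the first entry in mainSlotscheck whose isFree is true, and AddtoInventory() can set content on that slot.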

    Read the article

  • 5 Best Practices - Laying the Foundation for WebCenter Projects

    - by Kellsey Ruppel
    Today’s guest post comes from Oracle WebCenter expert John Brunswick. John specializes in enterprise portal and content management solutions and actively contributes to the enterprise software business community and has authored a series of articles about optimal business involvement in portal, business process management and SOA development, examining ways of helping organizations move away from monolithic application development. We’re happy to have John join us today! Maximizing success with Oracle WebCenter portal requires a strategic understanding of Oracle WebCenter capabilities.  The following best practices enable the creation of portal solutions with minimal resource overhead, while offering the greatest flexibility for progressive elaboration. They are inherently project agnostic, enabling a strong foundation for future growth and an expedient return on your investment in the platform.  If you are able to embrace even only a few of these practices, you will materially improve your deployment capability with WebCenter. 1. Segment Duties Around 3Cs - Content, Collaboration and Contextual Data "Agility" is one of the most common business benefits touted by modern web platforms.  It sounds good - who doesn't want to be Agile, right?  How exactly IT organizations go about supplying agility to their business counterparts often lacks definition - hamstrung by ambiguity. Ultimately, businesses want to benefit from reduced development time to deliver a solution to a particular constituent, which is augmented by as much self-service as possible to develop and manage the solution directly. All done in the absence of direct IT involvement. With Oracle WebCenter's depth in the areas of content management, pallet of native collaborative services, enterprise mashup capability and delegated administration, it is very possible to execute on this business vision at a technical level. To realize the benefits of the platform depth we can think of Oracle WebCenter's segmentation of duties along the lines of the 3 Cs - Content, Collaboration and Contextual Data.  All three of which can have their foundations developed by IT, then provisioned to the business on a per role basis. Content – Oracle WebCenter benefits from an extremely mature content repository.  Work flow, audit, notification, office integration and conversion capabilities for documents (HTML & PDF) make this a haven for business users to take control of content within external and internal portals, custom applications and web sites.  When deploying WebCenter portal take time to think of areas in which IT can provide the "harness" for content to reside, then allow the business to manage any content items within the site, using the content foundation to ensure compliance with business rules and process.  This frees IT to work on more mission critical challenges and allows the business to respond in short order to emerging market needs. Collaboration – Native collaborative services and WebCenter spaces are a perfect match for business users who are looking to enable document sharing, discussions and social networking.  The ability to deploy the services is granular and on the basis of roles scoped to given areas of the system - much like the first C “content”.  This enables business analysts to design the roles required and IT to provision with peace of mind that users leveraging the collaborative services are only able to do so in explicitly designated areas of a site. 
Bottom line - business will not need to wait for IT, but cannot go outside of the scope that has been defined based on their roles. Contextual Data – Collaborative capabilities are most powerful when included within the context of business data.  The ability to supply business users with decision-shaping data that they can include in various parts of a portal or portals, just as they would with content items, is one of the most powerful aspects of Oracle WebCenter.  Imagine a discussion about new store selection for a retail chain that re-purposes existing information from business intelligence services about various potential locations and/or custom backend systems - presenting it directly in the context of the discussion.  If there are some data sources that are preexisting in your enterprise, take a look at how they can be made into discrete offerings within the portal, then scoped to given business user roles for inclusion within collaborative activities. 2. Think Generically, Execute Specifically Constructs.  Anyone who has spent much time around me knows that I am obsessed with this word.  Why? Because Constructs offer immense power - more than APIs, Web Services or other technical capability. Constructs offer organizations the ability to leverage a platform's native characteristics to offer substantial business functionality - without writing code.  This concept becomes more powerful as an organization's understanding of the platform grows over time.  Let's take a look at an example of where an Oracle WebCenter construct can substantially reduce the time to get a subscription-based site out the door and into the hands of the end consumer. Imagine a site that allows members to subscribe to specific disciplines to access information and application data around each discipline.  A space is a collection of secured pages within Oracle WebCenter.  Spaces are not only secured, but content stored within them is also automatically scoped to that space. Taking this a step further, Oracle WebCenter’s Activity Stream surfaces events, discussions and other activities that are scoped to the given user on the basis of their space affiliations.  In order to have a portal that would allow users to "subscribe" to information around various disciplines, spaces could be used out of the box to achieve this capability, without any APIs or low-level technical work. 3. Make Governance Work for You Imagine driving down the street without the painted lines on the road.  The rules of the road are so ingrained in our minds, we often do not think about the process, but seemingly mundane lane markers are critical enablers. Lane markers allow us to travel at speeds that would be impossible if not for the agreed upon direction of flow. Additionally and more importantly, they allow people to act autonomously - going where they please at any given time. The return on the investment for mobility is high enough for people to buy into globally agreed-upon governance processes. In Oracle WebCenter we can use enablers similar to lane markers.  Our goal should be to enable the flow of information and provide end users with the ability to arrive at business solutions as needed, not on the basis of cumbersome processes that cannot meet the business needs in a timely fashion. How do we do this? 
Just as with "Segmentation of Duties", Oracle WebCenter technologies offer the opportunity to compartmentalize various business initiatives from each other within the system due to constructs and security that are available to use within the platform. For instance, when a WebCenter space is created, any content added within that space by default will be secured to that particular space and inherits metadata that is associated with a folder created for the space. Oracle WebCenter content uses metadata to support a broad range of rich ECM functionality and can automatically impart retention, workflow and other policies on the basis of what has been defaulted for that space. Depending on your business needs, this paradigm will also extend to subsections of a space, offering some interesting possibilities to enable automated management around content. An example may be press releases within a particular area of an extranet that require a five-year retention period and need to be reviewed by marketing and legal before release.  The underlying content system will transparently take care of this process on the basis of the above rules, enabling peace of mind over unstructured data - which could otherwise become overwhelming. 4. Make Your First Project Your Second Imagine if Michael Phelps was competing in a swimming championship, but was told right before his race that he had to use a brand new stroke.  There is no doubt that Michael is an outstanding swimmer, but chances are that he would like to have some time to get acquainted with the new stroke. New technologies should not be treated any differently.  Before jumping into the deep end it helps to take time to get to know the new approach - even though you may have been swimming thousands of times before. To quickly get a handle on Oracle WebCenter capabilities it can be helpful to deploy a sandbox for the team to use to share project documents, discussions and announcements in an effort to help the actual deployment get under way, while increasing everyone’s knowledge of the platform and its functionality that may be helpful down the road. Oracle Technology Network has made a pre-configured virtual machine available for download that can be a great starting point for this exercise. 5. Get to Know the Community If you are reading this blog post you have most certainly faced a software decision or challenge that was solved on the basis of a small piece of missing critical information - which took substantial research to discover.  Chances were also good that somewhere, someone had already come across this information and would have been excited to share it. There is no denying the power of passionate, connected users, sharing key tips around technology.  The Oracle WebCenter brand has a rich heritage that includes industry-leading technology and practitioners.  With the new Oracle WebCenter brand, opportunities to connect with these experts have become easier.
Oracle WebCenter Blog
Oracle Social Enterprise LinkedIn WebCenter Group
Oracle WebCenter Twitter
Oracle WebCenter Facebook
Oracle User Groups
Additionally, there are various Oracle WebCenter related blogs by an excellent grouping of services partners.

    Read the article

  • Oracle Identity Manager Role Management With API

    - by mustafakaya
As an administrator, you use roles to create and manage the records of a collection of users to whom you want to permit access to common functionality, such as access rights, roles, or permissions. Roles can be independent of an organization, span multiple organizations, or contain users from a single organization. Using roles, you can:
View the menu items that the users can access through the Oracle Identity Manager Administration Web interface.
Assign users to roles.
Assign a role to a parent role.
Designate status to the users so that they can specify defined responses for process tasks.
Modify permissions on data objects.
Designate role administrators to perform actions on roles, such as enabling members of another role to assign users to the current role, revoke members from the current role, and so on.
Designate provisioning policies for a role. These policies determine if a resource object is to be provisioned to or requested for a member of the role.
Assign or remove membership rules to or from the role. These rules determine which users can be assigned/removed as direct membership to/from the role.
In this post, I will share some examples of role management with the Oracle Identity Manager API.  For role operations you can use the Thor.API.Operations.tcGroupOperationsIntf interface. tcGroupOperationsIntf service =  getClient().getService(tcGroupOperationsIntf.class);     Assign a user to a role:    public void assignRoleByUsrKey(String roleName, String usrKey) throws Exception {         Map<String, String> filter = new HashMap<String, String>();         filter.put("Groups.Role Name", roleName);         tcResultSet role = service.findGroups(filter);         String groupKey = role.getStringValue("Groups.Key");         service.addMemberUser(Long.parseLong(groupKey), Long.parseLong(usrKey));     }  Revoke a user from a role:     public void revokeRoleByUsrKey(String roleName, String usrKey) throws Exception {         Map<String, String> filter = new HashMap<String, String>();         filter.put("Groups.Role Name", roleName);         tcResultSet role = service.findGroups(filter);         String groupKey = role.getStringValue("Groups.Key");         service.removeMemberUser(Long.parseLong(groupKey), Long.parseLong(usrKey));     } Get all members of a role:      public List<User> getRoleMembers(String roleName) throws Exception {         List<User> userList = new ArrayList<User>();         Map<String, String> filter = new HashMap<String, String>();         filter.put("Groups.Role Name", roleName);         tcResultSet role = service.findGroups(filter);       String groupKey = role.getStringValue("Groups.Key");         tcResultSet members = service.getAllMemberUsers(Long.parseLong(groupKey));         for (int i = 0; i < members.getRowCount(); i++) {                 members.goToRow(i);                 long userKey = members.getLongValue("Users.Key");                 User member = oimUserManager.findUserByUserKey(String.valueOf(userKey));                 userList.add(member);         }        return userList;     } About me: Mustafa Kaya is a Senior Consultant in the Oracle Fusion Middleware Team, living in Istanbul. Before coming to Oracle, he worked in teams developing web applications and backend services at a telco company. He is a Java technology enthusiast and software engineer, addicted to learning new technologies and developing new ideas. Follow Mustafa on Twitter, connect on LinkedIn, and visit his site for Oracle Fusion Middleware related tips.
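As a short usage sketch (illustrative only – the RoleHelper wrapper and the role/user values below are made up), the helper methods above could be wired to a service instance like this:

    // Illustrative sketch. Assumes an already-authenticated
    // oracle.iam.platform.OIMClient and that the methods shown above live in a
    // hypothetical RoleHelper class holding the tcGroupOperationsIntf reference.
    import Thor.API.Operations.tcGroupOperationsIntf;
    import oracle.iam.platform.OIMClient;

    public class RoleExample {
        public static void run(OIMClient oimClient) throws Exception {
            tcGroupOperationsIntf service =
                    oimClient.getService(tcGroupOperationsIntf.class);

            RoleHelper helper = new RoleHelper(service);         // hypothetical wrapper
            helper.assignRoleByUsrKey("HR Managers", "42");       // role name, user key
            System.out.println(helper.getRoleMembers("HR Managers").size() + " member(s)");
            helper.revokeRoleByUsrKey("HR Managers", "42");
        }
    }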

    Read the article

  • Big GRC: Turning Data into Actionable GRC Intelligence

    - by Jenna Danko
    While it’s no longer headline news that Governments have carried out large scale data-mining programmes aimed at terrorism detection and identifying other patterns of interest across a wide range of digital data sources, the debate over the ethics and justification over this action, will clearly continue for some time to come. What is becoming clear is that these programmes are a framework for the collation and aggregation of massive amounts of unstructured data and from this, the creation of actionable intelligence from analyses that allowed the analysts to explore and extract a variety of patterns and then direct resources. This data included audio and video chats, phone calls, photographs, e-mails, documents, internet searches, social media posts and mobile phone logs and connections. Although Governance, Risk and Compliance (GRC) professionals are not looking at the implementation of such programmes, there are many similar GRC “Big data” challenges to be faced and potential lessons to be learned from these high profile government programmes that can be applied a lot closer to home. For example, how can GRC professionals collect, manage and analyze an enormous and disparate volume of data to create and manage their own actionable intelligence covering hidden signs and patterns of criminal activity, the early or retrospective, violation of regulations/laws/corporate policies and procedures, emerging risks and weakening controls etc. Not exactly the stuff of James Bond to be sure, but it is certainly more applicable to most GRC professional’s day to day challenges. So what is Big Data and how can it benefit the GRC process? Although it often varies, the definition of Big Data largely refers to the following types of data: Traditional Enterprise Data – includes customer information from CRM systems, transactional ERP data, web store transactions, and general ledger data. Machine-Generated /Sensor Data – includes Call Detail Records (“CDR”), weblogs and trading systems data. Social Data – includes customer feedback streams, micro-blogging sites like Twitter, and social media platforms like Facebook. The McKinsey Global Institute estimates that data volume is growing 40% per year, and will grow 44x between 2009 and 2020. But while it’s often the most visible parameter, volume of data is not the only characteristic that matters. In fact, according to sources such as Forrester there are four key characteristics that define big data: Volume. Machine-generated data is produced in much larger quantities than non-traditional data. This is all the data generated by IT systems that power the enterprise. This includes live data from packaged and custom applications – for example, app servers, Web servers, databases, networks, virtual machines, telecom equipment, and much more. Velocity. Social media data streams – while not as massive as machine-generated data – produce a large influx of opinions and relationships valuable to customer relationship management as well as offering early insight into potential reputational risk issues. Even at 140 characters per tweet, the high velocity (or frequency) of Twitter data ensures large volumes (over 8 TB per day) need to be managed. Variety. Traditional data formats tend to be relatively well defined by a data schema and change slowly. In contrast, non-traditional data formats exhibit a dizzying rate of change. 
Without question, all GRC professionals work in a dynamic environment and as new services, new products, new business lines are added or new marketing campaigns executed for example, new data types are needed to capture the resultant information.  Value. The economic value of data varies significantly. Typically, there is good information hidden amongst a larger body of non-traditional data that GRC professionals can use to add real value to the organisation; the greater challenge is identifying what is valuable and then transforming and extracting that data for analysis and action. For example, customer service calls and emails have millions of useful data points and have long been a source of information to GRC professionals. Those calls and emails are critical in helping GRC professionals better identify hidden patterns and implement new policies that can reduce the amount of customer complaints.   Now on a scale and depth far beyond those in place today, all that unstructured call and email data can be captured, stored and analyzed to reveal the reasons for the contact, perhaps with the aggregated customer results cross referenced against what is being said about the organization or a similar peer organization on social media. The organization can then take positive actions, communicating to the market in advance of issues reaching the press, strengthening controls, adjusting risk profiles, changing policy and procedures and completely minimizing, if not eliminating, complaints and compensation for that specific reason in the future. In this one example of many similar ones, the GRC team(s) has demonstrated real and tangible business value. Big Challenges - Big Opportunities As pointed out by recent Forrester research, high performing companies (those that are growing 15% or more year-on-year compared to their peers) are taking a selective approach to investing in Big Data.  "Tomorrow's winners understand this, and they are making selective investments aimed at specific opportunities with tangible benefits where big data offers a more economical solution to meet a need." (Forrsights Strategy Spotlight: Business Intelligence and Big Data, Q4 2012) As pointed out earlier, with the ever increasing volume of regulatory demands and fines for getting it wrong, limited resource availability and out of date or inadequate GRC systems all contributing to a higher cost of compliance and/or higher risk profile than desired – a big data investment in GRC clearly falls into this category. However, to make the most of big data organizations must evolve both their business and IT procedures, processes, people and infrastructures to handle these new high-volume, high-velocity, high-variety sources of data and be able integrate them with the pre-existing company data to be analyzed. GRC big data clearly allows the organization access to and management over a huge amount of often very sensitive information that although can help create a more risk intelligent organization, also presents numerous data governance challenges, including regulatory compliance and information security. In addition to client and regulatory demands over better information security and data protection the sheer amount of information organizations deal with the need to quickly access, classify, protect and manage that information can quickly become a key issue  from a legal, as well as technical or operational standpoint. 
However, by making information governance processes a bigger part of everyday operations, organizations can make sure data remains readily available and protected. The Right GRC & Big Data Partnership Becomes Key  The "getting it right first time" mantra used in so many companies remains essential for any GRC team that is sponsoring, helping kick start, or even overseeing a big data project. To make a big data GRC initiative work and get the desired value, partnerships with companies that have a long history of success in delivering GRC solutions, as well as being at the very forefront of technology innovation, become key. Clearly solutions can be built in-house more cheaply than through a vendor, but as has been proven time and time again, when it comes to self-built solutions covering AML and fraud, for example, few have been able to scale or adapt appropriately to meet the changing regulations or challenges that the GRC teams face on a daily basis. This has led to the creation of GRC silos that are causing so many headaches today. The solutions that stand out and should be explored are the ones that can seamlessly merge the traditional world of well-known data, analytics and visualization with the new world of seemingly innumerable data sources, utilizing Big Data technologies to generate new GRC insights right across the enterprise. Ultimately, Big Data is here to stay, and organizations that embrace its potential and outline a viable strategy, as well as understand and build a solid analytical foundation, will be the ones that are well positioned to make the most of it. A Blueprint and Roadmap Service for Big Data Big data adoption is first and foremost a business decision. As such it is essential that your partner can align your strategies, goals, and objectives with an architecture vision and roadmap to accelerate adoption of big data for your environment, as well as establish practical, effective governance that will maintain a well-managed environment going forward. Key Activities: While your initiatives will clearly vary, there are some generic starting points the team and organization will need to complete:
Clearly define your drivers, strategies, goals, objectives and requirements as they relate to big data
Conduct a big data readiness and Information Architecture maturity assessment
Develop future state big data architecture, including views across all relevant architecture domains: business, applications, information, and technology
Provide initial guidance on big data candidate selection for migrations or implementation
Develop a strategic roadmap and implementation plan that reflects a prioritization of initiatives based on business impact and technology dependency, and an incremental integration approach for evolving your current state to the target future state in a manner that represents the least amount of risk and impact of change on the business
Provide recommendations for practical, effective Data Governance, Data Quality Management, and Information Lifecycle Management to maintain a well-managed environment
Conduct an executive workshop with recommendations and next steps
There is little debate that managing risk and data are the two biggest obstacles encountered by financial institutions.  
Big data is here to stay and risk management certainly is not going anywhere, and ultimately financial services industry organizations that embrace its potential and outline a viable strategy, as well as understand and build a solid analytical foundation, will be best positioned to make the most of it. Matthew Long is a Financial Crime Specialist for Oracle Financial Services. He can be reached at matthew.long AT oracle.com.

    Read the article

  • How to Identify Which Hardware Component is Failing in Your Computer

    - by Chris Hoffman
Concluding that your computer has a hardware problem is just the first step. If you’re dealing with a hardware issue and not a software issue, the next step is determining what hardware problem you’re actually dealing with. If you purchased a laptop or pre-built desktop PC and it’s still under warranty, you don’t need to care about this. Have the manufacturer fix the PC for you — figuring it out is their problem. If you’ve built your own PC or you want to fix a computer that’s out of warranty, this is something you’ll need to do on your own. Blue Screen 101: Search for the Error Message This may seem like obvious advice, but searching for information about a blue screen’s error message can help immensely. Most blue screens of death you’ll encounter on modern versions of Windows will likely be caused by hardware failures. The blue screen of death often displays information about the driver that crashed or the type of error it encountered. For example, let’s say you encounter a blue screen that identifies “NV4_disp.dll” as the driver that caused the blue screen. A quick Google search will reveal that this is the driver for NVIDIA graphics cards, so you now have somewhere to start. It’s possible that your graphics card is failing if you encounter such an error message. Check Hard Drive SMART Status Hard drives have a built-in S.M.A.R.T. (Self-Monitoring, Analysis, and Reporting Technology) feature. The idea is that the hard drive monitors itself and will notice if it starts to fail, providing you with some advance notice before the drive fails completely. This isn’t perfect, so your hard drive may fail even if SMART says everything is okay. If you see any sort of “SMART error” message, your hard drive is failing. You can use SMART analysis tools to view the SMART health status information your hard drives are reporting. Test Your RAM RAM failure can result in a variety of problems. If the computer writes data to RAM and the RAM returns different data because it’s malfunctioning, you may see application crashes, blue screens, and file system corruption. To test your memory and see if it’s working properly, use Windows’ built-in Memory Diagnostic tool. The Memory Diagnostic tool will write data to every sector of your RAM and read it back afterwards, ensuring that all your RAM is working properly. Check Heat Levels How hot is it inside your computer? Overheating can result in blue screens, crashes, and abrupt shutdowns. Your computer may be overheating because you’re in a very hot location, it’s ventilated poorly, a fan has stopped inside your computer, or it’s full of dust. Your computer monitors its own internal temperatures and you can access this information. It’s generally available in your computer’s BIOS, but you can also view it with system information utilities such as SpeedFan or Speccy. Check your computer’s recommended temperature level and ensure it’s within the appropriate range. If your computer is overheating, you may see problems only when you’re doing something demanding, such as playing a game that stresses your CPU and graphics card. Be sure to keep an eye on how hot your computer gets when it performs these demanding tasks, not only when it’s idle. Stress Test Your CPU You can use a utility like Prime95 to stress test your CPU. Such a utility will force your computer’s CPU to perform calculations without allowing it to rest, working it hard and generating heat. If your CPU is becoming too hot, you’ll start to see errors or system crashes. 
Overclockers use Prime95 to stress test their overclock settings — if Prime95 experiences errors, they throttle back on their overclocks to ensure the CPU runs cooler and more stable. It’s a good way to check if your CPU is stable under load. Stress Test Your Graphics Card Your graphics card can also be stress tested. For example, if your graphics driver crashes while playing games, the games themselves crash, or you see odd graphical corruption, you can run a graphics benchmark utility like 3DMark. The benchmark will stress your graphics card and, if it’s overheating or failing under load, you’ll see graphical problems, crashes, or blue screens while running the benchmark. If the benchmark seems to work fine but you have issues playing a certain game, it may just be a problem with that game. Swap it Out Not every hardware problem is easy to diagnose. If you have a bad motherboard or power supply, their problems may only manifest through occasional odd issues with other components. It’s hard to tell if these components are causing problems unless you replace them completely. Ultimately, the best way to determine whether a component is faulty is to swap it out. For example, if you think your graphics card may be causing your computer to blue screen, pull the graphics card out of your computer and swap in a new graphics card. If everything is working well, it’s likely that your previous graphics card was bad. This isn’t easy for people who don’t have boxes of components sitting around, but it’s the ideal way to troubleshoot. Troubleshooting is all about trial and error, and swapping components out allows you to pin down which component is actually causing the problem through a process of elimination. This isn’t a complete guide to everything that could likely go wrong and how to identify it — someone could write a full textbook on identifying failing components and still not cover everything. But the tips above should give you some places to start dealing with the more common problems. Image Credit: Justin Marty on Flickr     
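If you want a quick, scriptable starting point for a couple of the checks above, the following Windows commands can help (a rough illustration only – output formats vary by machine and drive):

    :: Query each drive's overall status and its SMART failure-prediction flag.
    :: A Status other than "OK", or PredictFailure = TRUE, suggests a failing drive.
    wmic diskdrive get model,status
    wmic /namespace:\\root\wmi path MSStorageDriver_FailurePredictStatus get PredictFailure

    :: Launch the built-in Windows Memory Diagnostic (it will offer to reboot and test RAM)
    mdsched.exe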

    Read the article

  • Not attending the LUGM mini-meetup - 05. Oct 2013

    Not attending a meeting of the LUGM can be fun, too. It's getting a bit of a habit that Ish is organising small gatherings, aka mini-meetups, of the Linux User Group Mauritius/Meta (LUGM) almost every Saturday. There they mainly discuss and talk about various elements of using Linux as one's main operating system and the possibilities you are going to have. On top of that, of course, some tips & tricks about mastering the command line and initial steps in scripting or even writing HTML. In general, it sounds like a good portion of fun and a great spirit of community. Unfortunately, I'm usually quite busy with private and family matters during the weekend and so I had already signalled that I wouldn't be around. Well, at least not physically... But this Saturday a couple of things worked out faster than expected and so I was hanging out on my machine. I made virtual contact with one of Pawan's messages over on Facebook... And somehow that kicked off some kind of online fun around the basic configuration of Apache HTTPd 2.2.x, PHP 5.x and how to improve the overall performance of a newly installed blog based on WordPress. Default configuration files Nitin's website finally came alive and despite the dark theme and the hidden Apple 'fanboy' advertisement I was more interested in the technical situation. As with any new installation there is usually quite some adjustment to be done. And Nitin's page was no exception. Unfortunately, out of the box installations of Apache httpd and PHP are too verbose and expose too much information under the hood. You might think that this isn't really a problem at all; well, think about it again after completely reading this article. First, I checked the HTTP response headers - using either Chrome Developer Tools or Firefox Web Developer extension - of Nitin's page and based on that I advised him to lower the noise levels a little bit. It's not really necessary that detailed information about the web server software and scripting language has to be published in every response made. Quite a number of script kiddies and exploits actually check for version specifics prior to an attack. So, removing at least the version details hardens the system a little bit. In particular, I'm talking about these response values: Server X-Powered-By How to achieve that? By tweaking the configuration files... Namely, we are going to look into the following ones: apache2.conf httpd.conf .htaccess php.ini The above list contains some additional files that I talk about in the next paragraphs. Anyway, those are the ones involved. Tweaking Apache Open your favourite text editor and start to modify the apache2.conf. Beforehand, you might like to have a quick peek at the file to see whether it is necessary to adjust it or not. Following is a handy combination of commands to get an overview of your active directives: # sudo grep -v '#' /etc/apache2/apache2.conf | grep -v '^$' | less There, keep an eye on these two Apache directives:
ServerSignature Off
ServerTokens Prod
If that's not the case, change them as highlighted above. In order to activate your modifications you have to restart the Apache httpd server. On Debian and Ubuntu you might use apache2ctl for that; on other distributions you might have to use service or run the init-scripts again:
# sudo apache2ctl configtest
Syntax OK
# sudo apache2ctl restart
Refresh your website and check the HTTP response header. 
Tweaking PHP5 (a little bit) Next, check your php.ini file with the following statement: # sudo grep -v ';' /etc/php5/apache2/php.ini | grep -v '^$' | less And check the value of expose_php = Off Again, if it's not as highlighted, change it... Some more Apache love Okay, back to Apache. It might also be interesting to improve the situation regarding browser caching and to remove more obsolete information. When you run your website against the usual performance checks like Google Page Speed and Yahoo YSlow you might see those check points with bad grades on a standard, default configuration. Well, this can be fixed easily. Configure entity tags (ETags) ETags are only interesting when you run your websites on a farm of multiple web servers. Removing this data for your static resources is very simple in Apache. As we are going to deal with the HTTP response header information you have to ensure that Apache is capable of manipulating it. First, check your enabled modules: # sudo ls -al /etc/apache2/mods-enabled/ | grep headers And in case that the 'headers' module is not listed, you have to enable it from the available ones: # sudo a2enmod headers Second, check your httpd.conf file (in case it exists): # sudo grep -v '#' /etc/apache2/httpd.conf | grep -v '^$' | less In newer (better said: fresh) installations you might have to create a new configuration file below your conf.d folder with your favourite text editor like so: # sudo nano /etc/apache2/conf.d/headers.conf Then, in order to tweak your HTTP responses, either check for those lines or add them:
Header unset ETag
FileETag None
In case that your file doesn't exist or those lines are missing, feel free to create/add them. Afterwards, check your Apache configuration syntax and restart your running instances as already shown above:
# sudo apache2ctl configtest
Syntax OK
# sudo apache2ctl restart
Add Expires headers To improve the loading performance of your website, you should take some care over how you leverage the browser's ability to cache certain resources and files. This is done by adding an Expires: value to the HTTP response header. Generally speaking, it is advised that you specify a near-future expiry - read: 1 week or a little bit more - for your static content like JavaScript files or Cascading Style Sheets. One solution to adjust this is to put some instructions into the .htaccess file in the root folder of your web site. Of course, this could also be placed into a more generic location of your Apache installation but honestly, I'd like to keep this at the web site level. Following are some adjustments I'm currently using on this blog site:
# Turn on Expires and set default to 0
ExpiresActive On
ExpiresDefault A0
# Set up caching on media files for 1 year (forever?)
<FilesMatch "\.(flv|ico|pdf|avi|mov|ppt|doc|mp3|wmv|wav)$">
ExpiresDefault A29030400
Header append Cache-Control "public"
</FilesMatch>
# Set up caching on media files for 1 week
<FilesMatch "\.(js|css)$">
ExpiresDefault A604800
Header append Cache-Control "public"
</FilesMatch>
# Set up caching on media files for 31 days
<FilesMatch "\.(gif|jpg|jpeg|png|swf)$">
ExpiresDefault A2678400
Header append Cache-Control "public"
</FilesMatch>
As we are editing the .htaccess file, it is not necessary to restart Apache. 
In case that your web site doesn't load anymore or you're experiencing an error while trying to restart your httpd, check that the 'expires' module is actually an enabled module:
# ls -al /etc/apache2/mods-enabled/ | grep expires
# sudo a2enmod expires
Of course, the instructions above are not feature complete but I hope that they might provide a better default configuration for your LAMP stack. Résumé of the day Within a couple of hours, and while being occupied with an eLearning course on SQL Server 2012, I had some good fun in helping and assisting other LUGM members while they were some kilometers away at Bagatelle. According to other blog articles it seems that Nitin had quite some moments of desperation. Just for the record: at no time was it my intention to either kick his butt or pull his leg. I was simply providing some input based on the lessons I've learned over the last couple of years configuring Apache HTTPd and PHP. Check out the other blogs, too: LUGM mini-meetup... Epic! Superb Saturday Linux Meetup And last but not least, the man himself: The end of a new beginning Cheers, and happy community'ing! Updates Due to our weekly Code & Coffee sessions in the MSCC community, I had a chance to talk to Nitin directly and he showed me the problems on his machine. This led me to update this article, hence the paragraphs on enabling the modules 'headers' and 'expires'.
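As a rough way to verify the result after these changes (assuming curl is installed; the URL below is just a placeholder for your own site), request only the response headers from the command line:
# curl -I http://www.example.com/
The Server line should no longer advertise a full version string, X-Powered-By should be gone, and your static files should now carry Expires and Cache-Control values.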

    Read the article

  • Following my passion

    - by Maria Sandu
    What makes you go the extra mile? What makes you move forward and be ambitious? My name is Alin Gheorghe and I am currently working as a Contracts Administrator in the Shared Service Centre in Bucharest, Romania. I graduated from the Political Science Faculty of the National School of Political and Administrative Studies here in Bucharest and I am currently undergoing a Master Program on Security and Diplomacy at the same university. Although I have been working a full time job here at Oracle since January 2011 and also going to school after work, I am going to tell you how I spend my spare time and about my passion. I always thought that if one doesn't have something that he would consider a passion it's always just a matter of time until he would discover one. Looking back, I can tell you that I discovered mine when I was 14 years old and I remember watching a football game when suddenly I became fascinated by the "man in black" that all football players obeyed during the match. That year I attended and passed a referee course within my local referee committee and about 6 months later I was delegated to my first official game at a youth tournament. Almost 10 years have passed since then and I can tell you that I very much love and appreciate this activity, which I have kept doing each and every weekend, 9 months every year, accumulating more than 600 official games until now. 
And even if not having a real free weekend or holiday might sound very consuming, I can say that having something I am passionate about helps me to keep myself balanced and happy while giving me an option to channel any stress or anxiety I may feel. I think it's important to have something of your own besides work that you spend time and effort on. Whether it's painting, writing or a sport, having a passion can only have a positive effect on your life. And as with every extra thing, it's not always easy to follow your passion, but is it worth it? Speaking from my own experience I am sure it is, and here are some tips and tricks I constantly use not to give up on my passion: No matter how much time you spend at work and how much credit you get for that, it will always be the passion-related achievements that will comfort you more and boost your self-esteem, and nothing compares to that feeling you get. I always try to keep this in mind so that each time I think about giving up I get even more ambitious to move forward. Everybody can just do what they are paid to do or what they are requested to do at work but not everybody can go that extra mile when it comes to following their passion and putting in extra work for that. By exercising this constantly you get used to also applying this attitude to work-related tasks. It takes accurate planning, anticipation and forecasting in order to combine your work with your passion. Therefore having a full schedule and keeping up with it will only help develop and exercise such skills and also will prove to you that you are up to such a challenge. I always keep in mind as a final goal that if you get very good at your passion you can actually start earning from it. And I think that is the ultimate level when you can say that you make a living by doing exactly what you are passionate about. In conclusion, by taking the easy way not only do you miss out on something nice, but life's priceless rewards are usually given by those things that you actually believe in and know how to stand up for over time.

    Read the article

  • Building an OpenStack Cloud for Solaris Engineering, Part 1

    - by Dave Miner
    One of the signature features of the recently-released Solaris 11.2 is the OpenStack cloud computing platform.  Over on the Solaris OpenStack blog the development team is publishing lots of details about our version of OpenStack Havana as well as some tips on specific features, and I highly recommend reading those to get a feel for how we've leveraged Solaris's features to build a top-notch cloud platform.  In this and some subsequent posts I'm going to look at it from a different perspective, which is that of the enterprise administrator deploying an OpenStack cloud.  But this won't be just a theoretical perspective: I've spent the past several months putting together a deployment of OpenStack for use by the Solaris engineering organization, and now that it's in production we'll share how we built it and what we've learned so far.
In the Solaris engineering organization we've long had dedicated lab systems dispersed among our various sites and a home-grown reservation tool for developers to reserve those systems; various teams also have private systems for specific testing purposes.  But as a developer, it can still be difficult to find systems you need, especially since most Solaris changes require testing on both SPARC and x86 systems before they can be integrated.  We've added virtual resources over the years as well in the form of LDOMs and zones (both traditional non-global zones and the new kernel zones).  Fundamentally, though, these were all still deployed in the same model: our overworked lab administrators set up pre-configured resources and we then reserve them.  Sounds like pretty much every traditional IT shop, right?  Which means that there's a lot of opportunity for efficiencies from greater use of virtualization and the self-service style of cloud computing.  As we were well into development of OpenStack on Solaris, I was recruited to figure out how we could deploy it to both provide more (and more efficient) development and test resources for the organization as well as a test environment for Solaris OpenStack.
At this point, let's acknowledge one fact: deploying OpenStack is hard.  It's a very complex piece of software that makes use of sophisticated networking features and runs as a ton of service daemons with myriad configuration files.  The web UI, Horizon, doesn't often do a good job of providing detailed errors.  Even the command-line clients are not as transparent as you'd like, though at least you can turn on verbose and debug messaging and often get some clues as to what to look for, though it helps if you're good at reading JSON structure dumps.  I'd already learned all of this in doing a single-system Grizzly-on-Linux deployment for the development team to reference when they were getting started so I at least came to this job with some appreciation for what I was taking on.  The good news is that both we and the community have done a lot to make deployment much easier in the last year; probably the easiest approach is to download the OpenStack Unified Archive from OTN to get your hands on a single-system demonstration environment.  I highly recommend getting started with something like it to get some understanding of OpenStack before you embark on a more complex deployment.  For some situations, it may in fact be all you ever need.  If so, you don't need to read the rest of this series of posts!
In the Solaris engineering case, we need a lot more horsepower than a single-system cloud can provide.  
We need to support both SPARC and x86 VM's, and we have hundreds of developers so we want to be able to scale to support thousands of VM's, though we're going to build to that scale over time, not immediately.  We also want to be able to test both Solaris 11 updates and a release such as Solaris 12 that's under development so that we can work out any upgrade issues before release.  One thing we don't have is a requirement for extremely high availability, at least at this point.  We surely don't want a lot of down time, but we can tolerate scheduled outages and brief (as in an hour or so) unscheduled ones.  Thus I didn't need to spend effort on trying to get high availability everywhere.
The diagram below shows our initial deployment design.  We're using six systems, most of which are x86 because we had more of those immediately available.  All of those systems reside on a management VLAN and are connected with a two-way link aggregation of 1 Gb links (we don't yet have 10 Gb switching infrastructure in place, but we'll get there).  A separate VLAN provides "public" (as in connected to the rest of Oracle's internal network) addresses, while we use VxLANs for the tenant networks. One system is more or less the control node, providing the MySQL database, RabbitMQ, Keystone, and the Nova API and scheduler as well as the Horizon console.  We're curious how this will perform and I anticipate eventually splitting at least the database off to another node to help simplify upgrades, but at our present scale this works.
I had a couple of systems with lots of disk space, one of which was already configured as the Automated Installation server for the lab, so it's just providing the Glance image repository for OpenStack.  The other node with lots of disks provides Cinder block storage service; we also have a ZFS Storage Appliance that will help back-end Cinder in the near future, I just haven't had time to get it configured in yet.
There's a separate system for Neutron, which is our Elastic Virtual Switch controller and handles the routing and NAT for the guests.  We don't have any need for firewalling in this deployment so we're not doing so.  We presently have only two tenants defined, one for the Solaris organization that's funding this cloud, and a separate tenant for other Oracle organizations that would like to try out OpenStack on Solaris.  Each tenant has one VxLAN defined initially, but we can of course add more.  Right now we have just a single /24 network for the floating IP's, once we get demand up to where we need more then we'll add them.
Finally, we have started with just two compute nodes; one is an x86 system, the other is an LDOM on a SPARC T5-2.  We'll be adding more when demand reaches the level where we need them, but as we're still ramping up the user base it's less work to manage fewer nodes until then.
My next post will delve into the details of building this OpenStack cloud's infrastructure, including how we're using various Solaris features such as Automated Installation, IPS packaging, SMF, and Puppet to deploy and manage the nodes.  After that we'll get into the specifics of configuring and running OpenStack itself.
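As a small illustration of what a day-to-day health check looks like on a node of such a deployment (the service name patterns below follow the standard Solaris OpenStack SMF naming and are meant as an example, not a transcript from our cloud), SMF makes it easy to see whether the OpenStack daemons are running:
# svcs -a | egrep 'nova|cinder|glance|keystone|neutron'
Any service showing maintenance instead of online is usually the first place to look when an instance launch fails.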

    Read the article

  • Bouncing off a circular Boundary with multiple balls?

    - by Anarkie
    I am making a game like this : Yellow Smiley has to escape from red smileys, when yellow smiley hits the boundary game is over, when red smileys hit the boundary they should bounce back with the same angle they came, like shown below: Every 10 seconds a new red smiley comes in the big circle, when red smiley hits yellow, game is over, speed and starting angle of red smileys should be random. I control the yellow smiley with arrow keys. The biggest problem I have reflecting the red smileys from the boundary with the angle they came. I don't know how I can give a starting angle to a red smiley and bouncing it with the angle it came. I would be glad for any tips! My js source code : var canvas = document.getElementById("mycanvas"); var ctx = canvas.getContext("2d"); // Object containing some global Smiley properties. var SmileyApp = { radius: 15, xspeed: 0, yspeed: 0, xpos:200, // x-position of smiley ypos: 200 // y-position of smiley }; var SmileyRed = { radius: 15, xspeed: 0, yspeed: 0, xpos:350, // x-position of smiley ypos: 65 // y-position of smiley }; var SmileyReds = new Array(); for (var i=0; i<5; i++){ SmileyReds[i] = { radius: 15, xspeed: 0, yspeed: 0, xpos:350, // x-position of smiley ypos: 67 // y-position of smiley }; SmileyReds[i].xspeed = Math.floor((Math.random()*50)+1); SmileyReds[i].yspeed = Math.floor((Math.random()*50)+1); } function drawBigCircle() { var centerX = canvas.width / 2; var centerY = canvas.height / 2; var radiusBig = 300; ctx.beginPath(); ctx.arc(centerX, centerY, radiusBig, 0, 2 * Math.PI, false); // context.fillStyle = 'green'; // context.fill(); ctx.lineWidth = 5; // context.strokeStyle = '#003300'; // green ctx.stroke(); } function lineDistance( positionx, positiony ) { var xs = 0; var ys = 0; xs = positionx - 350; xs = xs * xs; ys = positiony - 350; ys = ys * ys; return Math.sqrt( xs + ys ); } function drawSmiley(x,y,r) { // outer border ctx.lineWidth = 3; ctx.beginPath(); ctx.arc(x,y,r, 0, 2*Math.PI); //red ctx.fillStyle="rgba(255,0,0, 0.5)"; ctx.fillStyle="rgba(255,255,0, 0.5)"; ctx.fill(); ctx.stroke(); // mouth ctx.beginPath(); ctx.moveTo(x+0.7*r, y); ctx.arc(x,y,0.7*r, 0, Math.PI, false); // eyes var reye = r/10; var f = 0.4; ctx.moveTo(x+f*r, y-f*r); ctx.arc(x+f*r-reye, y-f*r, reye, 0, 2*Math.PI); ctx.moveTo(x-f*r, y-f*r); ctx.arc(x-f*r+reye, y-f*r, reye, -Math.PI, Math.PI); // nose ctx.moveTo(x,y); ctx.lineTo(x, y-r/2); ctx.lineWidth = 1; ctx.stroke(); } function drawSmileyRed(x,y,r) { // outer border ctx.lineWidth = 3; ctx.beginPath(); ctx.arc(x,y,r, 0, 2*Math.PI); //red ctx.fillStyle="rgba(255,0,0, 0.5)"; //yellow ctx.fillStyle="rgba(255,255,0, 0.5)"; ctx.fill(); ctx.stroke(); // mouth ctx.beginPath(); ctx.moveTo(x+0.4*r, y+10); ctx.arc(x,y+10,0.4*r, 0, Math.PI, true); // eyes var reye = r/10; var f = 0.4; ctx.moveTo(x+f*r, y-f*r); ctx.arc(x+f*r-reye, y-f*r, reye, 0, 2*Math.PI); ctx.moveTo(x-f*r, y-f*r); ctx.arc(x-f*r+reye, y-f*r, reye, -Math.PI, Math.PI); // nose ctx.moveTo(x,y); ctx.lineTo(x, y-r/2); ctx.lineWidth = 1; ctx.stroke(); } // --- Animation of smiley moving with constant speed and bounce back at edges of canvas --- var tprev = 0; // this is used to calculate the time step between two successive calls of run function run(t) { requestAnimationFrame(run); if (t === undefined) { t=0; } var h = t - tprev; // time step tprev = t; SmileyApp.xpos += SmileyApp.xspeed * h/1000; // update position according to constant speed SmileyApp.ypos += SmileyApp.yspeed * h/1000; // update position according to constant speed for (var i=0; 
i<SmileyReds.length; i++){ SmileyReds[i].xpos += SmileyReds[i].xspeed * h/1000; // update position according to constant speed SmileyReds[i].ypos += SmileyReds[i].yspeed * h/1000; // update position according to constant speed } // change speed direction if smiley hits canvas edges if (lineDistance(SmileyApp.xpos, SmileyApp.ypos) + SmileyApp.radius > 300) { alert("Game Over"); } // redraw smiley at new position ctx.clearRect(0,0,canvas.height, canvas.width); drawBigCircle(); drawSmiley(SmileyApp.xpos, SmileyApp.ypos, SmileyApp.radius); for (var i=0; i<SmileyReds.length; i++){ drawSmileyRed(SmileyReds[i].xpos, SmileyReds[i].ypos, SmileyReds[i].radius); } } // uncomment these two lines to get every going // SmileyApp.speed = 100; run(); // --- Control smiley motion with left/right arrow keys function arrowkeyCB(event) { event.preventDefault(); if (event.keyCode === 37) { // left arrow SmileyApp.xspeed = -100; SmileyApp.yspeed = 0; } else if (event.keyCode === 39) { // right arrow SmileyApp.xspeed = 100; SmileyApp.yspeed = 0; } else if (event.keyCode === 38) { // up arrow SmileyApp.yspeed = -100; SmileyApp.xspeed = 0; } else if (event.keyCode === 40) { // right arrow SmileyApp.yspeed = 100; SmileyApp.xspeed = 0; } } document.addEventListener('keydown', arrowkeyCB, true); JSFiddle : http://jsfiddle.net/gj4Q7/
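For what it's worth, here is a minimal sketch (illustrative names, not taken from the code above) of one common way to handle both problems: give each red smiley a random starting angle, and when it touches the big circle, reflect its velocity about the circle's normal at the contact point, i.e. v' = v - 2(v.n)n, which guarantees it bounces back at the same angle it came in.
// random starting direction and speed for a new red smiley
var angle = Math.random() * 2 * Math.PI;
var speed = 50 + Math.random() * 100;
smiley.xspeed = speed * Math.cos(angle);
smiley.yspeed = speed * Math.sin(angle);

// bounce a smiley off the inside of the big circle (center 350,350 and radius 300 in the code above)
function bounceOffBoundary(s, cx, cy, bigRadius) {
  var dx = s.xpos - cx, dy = s.ypos - cy;
  var dist = Math.sqrt(dx * dx + dy * dy);
  if (dist + s.radius >= bigRadius) {
    var nx = dx / dist, ny = dy / dist;          // outward unit normal at the contact point
    var dot = s.xspeed * nx + s.yspeed * ny;     // v . n
    s.xspeed -= 2 * dot * nx;                    // v' = v - 2(v.n)n
    s.yspeed -= 2 * dot * ny;
    s.xpos = cx + (bigRadius - s.radius) * nx;   // push it back inside so it can't stick to the edge
    s.ypos = cy + (bigRadius - s.radius) * ny;
  }
}
Calling something like bounceOffBoundary(SmileyReds[i], 350, 350, 300) inside run() right after the position update would replace the game-over check for the red smileys while keeping it for the yellow one.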

    Read the article

  • Silverlight ItemsControl vertical scrollbar, using a wrappanel as ControlTemplate

    - by Orestes C.A.
    I have a collection of elements, each one with a name and a subcollection of image blobs. I want to display an Accordion, with each item representing one of the MainElements. Inside each element, I display the images in the subcollection of said MainElement. The Accordion gets resized by the user, so I use a wrappanel for presenting the images. When the accordion is wide enough, the images reorder themselves, fitting as many as possible in each row. The problem comes when the wrappanel only displays one image per row (because there isn't enough space for more): the image list continues, but I can't see all the images, because they don't fit inside the control's height. I need a vertical scrollbar to be displayed inside the AccordionItem so I can scroll down the image list. So, here's my code: <layoutToolkit:Accordion Width="Auto" Height="Auto" ItemsSource="{Binding MainElementCollection}"> <layoutToolkit:Accordion.ItemTemplate> <DataTemplate> <TextBlock Text="{Binding MainElementName}" /> </DataTemplate> </layoutToolkit:Accordion.ItemTemplate> <layoutToolkit:Accordion.ContentTemplate> <DataTemplate> <ItemsControl ItemsSource="{Binding SubElementCollection}" ScrollViewer.VerticalScrollBarVisibility="Auto" > <ItemsControl.Template> <ControlTemplate> <controlsToolkit:WrapPanel /> </ControlTemplate> </ItemsControl.Template> <ItemsControl.ItemTemplate> <DataTemplate> <Grid> <Image Margin="2" Width="150" Source="{Binding PreviewImage, Converter={StaticResource ImageConverter}}" /> </Grid> </DataTemplate> </ItemsControl.ItemTemplate> </ItemsControl> </DataTemplate> </layoutToolkit:Accordion.ContentTemplate> </layoutToolkit:Accordion> http://www.silverlightshow.net/tips/How-to-add-scrollbars-to-ItemsControl.aspx suggests that I should surround my wrappanel with a scrollviewer, like this <ItemsControl.Template> <ControlTemplate> <scrollviewer> <controlsToolkit:WrapPanel /> </scrollviewer> </ControlTemplate> </ItemsControl.Template> But then my wrappanel gets really small and I can only see a small vertical scrollbar. Any ideas? Thanks a lot. Edit: I figured out that the wrappanel loses its width when used in the controltemplate. It should be used as follows: <ItemsControl.ItemsPanel> <ItemsPanelTemplate> <controlsToolkit:WrapPanel ScrollViewer.VerticalScrollBarVisibility="Visible" /> </ItemsPanelTemplate> </ItemsControl.ItemsPanel> Anyway, I tried adding the ScrollViewer.VerticalScrollBarVisibility="Visible" line but I'm stuck again.
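In case it helps anyone with the same problem, a commonly used pattern (sketched from memory, not tested against this exact layout) is to keep a ScrollViewer in the control template wrapped around an ItemsPresenter, and move the WrapPanel into the ItemsPanel, so the panel keeps the width the ItemsPresenter gives it:
<ItemsControl ItemsSource="{Binding SubElementCollection}">
    <ItemsControl.Template>
        <ControlTemplate TargetType="ItemsControl">
            <ScrollViewer VerticalScrollBarVisibility="Auto">
                <ItemsPresenter />
            </ScrollViewer>
        </ControlTemplate>
    </ItemsControl.Template>
    <ItemsControl.ItemsPanel>
        <ItemsPanelTemplate>
            <controlsToolkit:WrapPanel />
        </ItemsPanelTemplate>
    </ItemsControl.ItemsPanel>
</ItemsControl>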

    Read the article

  • WCF net.tcp windows service - call duration and calls outstanding increases over time

    - by Brook
    I have a windows service which uses the ServiceHost class to host a WCF Service using the net.tcp binding. I have done some tweaking to the config to throttle sessions as well as number of connections, but it seems that every once in a while my "Calls outstanding" and "Call duration" shoot up and stay up in perfmon. It seems to me I have a leak somewhere, but the code I have is all fairly minimal, I'm relying on ServiceHost to handle the details. Here's how I start my service ServiceHost host = new ServiceHost(type); host.Faulted+=new EventHandler(Faulted); host.Open(); My Faulted event just does the following (more or less, logging etc removed) if (host.State == CommunicationState.Faulted) { host.Abort(); } else { host.Close(); } host = new ServiceHost(type); host.Faulted+=new EventHandler(Faulted); host.Open(); Here's some snippets from my app.config to show some of the things I've tried <runtime> <gcConcurrent enabled="true" /> <generatePublisherEvidence enabled="false" /> </runtime> ......... <behaviors> <serviceBehaviors> <behavior name="Throttled"> <serviceThrottling maxConcurrentCalls="300" maxConcurrentSessions="300" maxConcurrentInstances="300" /> .......... <services> <service name="MyService" behaviorConfiguration="Throttled"> <endpoint address="net.tcp://localhost:49001/MyService" binding="netTcpBinding" bindingConfiguration="Tcp" contract="IMyService"> </endpoint> </service> </services> .......... <netTcpBinding> <binding name="Tcp" openTimeout="00:00:10" closeTimeout="00:00:10" portSharingEnabled="true" receiveTimeout="00:5:00" sendTimeout="00:5:00" hostNameComparisonMode="WeakWildcard" listenBacklog="1000" maxConnections="1000"> <reliableSession enabled="false"/> <security mode="None"/> </binding> </netTcpBinding> .......... <!--for my diagnostics--> <diagnostics performanceCounters="ServiceOnly" wmiProviderEnabled="true" /> There's obviously some resource getting tied up, but I thought I covered everything with my config. I'm only getting about ~150 clients so I don't think I'm coming up against my "300" limit. "Calls per second" stays constant at anywhere from 2-5 calls per second. The service will run for hours and hours with 0-2 "calls outstanding" and very low "call duration" and then eventually it will shoot up to 30 calls oustanding and 20s call duration. Any tips on what might be causing my "calls outstanding" and "call duration" to spike? Where am I leaking? Point me in the right direction?
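One thing worth ruling out (this is a generic sketch, not taken from the poster's code; the endpoint name, contract usage and method are made up) is clients that never close their channels. Abandoned channels count against maxConcurrentSessions until the receiveTimeout (five minutes in the config above) reaps them, and once the throttle is reached new calls queue, which looks exactly like climbing "calls outstanding" and "call duration":
// requires: using System.ServiceModel;
var factory = new ChannelFactory<IMyService>("netTcpEndpoint");   // hypothetical client endpoint name
IMyService proxy = factory.CreateChannel();
try
{
    proxy.DoSomething();                          // whatever the contract exposes
    ((ICommunicationObject)proxy).Close();        // clean shutdown releases the session
    factory.Close();
}
catch (Exception)
{
    ((ICommunicationObject)proxy).Abort();        // never leave a faulted channel half-open
    factory.Abort();
    throw;
}
If the clients already do this, the next suspect is usually a service operation that blocks (on a database call or a lock) so that calls genuinely stay outstanding.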

    Read the article

  • SQL Server 2005: Improving performance for thousands of Insert requests. logout-login time = 120ms.

    - by Rad
    Can somebody shed some light on how SQL Server 2005 deals with many requests issued by a client using ADO.NET 2.0? Below is the shortened output of SQL Trace. I can see that connection pooling is working (I believe there is only one connection being pooled). What is not clear to me is why we have so many sp_reset_connection calls, i.e. a series of Audit Login, SQL:BatchStarting, RPC:Starting and Audit Logout for each iteration of the for loop below. I can see that there is constant switching between the tempdb and master databases, which leads me to conclude that we lose the context when the next connection is created by fetching it from the pool based on the ConnectionString argument. I can see that every 15ms I get 100-200 logins/logouts per second (reported at the same time by Profiler). Then, after 15ms, I again get a series of 100-200 logins/logouts per second. I need clarification on how this might affect much more complex insert queries in a production environment. I use Enterprise Library 2006, the code is compiled with VS 2005, and it is a console application that parses a flat file with tens of thousands of rows, grouping parent-child rows. It runs on an application server and calls 2 stored procedures on a remote SQL Server 2005: one inserts a parent record and retrieves the Identity value, and using it the application calls the second stored procedure 1, 2 or multiple times (sometimes several thousand) inserting child records. The child table has close to 10 million records with 5-10 indexes, some of them being covering non-clustered ones. There is a pretty complex Insert trigger that copies the inserted detail record to an archive table. All in all I only get 7 inserts per second, which means it can take 2-4 hours for 50 thousand records. When I run Profiler on the test server (which is almost equivalent to the production server) I can see that there is about 120ms between the Audit Logout and Audit Login trace entries, which barely gives me a chance to insert about 8 records per second. So my question is whether there is some way to improve the inserting of records, since the company loads 100 thousand records, does daily planning, and has an SLA to fulfill client requests coming in as flat file orders; some big files of 10 thousand rows have to be processed (imported) quickly. 4 hours to import 60 thousand records should be reduced to 30 minutes. I was thinking of using the BatchSize of the DataAdapter to send multiple stored procedure calls, SQL bulk inserts to batch multiple inserts from a DataReader or DataTable, or SSIS fast load. But I don't know how to properly analyze the re-indexing and statistics population, and maybe that has to take some time to finish. What is worse is that the company uses the biggest table for reporting and other online processing, so the indexes cannot be dropped. I manage transactions manually by setting a field to a value and then doing a transactional update changing that value to a new value that other applications use to get committed rows. Please advise how to approach this problem. For now I am trying to set up staging tables with minimal logging in a separate database and no indexes, and I will try to do batched (massive) parent-child inserts. I believe the production DB has the simple recovery model, but it could be full recovery. If the DB user that is being used by my .NET console application has the bulkadmin role, does that mean its bulk inserts are minimally logged? I understand that when a table has a clustered and many non-clustered indexes, inserts are still logged for each row. Connection pooling is working, but with many logins/logouts. Why? 
for (int i = 1; i <= 10000; i++){ using (SqlConnection conn = new SqlConnection("server=(local);database=master;integrated security=sspi;")) {conn.Open(); using (SqlCommand cmd = conn.CreateCommand()){ cmd.CommandText = "use tempdb"; cmd.ExecuteNonQuery();}}} SQL Server Profiler trace: Audit Login master 2010-01-13 23:18:45.337 1 - Nonpooled SQL:BatchStarting use tempdb master 2010-01-13 23:18:45.337 RPC:Starting exec sp_reset_conn tempdb 2010-01-13 23:18:45.337 Audit Logout tempdb 2010-01-13 23:18:45.337 2 - Pooled Audit Login -- network protocol master 2010-01-13 23:18:45.383 2 - Pooled SQL:BatchStarting use tempdb master 2010-01-13 23:18:45.383 RPC:Starting exec sp_reset_conn tempdb 2010-01-13 23:18:45.383 Audit Logout tempdb 2010-01-13 23:18:45.383 2 - Pooled Audit Login -- network protocol master 2010-01-13 23:18:45.383 2 - Pooled SQL:BatchStarting use tempdb master 2010-01-13 23:18:45.383 RPC:Starting exec sp_reset_conn tempdb 2010-01-13 23:18:45.383 Audit Logout tempdb 2010-01-13 23:18:45.383 2 - Pooled
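Since SQL bulk inserts were one of the options mentioned, here is a minimal sketch of what batching the child rows with SqlBulkCopy into a staging table could look like (the table name, connection string and DataTable are invented for illustration):
// requires: using System.Data; using System.Data.SqlClient;
using (SqlConnection conn = new SqlConnection(connectionString))
{
    conn.Open();
    using (SqlBulkCopy bulk = new SqlBulkCopy(conn))
    {
        bulk.DestinationTableName = "dbo.OrderDetailStaging";   // staging table with no indexes
        bulk.BatchSize = 5000;                                   // rows per round trip
        bulk.BulkCopyTimeout = 0;                                // no timeout for large files
        bulk.WriteToServer(childRows);                           // a DataTable built while parsing the flat file
    }
}
A single set-based INSERT ... SELECT from the staging table into the real child table then replaces thousands of individual RPC calls, and the sp_reset_connection chatter disappears because one open connection does all the work.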

    Read the article

  • Java Generics, JPA 2, J2EE, JSF 2, GWT, Ajax, Oracle's Java Strategies, Flex, iPhone, Agile ALM, Gra

    - by Kim Won
    Great Indian Developer Summit 2010 – India's Biggest Polyglot Conference and Workshops for IT Software Professionals Bangalore, April 9, 2010: The GIDS.Java Conference and Workshops has announced the complete program of over 50 sessions on the present and future of the Java language and VM, how they are evolving to meet the community's ever-changing needs, and some of the cutting-edge tools, technologies & techniques used for building robust enterprise Java applications today. The GIDs.Java track at Great Indian Developer Summit takes place 22 and 23 April 2010, at the Indian Institute of Science in Bangalore. As one of the longest running independent developer conferences in India, GIDS.Java at the Great Indian Developer Summit 2010 is uniquely positioned to provide a blend of practical, pragmatic and immediately applicable knowledge and a glimpse of the future of technology. During 22 and 23 April 2010, GIDS.Java offers a multi-track conference, workshops, expo show floor, and networking opportunities. The first keynote at GIDS.Java "Pointy Haired Bosses and Pragmatic Programmers" is led by Dr. Venkat Subramaniam. He speaks about how each of us has a professional responsibility to be objective and make decisions that will help us and our teams be productive and deliver results. Venkat will pick on some fallacies, lay down facts, and discuss how to stay professional and objective in our daily efforts. The second keynote of the day explains the practical features that make the Cloud so interesting, and why everyone should start using it in their everyday life. Simone Brunozzi, Amazon Web Services Technology Evangelist, will detail technical examples, business details all mixed with a lot of Italian humor to ensure audience enjoy this talk without a single line of code. The third keynote of the day gives an exciting overview of directions in the Java space for Oracle, featuring concrete signs of Oracles heavy investment, a clear concise strategy overview, and deep dives into some of the most interesting pieces of technology being developed in the Java Platform Group today; such as JavaEE, JDK7, JavaFX, and our exciting new visual tools. Featuring demos by a Java evangelism team star, Simon Ritter, this talk takes you top to bottom in Java Technology. Featured talks at GID.Web include: Good, Bad, and Ugly of Java Generics, Venkat Subramaniam Pure Java Ajax: An Overview of GWT 2.0, Marty Hall How JPA 2.0 Makes a Good Thing Even Better, Mike Keith Building Enterprise RIAs with Adobe Flex and Java, Sujit Reddy G Integrated Ajax Support in JSF 2.0, Marty Hall Design Patterns in Java and Groovy, Venkat Subramaniam A Gentle Introduction to iPhone and Obj-C for Java Developers, Matthew McCullough Cloud Computing: Azure for Java Developers, Janakiram MSV Ajax Support in the Prototype JavaScript Library, Marty Hall First steps to IT Heaven Through the Cloud. 
Part III: .Java, Simone Brunozi Building Web 2.0 User Interfaces for Web Service Models using JSF, Frank Nimphius and Jobinesh P Acceptance Test Driven Development, John Tobin and Mohammed Mohsinali Architecting Your Java Applications for the Cloud, Praveen Srivatsa Effective Java, Venkat Subramaniam The Amazing Groovy Weight-loss Plan, Scott Davis Enterprise Modeling - from Conceptual Planning to Technical Blueprints, J Sripad Java Collections Renaissance, Donald Raab and Vlad Zakharov Power 7 and IBM J9VM, Himanshu Goyal A Whistle-stop Tour of Maven 3.0, Matthew McCullough Mass Volume Opportunities for Java Developers, Jouko Nuottila Emerging Technology Complex Event Processing, Duvvuri Srinivas Agile ALM for Distributed Development, Karthi Swaminathan Dim Sum Grails - A Sampler of Practical Non Database-Driven Grails Applications, Scott Davis Diagnosing Performance Bottlenecks in J2EE, Deepak Kaul Business Driven Identity Management, Suneet Agera Combining Java EE with OSGi using Eclipse Gemini, Mike Keith Workshop: Essence of Functional Programming, Venkat Subramaniam Workshop: Agile Development, Tools, and Teams and Scrum Certification, Stephen Forte Workshop: Cloud Computing Boot Camp on the Google App Engine, Matthew McCullough Workshop: Building Your First Amazon App, Simone Brunozzi Workshop: The 180-min AJAX and JSON Spike Class, Scott Davis Workshop: PHP + Adobe Flex = Killer RIA, Shyamprasad P Workshop: User Expereince Evaluation Model Walkthrough, Sanna Häiväläinen Workshop: Building Data Centric Applications using Adobe Flex and Java, Prashant Singh Workshop: Monetizing your Apps with PayPal X Payments Platform, Khurram Khan, Praveen Alavilli Sponsors of Great Indian Developer Summit 2010 include: Platinum sponsors Microsoft, Oracle Forum Nokia and Adobe; Gold sponsors Intel and SAP; Silver sponsors Quest Software, PayPal, Telerik and AMT. About Great Indian Developer Summit Great Indian Developer Summit is the gold standard for India's software developer ecosystem for gaining exposure to and evaluating new projects, tools, services, platforms, languages, software and standards. Packed with premium knowledge, action plans and advise from been-there-done-it veterans, creators, and visionaries, the 2010 edition of Great Indian Developer Summit features focused sessions, case studies, workshops and power panels that will transform you into a force to reckon with. Featuring 3 co-located conferences: GIDS.NET, GIDS.Web, GIDS.Java and an exclusive day of in-depth tutorials - GIDS.Workshops, from 20 April to 24 April at the IISc campus in Bangalore. At GIDS you'll participate in hundreds of sessions encompassing the full range of Microsoft computing, Java, Agile, RIA, Rich Web, open source/standards, languages, frameworks and platforms, practical tutorials that deep dive into technical skill and best practices, inspirational keynote presentations, an Expo Hall featuring dozens of the latest projects and products activities, engaging networking events, and the interact with the best and brightest of speakers from around the world. For further information on GIDS 2010, please visit the summit on the web http://www.developersummit.com/ A Saltmarch Media Press Release E: [email protected] Ph: +91 80 4005 1000

    Read the article

  • Problem Using POI To Set CellStyleProperty With HSSFCellUtil

    - by Alvin Sim
    I have a Java class which uses Apache POI to generate reports in Excel. When I run the Java class from my IDE or command prompt, I only see warning messages from LOG4J as below: log4j:WARN No appenders could be found for logger (org.apache.commons.beanutils.converters.BooleanConverter). log4j:WARN Please initialize the log4j system properly. log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info. Despite the warning messages, the report was generated successfully. But when I run it from my web app, which uses JSP and submits the form to a Servlet which calls the Java class, the Java class seems to have problems setting the style properties to the cell. Below are the Java code and also the stack trace. I'm testing this on a Standalone OC4J and the IDE which I'm using is Oracle's JDeveloper. And the Java JDK is 1.4.2. I've been looking high and low the whole day yesterday but can't seem to find out why. Code: region = new Region(1, (short) 1, 5, (short)2); sheet.addMergedRegion(region); HSSFRegionUtil.setBorderBottom( (short) 1, region, sheet, workBook ); Stack trace: 10/06/07 16:03:17 SvltRptProcessor ACTION=print_to_file RPT_CLASSNAME=com.reports.BP.DailySalesBudgetExcelRpt DES_TYPE=file DES_FORMAT=xls 10/06/07 16:03:17 rptFilename=/oracle/reports//20100607_160317_BP_DailySalesBudgetByPmgrp_OPR.xls 10/06/07 16:03:17 ReportRunner printToFile execute -> com.reports.BP.DailySalesBudgetExcelRpt 10/06/07 16:03:17 enter daily sales budget excel rpt -----> print() 10/06/07 16:03:18 Tutalii: C:\oc4j10gmy\j2ee\home\applib\poi-2.5.1.jar archive 10/06/07 16:03:19 org.apache.commons.logging.LogConfigurationException: org.apache.commons.logging.LogConfigurationException: No suitable Log constructor 10/06/07 16:03:19 at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:509) 10/06/07 16:03:19 at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:285) 10/06/07 16:03:19 at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:255) 10/06/07 16:03:19 at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:381) 10/06/07 16:03:19 at org.apache.commons.beanutils.ConvertUtilsBean.<init>(ConvertUtilsBean.java:157) 10/06/07 16:03:19 at org.apache.commons.beanutils.BeanUtilsBean.<init>(BeanUtilsBean.java:117) 10/06/07 16:03:19 at org.apache.commons.beanutils.BeanUtilsBean$1.initialValue(BeanUtilsBean.java:68) 10/06/07 16:03:19 at org.apache.commons.beanutils.ContextClassLoaderLocal.get(ContextClassLoaderLocal.java:153) 10/06/07 16:03:19 at org.apache.commons.beanutils.BeanUtilsBean.getInstance(BeanUtilsBean.java:80) 10/06/07 16:03:19 at org.apache.commons.beanutils.PropertyUtilsBean.getInstance(PropertyUtilsBean.java:114) 10/06/07 16:03:19 at org.apache.commons.beanutils.PropertyUtils.describe(PropertyUtils.java:209) 10/06/07 16:03:19 at org.apache.poi.hssf.usermodel.contrib.HSSFCellUtil.setCellStyleProperty(HSSFCellUtil.java:174) 10/06/07 16:03:19 at org.apache.poi.hssf.usermodel.contrib.HSSFRegionUtil. 
setBorderBottom(HSSFRegionUtil.java:153) 10/06/07 16:03:19 at com.reports.BP.DailySalesBudgetExcelRpt.setRegion(DailySalesBudgetExcelRpt.java:773) 10/06/07 16:03:19 at com.reports.BP.DailySalesBudgetExcelRpt.createHdr(DailySalesBudgetExcelRpt.java:308) 10/06/07 16:03:19 at com.reports.BP.DailySalesBudgetExcelRpt.start(DailySalesBudgetExcelRpt.java:272) 10/06/07 16:03:19 at com.reports.BP.DailySalesBudgetExcelRpt.print(DailySalesBudgetExcelRpt.java:222) 10/06/07 16:03:19 at com.servlet.RPT.ReportRunner.printToFile(ReportRunner.java:601) 10/06/07 16:03:19 at com.servlet.RPT.ReportRunner.doPrint(ReportRunner.java:302) 10/06/07 16:03:19 at com.servlet.RPT.ReportRunner.run(ReportRunner.java:270) 10/06/07 16:03:19 at java.lang.Thread.run(Thread.java:619) 10/06/07 16:03:19 Caused by: org.apache.commons.logging.LogConfigurationException: No suitable Log constructor 10/06/07 16:03:19 at org.apache.commons.logging.impl.LogFactoryImpl.getLogConstructor(LogFactoryImpl.java:420) 10/06/07 16:03:19 at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:502) 10/06/07 16:03:19 ... 20 more 10/06/07 16:03:19 Caused by: java.lang.NoClassDefFoundError: org/apache/log4j/Category 10/06/07 16:03:19 at java.lang.Class.getDeclaredConstructors0(Native Method) 10/06/07 16:03:19 at java.lang.Class.privateGetDeclaredConstructors(Class. java:2389) 10/06/07 16:03:19 at java.lang.Class.getConstructor0(Class.java:2699) 10/06/07 16:03:19 at java.lang.Class.getConstructor(Class.java:1657) 10/06/07 16:03:19 at org.apache.commons.logging.impl.LogFactoryImpl.getLogConstructor(LogFactoryImpl.java:417) 10/06/07 16:03:19 ... 21 more 10/06/07 16:03:19 Caused by: java.lang.ClassNotFoundException: org.apache.log4j. Category 10/06/07 16:03:19 at java.net.URLClassLoader$1.run(URLClassLoader.java:202 ) 10/06/07 16:03:19 at java.security.AccessController.doPrivileged(Native Method) 10/06/07 16:03:19 at java.net.URLClassLoader.findClass(URLClassLoader.java :190) 10/06/07 16:03:19 at java.lang.ClassLoader.loadClass(ClassLoader.java:307) 10/06/07 16:03:19 at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301) 10/06/07 16:03:19 at java.lang.ClassLoader.loadClass(ClassLoader.java:248) 10/06/07 16:03:19 ... 26 more org.apache.commons.lang.exception.NestableException: Couldn't setCellStyleProperty. 
at org.apache.poi.hssf.usermodel.contrib.HSSFCellUtil.setCellStyleProperty(HSSFCellUtil.java:209) at org.apache.poi.hssf.usermodel.contrib.HSSFRegionUtil.setBorderBottom(HSSFRegionUtil.java:153) at com.reports.BP.DailySalesBudgetExcelRpt.setRegion(DailySalesBudgetExcelRpt.java:773) at com.reports.BP.DailySalesBudgetExcelRpt.createHdr(DailySalesBudgetExcelRpt.java:308) at com.reports.BP.DailySalesBudgetExcelRpt.start(DailySalesBudgetExcelRpt.java:272) at com.reports.BP.DailySalesBudgetExcelRpt.print(DailySalesBudgetExcelRpt.java:222) at com.servlet.RPT.ReportRunner.printToFile(ReportRunner.java:601) at com.servlet.RPT.ReportRunner.doPrint(ReportRunner.java:302) at com.servlet.RPT.ReportRunner.run(ReportRunner.java:270) at java.lang.Thread.run(Thread.java:619) Caused by: org.apache.commons.logging.LogConfigurationException: org.apache.commons.logging.LogConfigurationException: No suitable Log constructor at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:509) at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:285) at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:255) at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:381) at org.apache.commons.beanutils.ConvertUtilsBean.<init>(ConvertUtilsBean.java:157) at org.apache.commons.beanutils.BeanUtilsBean.<init>(BeanUtilsBean.java:117) at org.apache.commons.beanutils.BeanUtilsBean$1.initialValue(BeanUtilsBean.java:68) at org.apache.commons.beanutils.ContextClassLoaderLocal.get(ContextClassLoaderLocal.java:153) at org.apache.commons.beanutils.BeanUtilsBean.getInstance(BeanUtilsBean.java:80) at org.apache.commons.beanutils.PropertyUtilsBean.getInstance(PropertyUtilsBean.java:114) at org.apache.commons.beanutils.PropertyUtils.describe(PropertyUtils.java:209) at org.apache.poi.hssf.usermodel.contrib.HSSFCellUtil.setCellStyleProperty(HSSFCellUtil.java:174) ... 9 more Caused by: org.apache.commons.logging.LogConfigurationException: No suitable Log constructor at org.apache.commons.logging.impl.LogFactoryImpl.getLogConstructor(LogFactoryImpl.java:420) at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactory Impl.java:502) ... 20 more Caused by: java.lang.NoClassDefFoundError: org/apache/log4j/Category at java.lang.Class.getDeclaredConstructors0(Native Method) at java.lang.Class.privateGetDeclaredConstructors(Class.java:2389) at java.lang.Class.getConstructor0(Class.java:2699) at java.lang.Class.getConstructor(Class.java:1657) at org.apache.commons.logging.impl.LogFactoryImpl.getLogConstructor(LogFactoryImpl.java:417) ... 21 more Caused by: java.lang.ClassNotFoundException: org.apache.log4j.Category at java.net.URLClassLoader$1.run(URLClassLoader.java:202) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:190) at java.lang.ClassLoader.loadClass(ClassLoader.java:307) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301) at java.lang.ClassLoader.loadClass(ClassLoader.java:248) ... 26 more
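Reading the stack trace, the root cause appears to be java.lang.ClassNotFoundException: org.apache.log4j.Category, i.e. commons-logging decides to use log4j but the web application's classpath contains no log4j jar. Two plausible fixes, sketched here and not verified on that OC4J instance: drop a log4j 1.2.x jar into the web application's WEB-INF/lib, or pin commons-logging to its built-in logger with a commons-logging.properties file on the classpath containing:
org.apache.commons.logging.Log=org.apache.commons.logging.impl.SimpleLog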

    Read the article

  • cmake doesn't work in windows XP

    - by Runner
    I'm new with cmake,just installed it and following this article: http://www.cs.swarthmore.edu/~adanner/tips/cmake.php D:\Works\c\cmake\build>cmake .. -- Building for: NMake Makefiles CMake Warning at CMakeLists.txt:2 (project): To use the NMake generator, cmake must be run from a shell that can use the compiler cl from the command line. This environment does not contain INCLUDE, LIB, or LIBPATH, and these must be set for the cl compiler to work. -- The C compiler identification is unknown -- The CXX compiler identification is unknown CMake Warning at D:/Tools/CMake 2.8/share/cmake-2.8/Modules/Platform/Windows-cl.cmake:32 (ENABLE_LANGUAGE): To use the NMake generator, cmake must be run from a shell that can use the compiler cl from the command line. This environment does not contain INCLUDE, LIB, or LIBPATH, and these must be set for the cl compiler to work. Call Stack (most recent call first): D:/Tools/CMake 2.8/share/cmake-2.8/Modules/CMakeCInformation.cmake:58 (INCLUDE) CMakeLists.txt:2 (project) CMake Error: your RC compiler: "CMAKE_RC_COMPILER-NOTFOUND" was not found. Please set CMAKE_RC_COMPILER to a valid compiler path or name. -- Check for CL compiler version -- Check for CL compiler version - failed -- Check if this is a free VC compiler -- Check if this is a free VC compiler - yes -- Using FREE VC TOOLS, NO DEBUG available -- Check for working C compiler: cl CMake Warning at CMakeLists.txt:2 (PROJECT): To use the NMake generator, cmake must be run from a shell that can use the compiler cl from the command line. This environment does not contain INCLUDE, LIB, or LIBPATH, and these must be set for the cl compiler to work. CMake Warning at D:/Tools/CMake 2.8/share/cmake-2.8/Modules/Platform/Windows-cl.cmake:32 (ENABLE_LANGUAGE): To use the NMake generator, cmake must be run from a shell that can use the compiler cl from the command line. This environment does not contain INCLUDE, LIB, or LIBPATH, and these must be set for the cl compiler to work. Call Stack (most recent call first): D:/Tools/CMake 2.8/share/cmake-2.8/Modules/CMakeCInformation.cmake:58 (INCLUDE) CMakeLists.txt:2 (PROJECT) CMake Error at D:/Tools/CMake 2.8/share/cmake-2.8/Modules/CMakeRCInformation.cmake:22 (GET_FILENAME_COMPONENT): get_filename_component called with incorrect number of arguments Call Stack (most recent call first): D:/Tools/CMake 2.8/share/cmake-2.8/Modules/Platform/Windows-cl.cmake:32 (ENABLE_LANGUAGE) D:/Tools/CMake 2.8/share/cmake-2.8/Modules/CMakeCInformation.cmake:58 (INCLUDE) CMakeLists.txt:2 (PROJECT) CMake Error: CMAKE_RC_COMPILER not set, after EnableLanguage CMake Error: your C compiler: "cl" was not found. Please set CMAKE_C_COMPILER to a valid compiler path or name. CMake Error: Internal CMake error, TryCompile configure of cmake failed -- Check for working C compiler: cl -- broken CMake Error at D:/Tools/CMake 2.8/share/cmake-2.8/Modules/CMakeTestCCompiler.cmake:52 (MESSAGE): The C compiler "cl" is not able to compile a simple test program. It fails with the following output: CMake will not be able to correctly generate this project. Call Stack (most recent call first): CMakeLists.txt:2 (project) CMake Error: your C compiler: "cl" was not found. Please set CMAKE_C_COMPILER to a valid compiler path or name. CMake Error: your CXX compiler: "cl" was not found. Please set CMAKE_CXX_COMPILER to a valid compiler path or name. -- Configuring incomplete, errors occurred! 
What are the complete requirements to use cmake successfully on Windows XP? I've already installed Visual Studio under D:\Tools\Microsoft Visual Studio 9.0
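In case someone hits the same wall: every one of those errors says cmake cannot find the cl compiler, which usually just means the Visual C++ environment variables are not set in the shell. A likely fix (the vcvarsall.bat path is assumed from the install location mentioned above) is to initialise the compiler environment first and then re-run cmake, or to ask for the Visual Studio generator explicitly:
"D:\Tools\Microsoft Visual Studio 9.0\VC\vcvarsall.bat" x86
cd D:\Works\c\cmake\build
cmake ..
rem or, without setting up the environment, generate a Visual Studio solution instead of NMake makefiles:
cmake -G "Visual Studio 9 2008" ..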

    Read the article

  • MVC 2 with IIS 6 Problems

    - by SlackerCoder
    Hey guys, I'm using IIS 6 on a Windows 2003 Server and I am trying to get an MVC2 project installed on that machine. I am having nightmare-ish problems doing so! I've looked up TONS of references on what to do, and not 1 single one works. (They work for MVC1 projects, as I have a few of those running already using said solutions). Does anyone have any tips/hints/ideas on what needs to be done for MVC2 projects with IIS 6? I am definitely pulling my hair out over this. I have tried it on 2 of my dev servers, and both get the same result. The closest I can get to a served page is an error page "Object reference not set to an instance of an object", however, the page has try/catch blocks that are being ignored, so I dont think its running the code on the controller, I think it's saying that the controller is the error. (For the reference, the error in question is directed at the HomeController.cs file). What I've tried: Wildcard mapping Changing routes to {controller}.mvc Changing routes to {controller}.aspx Adding the .mvc extension to IIS Modifying routes in Global.asax There's a LOT of code in this project so far, so I will only post the first page(s) that should get served: MASTER PAGE: <div class="page"> <div id="header"> <div id="title"> <h1>Meritain RedCard Interface 2.0</h1> </div> <!-- This is the main menu. Each security role will have access to certain buttons. --> <div id="menucontainer"> <% if (Session["UserData"] != null) { %> <% if (/*User Security Checks Out*/) { %> <ul id="menu"> <li><%= Html.ActionLink("Home", "Index", "Home")%></li> <li><%= Html.ActionLink("Selection", "Index", "Select", new { area = "Selector" }, null)%></li> <li><%= Html.ActionLink("Audit", "Index", "Audit", new { area = "Auditor" }, null)%></li> <li><%= Html.ActionLink("Setup", "Index", "Setup", new { area = "Setup" }, null)%></li> <li><%= Html.ActionLink("About", "About", "Home")%></li> </ul> <% } %> <% } %> </div> </div> <div id="main"> <asp:ContentPlaceHolder ID="MainContent" runat="server" /> <div id="footer"> </div> </div> </div> Default.aspx.cs: [I added this file as a potential solution, since it works with MVC 1] protected void Page_Load(object sender, EventArgs e) { string originalPath = Request.Path; HttpContext.Current.RewritePath(Request.ApplicationPath, false); IHttpHandler httpHandler = new MvcHttpHandler(); httpHandler.ProcessRequest(HttpContext.Current); HttpContext.Current.RewritePath(originalPath, false); } HomeController.cs: public ActionResult Index() { loadApplication(); ViewData["Message"] = "Welcome to ASP.NET MVC!"; return View(); } public ActionResult About() { return View(); } private void loadApplication() { Session["UserData"] = CreateUserSecurity(HttpContext.User.Identity.Name.ToString()); } I did not list the CreateUserSecurity method, but all it does it call the DB using the Username and returns the record in the database that matches the username. EDIT: Added code and what I've tried so far (as requested).
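A guess at where the "Object reference not set" may be coming from (this is only a hunch and the check below is illustrative, not a confirmed fix): on IIS 6 the request can reach the controller without an authenticated Windows identity, so HttpContext.User.Identity.Name is empty, CreateUserSecurity finds no matching record, and the first dereference of the result throws. A defensive version of loadApplication makes that visible:
private void loadApplication()
{
    // Guard against an unauthenticated request and a missing user record,
    // either of which would otherwise surface later as a NullReferenceException.
    if (HttpContext.User == null || !HttpContext.User.Identity.IsAuthenticated)
    {
        throw new InvalidOperationException("No authenticated user - check that Integrated Windows Authentication is enabled for the site in IIS 6.");
    }
    var userData = CreateUserSecurity(HttpContext.User.Identity.Name);
    if (userData == null)
    {
        throw new InvalidOperationException("No security record found for user: " + HttpContext.User.Identity.Name);
    }
    Session["UserData"] = userData;
}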

    Read the article

  • Debugging Silverlight crash

    - by RHLopez
    I am trying to debug an IE 8 crash caused by a Silverlight application. I managed to find some articles on how to do a memory dump when a process crashes. I loaded the dump in windbg and ran !analyze -v. Below is the result. I am stuck at what further steps I can take to figure out what module or library that is running in Silverlight is causing the crash. So all I have right now is the crash in IE is caused by an Access violation (attempt to execute non-executable address) and from what is in the stack trace that some animation is running in Silverlight. Any tips or articles that would help me debug this will be appreciated. This dump file has an exception of interest stored in it. The stored exception information can be accessed via .ecxr. (1864.1560): Access violation - code c0000005 (first/second chance not available) eax=00000000 ebx=00000000 ecx=1b11fc58 edx=5c6f007d esi=00000000 edi=193b8e08 eip=00000000 esp=0f61f750 ebp=0f61f76c iopl=0 nv up ei pl nz na pe nc cs=0023 ss=002b ds=002b es=002b fs=0053 gs=002b efl=00010206 00000000 ?? ??? FAULTING_IP: +56b3952f04ebde68 748bc9f1 654c dec esp EXCEPTION_RECORD: ffffffff -- (.exr 0xffffffffffffffff) ExceptionAddress: 748bc9f1 ExceptionCode: c0000005 (Access violation) ExceptionFlags: 00000000 NumberParameters: 2 Parameter[0]: 00000008 Parameter[1]: 00000000 Attempt to execute non-executable address 00000000 PROCESS_NAME: iexplore.exe ERROR_CODE: (NTSTATUS) 0xc0000005 - The instruction at 0x%08lx referenced memory at 0x%08lx. The memory could not be %s. EXCEPTION_CODE: (NTSTATUS) 0xc0000005 - The instruction at 0x%08lx referenced memory at 0x%08lx. The memory could not be %s. EXCEPTION_PARAMETER1: 00000008 EXCEPTION_PARAMETER2: 00000000 WRITE_ADDRESS: 00000000 FOLLOWUP_IP: agcore!CFrameworkElement::SetValue+1d7 5c704fa8 84c0 test al,al FAILED_INSTRUCTION_ADDRESS: +56b3952f04ebde68 748bc9f1 654c dec esp NTGLOBALFLAG: 0 APPLICATION_VERIFIER_FLAGS: 0 FAULTING_THREAD: 00001560 BUGCHECK_STR: APPLICATION_FAULT_SOFTWARE_NX_FAULT_NULL PRIMARY_PROBLEM_CLASS: SOFTWARE_NX_FAULT_NULL DEFAULT_BUCKET_ID: SOFTWARE_NX_FAULT_NULL LAST_CONTROL_TRANSFER: from 5c704fa8 to 00000000 STACK_TEXT: WARNING: Frame IP not in any known module. Following frames may be wrong. 
0f61f74c 5c704fa8 1b17a134 193b8e08 0e690e14 0x0 0f61f76c 5c712360 0e690e14 1b17a134 0e690e14 agcore!CFrameworkElement::SetValue+0x1d7 0f61f788 5c7123a8 0e690e14 1b17a134 0e690e14 agcore!CShape::SetValue+0x72 0f61f7a0 5c70a6ff 0e690e14 1b17a134 00000000 agcore!CEllipse::SetValue+0x3b 0f61f7d0 5c752c2b 1b17a090 193b8e08 00000000 agcore!CAnimation::DoSetValue+0x50 0f61f810 5c7a7fb1 0f61f884 0f61f868 1b17a090 agcore!CAnimation::UpdateAnimationUsingKeyFrames+0x3b5 0f61f82c 5c707146 00000000 00000000 00000000 agcore!CAnimation::UpdateAnimation+0x184 0f61f87c 5c7071e5 3e4c8000 0f61f8cc 00000000 agcore!CTimeline::ComputeState+0x13a 0f61f89c 5c706d49 193f82b0 0f61f8cc 0f61f8d4 agcore!CTimelineGroup::ComputeState+0x8c 0f61f8ac 5c7069c7 3e4c8000 0f61f8cc 0b111f60 agcore!CStoryboard::ComputeState+0x48 0f61f8d4 5c706a29 0e6a0ca0 00000000 0e490070 agcore!CTimeManager::Tick+0x79 0f61f8e8 5c78f960 0b0e6d68 0f61f990 00000000 agcore!CCoreServices::Tick+0x21 0f61f940 5c706ac2 0b111f60 0e42ca08 ffffffff agcore!CCoreServices::Draw+0x140 0f61f964 67ac141c 0af99b90 00000000 0f61f990 agcore!CCoreServices::Draw+0x2d 0f61f9b4 67a933c2 0f61f9c8 00000000 00000000 npctrl!CXcpBrowserHost::OnTick+0x1b1 0f61f9e0 67a927c6 0064069c 00000402 00000000 npctrl!CXcpDispatcher::Tick+0xf3 0f61fa08 67a92709 0064069c 00000402 00000000 npctrl!CXcpDispatcher::OnReentrancyProtectedWindowMessage+0xcd 0f61fa28 764b6238 0064069c 00000402 00000000 npctrl!CXcpDispatcher::WindowProc+0xb8 0f61fa54 764b68ea 67a9269d 0064069c 00000402 user32!InternalCallWinProc+0x23 0f61facc 764b7d31 00000000 67a9269d 0064069c user32!UserCallWinProcCheckWow+0x109 0f61fb2c 764b7dfa 67a9269d 00000000 0f61fbb4 user32!DispatchMessageWorker+0x3bc 0f61fb3c 6fe504a6 0f61fb54 00000000 0ab11908 user32!DispatchMessageW+0xf 0f61fbb4 6fe60446 0af956a0 00000000 0b18a338 ieframe!CTabWindow::_TabWindowThreadProc+0x452 0f61fc6c 769d49bd 0ab11908 00000000 0f61fc88 ieframe!LCIETab_ThreadProc+0x2c1 0f61fc7c 76e53677 0b18a338 0f61fcc8 77829d72 iertutil!CIsoScope::RegisterThread+0xab 0f61fc88 77829d72 0b18a338 7dbc895d 00000000 kernel32!BaseThreadInitThunk+0xe 0f61fcc8 77829d45 769d49af 0b18a338 00000000 ntdll!__RtlUserThreadStart+0x70 0f61fce0 00000000 769d49af 0b18a338 00000000 ntdll!_RtlUserThreadStart+0x1b SYMBOL_STACK_INDEX: 1 SYMBOL_NAME: agcore!CFrameworkElement::SetValue+1d7 FOLLOWUP_NAME: MachineOwner MODULE_NAME: agcore IMAGE_NAME: agcore.dll DEBUG_FLR_IMAGE_TIMESTAMP: 4a67e422 STACK_COMMAND: ~44s; .ecxr ; kb FAILURE_BUCKET_ID: SOFTWARE_NX_FAULT_NULL_c0000005_agcore.dll!CFrameworkElement::SetValue BUCKET_ID: APPLICATION_FAULT_SOFTWARE_NX_FAULT_NULL_BAD_IP_agcore!CFrameworkElement::SetValue+1d7
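    The bucket already narrows things down: the crash is a call through a null code address inside agcore.dll (the native Silverlight rendering core) while a storyboard animation is ticking. A short follow-up sequence in windbg, assuming the public Microsoft symbol server is reachable (the thread number is taken from the STACK_COMMAND line in the dump above, and the cache path is just an example):

        .symfix c:\symcache    $$ point at the Microsoft public symbol server
        .reload                $$ reload symbols for the loaded modules
        ~44s                   $$ switch to the faulting thread, per STACK_COMMAND
        .ecxr                  $$ restore the exception context record
        kb                     $$ stack with the first three arguments, as shown above
        lmvm agcore            $$ exact file version/timestamp of the Silverlight agcore.dll

    With the agcore.dll version identified, it is worth checking whether the crash still reproduces on the current Silverlight runtime before trying to isolate which Storyboard in the application triggers it (for example by disabling animations one at a time).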

    Read the article

  • BizTalk server problem

    - by WtFudgE
    Hi, we have a biztalk server (a virtual one (1!)...) at our company, and an sql server where the data is being kept. Now we have a lot of data traffic. I'm talking about hundred of thousands. So I'm actually not even sure if one server is pretty safe, but our company is not that easy to convince. Now recently we have a lot of problems. Allow me to situate in detail, so I'm not missing anything: Our server has 5 applications: One with 3 orchestrations, 12 send ports, 16 receive locations. One with 4 orchestrations, 32 send ports, 20 receive locations. One with 4 orchestrations, 24 send ports, 20 receive locations. One with 47 (yes 47) orchestrations, 37 send ports, 6 receive locations. One with common application with a couple of resources. Our problems have occured since we deployed the applications with the 47 orchestrations. A lot of these orchestrations use assign shapes which use c# code to do the mapping. This is because we use HL7 extensions and this is kind of special, so by using c# code & xpath it was a lot easier to do the mapping because a lot of these schema's look alike. The c# reads in XmlNodes received through xpath, and returns XmlNode which are then assigned again to biztalk messages. I'm not sure if this could be the cause, but I thought I'd mention it. The send and receive ports have a lot of different types: File, MQSeries, SQL, MLLP, FTP. Each of these types have a different host instances, to balance out the load. Our orchestrations use the BiztalkApplication host. On this server also a couple of scripts are running, mostly ftp upload scripts & also a zipper script, which zips files every half an hour in a daily zip and deletes the zip files after a month. We use this zipscript on our backup files (we backup a lot, backups are also on our server), we did this because the server had problems with sending files to a location where there were a lot (A LOT) of files, so after the files were reduced to zips it went better. Now the problems we are having recently are mainly two major problems: Our most important problem is the following. We kept a receive location with a lot of messages on a queue for testing. After we start this receive location which uses the 47 orchestrations, the running service instances start to sky rock. Ok, this is pretty normal. Let's say about 10000, and then we stop the receive location to see how biztalk handles these 10000 instances. Normally they would go down pretty fast, and it does sometimes, but after a while it starts to "throttle", meaning they just stop being processed and the service instances stay at the same number, for example in 30 seconds it goes down from 10000 to 4000 and then it stays at 4000 and it lowers very very very slowly, like 30 in 5minutes or something. So this means, that all the other service instances of the other applications are also stuck in here, and they are also not processed. We noticed that after restarting our host instances the instance number went down fast again. So we tried to selectively restart different host instances to locate the problem. We noticed that eventually restarting the file send/receive host instance would do the trick. So we thought file sends would be the problem. Concidering that we make a lot of backups. So we replaced the file type backups with mqseries backups. The same problem occured, and funny thing, restarting the file send/receive host still fixes the problem. No errors can be found in the event viewer either. A second problem we're having is. 
That sometimes at around 6 am, all or part of the host instances are being stopped. In the event viewer we noticed the following errors (there is more than one): The receive location "MdnBericht SQL" with URL "SQL://ZNACDBPEG/mdnd0001/" is shutting down. Details: "The error threshold has been exceeded. The receive location is shutting down.". The Messaging Engine failed to add a receive location "M2m Othello Export Start Bestand" with URL "\m2mservices\Othello_import$\DataFilter Start*.xml" to the adapter "FILE". Reason: "The FILE adapter cannot access the folder \m2mservices\Othello_import$\DataFilter Start. Verify this folder exists. Error: Logon failure: unknown user name or bad password.". The FILE adapter cannot access the folder \m2mservices\Othello_import$\DataFilter Start. Verify this folder exists. Error: Logon failure: unknown user name or bad password. An attempt to connect to "BizTalkMsgBoxDb" SQL Server database on server "ZNACDBBTS" failed. Error: "Login failed for user ''. The user is not associated with a trusted SQL Server connection." It would seem that there's a login failure at this time and that, because of it, other services are also experiencing problems and are eventually shut down. The thing is, our user is an admin, and it's impossible that its password is wrong "sometimes". We have considered that the problem could be due to an infrastructure issue, but that's not really our department. I know it's a long post, but we're not sure anymore what to do. Would adding another server and balancing the load solve our problems? Is there a way to measure our load balance and know where to start splitting? What are normal numbers for load, etc.? I appreciate any answers because these issues are getting worse and we're also on a deadline. Thanks a lot for any replies!

    Read the article

  • Creating static NAT blocks outbound traffic Cisco ASA

    - by natediggs
    Hi Everyone, I have two web servers sitting behind a Cisco ASA 5505, which I don't have much experience with. I'm trying to create two static NATs. One static NAT that goes to xx.xx.xx.150 and another that goes to xx.xx.xx.151. I've created the static NAT for the .150 web server and it works FINE. Incoming and outgoing traffic work great. This is the staging web server. I now need to duplicate the setup for the production web server. So, I connect the webserver to the firewall, change the public IP address on one of the NICs reboot the server and I have outbound internet access. Then I run the command: static (inside,outside) xx.xx.xx.150 192.168.1.x which is successful. I then run the command: access-list acl-outside permit tcp any host xx.xx.xx.150 eq 80 Which is successful. I then try to browse the internet and I get nothing. I try to telnet in through port 80 and I get nothing (though I'm guessing because the response to the telnet request is being blocked). I've tried this with the production web server and then I tried it with another web server that is for internal testing and have the exact same problem. Both work fine until I run the static NAT rule and then no outbound internet access. I have a feeling that it's something simple that I'm missing, but my limited experience with this device is killing me. Below I've pasted the current configuration. I'm currently trying to get this to work on the .153 server which is the internal testing server. Once I can verify that works, I'll try it with production. : Saved : ASA Version 8.2(4) ! hostname QG domain-name XX.com enable password passwd names ! interface Ethernet0/0 switchport access vlan 2 ! interface Ethernet0/1 ! interface Ethernet0/2 ! interface Ethernet0/3 ! interface Ethernet0/4 ! interface Ethernet0/5 ! interface Ethernet0/6 ! interface Ethernet0/7 ! interface Vlan1 nameif inside security-level 100 ip address 192.168.1.1 255.255.255.0 ! interface Vlan2 nameif outside security-level 0 ip address XX.XX.XX.148 255.255.255.0 ! interface Vlan3 shutdown no forward interface Vlan1 nameif dmz security-level 50 ip address dhcp ! 
boot system disk0:/asa824.bin ftp mode passive clock timezone EST -5 clock summer-time EDT recurring dns server-group DefaultDNS domain-name fw.XXgroup.com same-security-traffic permit inter-interface access-list acl-outside extended permit tcp any host XX.XX.XX.150 eq www access-list acl-outside extended permit tcp any host XX.XX.XX.150 eq https access-list acl-outside extended permit tcp any host XX.XX.XX.151 eq www access-list acl-outside extended permit tcp any host XX.XX.XX.151 eq https access-list acl-outside extended permit tcp any host XX.XX.XX.153 eq www access-list inside_access_in extended permit ip 192.168.1.0 255.255.255.0 any access-list inside_nat0_outbound extended permit ip any 192.168.1.32 255.255.255.240 pager lines 24 logging enable logging asdm informational mtu inside 1500 mtu outside 1500 mtu dmz 1500 ip local pool VPNIPs 192.168.1.35-192.168.1.44 mask 255.255.255.0 icmp unreachable rate-limit 1 burst-size 1 asdm image disk0:/asdm-635.bin no asdm history enable arp timeout 14400 global (outside) 1 interface nat (inside) 0 access-list inside_nat0_outbound nat (inside) 1 0.0.0.0 0.0.0.0 static (inside,outside) XX.XX.XX150 192.168.1.100 netmask 255.255.255.255 static (inside,outside) XX.XX.XX153 192.168.1.102 netmask 255.255.255.255 access-group acl-outside in interface outside route outside 0.0.0.0 0.0.0.0 XX.XX.XX129 1 timeout xlate 3:00:00 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02 timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00 timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00 timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute timeout tcp-proxy-reassembly 0:01:00 dynamic-access-policy-record DfltAccessPolicy aaa authorization command LOCAL http server enable http 192.168.1.0 255.255.255.0 inside http 0.0.0.0 0.0.0.0 outside no snmp-server location no snmp-server contact snmp-server enable traps snmp authentication linkup linkdown coldstart crypto ipsec transform-set ESP-3DES-SHA esp-3des esp-sha-hmac crypto ipsec security-association lifetime seconds 28800 crypto ipsec security-association lifetime kilobytes 4608000 crypto dynamic-map outside_dyn_map 20 set pfs group1 crypto dynamic-map outside_dyn_map 20 set transform-set ESP-3DES-SHA crypto map outside_map 65535 ipsec-isakmp dynamic outside_dyn_map crypto map outside_map interface outside crypto isakmp enable outside crypto isakmp policy 10 authentication crack encryption 3des hash sha group 2 lifetime 86400 no crypto isakmp nat-traversal client-update enable telnet timeout 5 ssh timeout 5 console timeout 0 dhcpd auto_config outside ! dhcpd address 192.168.1.2-192.168.1.33 inside dhcpd dns 208.77.88.4 interface inside dhcpd enable inside ! 
threat-detection basic-threat threat-detection statistics access-list no threat-detection statistics tcp-intercept webvpn enable outside svc image disk0:/sslclient-win-1.1.0.154.pkg 1 svc image disk0:/anyconnect-win-2.5.2019-k9.pkg 2 svc enable group-policy ATSAdmin internal group-policy ATSAdmin attributes dns-server value 208.77.88.4 208.85.174.9 vpn-tunnel-protocol IPSec svc webvpn webvpn url-list none svc keep-installer installed svc rekey method ssl svc ask enable username qgadmin password /oHfeGQ/R.bd3KPR encrypted privilege 15 username benl password 0HNIGQNI0uruJvhW encrypted privilege 0 username benl attributes vpn-group-policy ATSAdmin username kuzma password rH7MM7laoynyvf9U encrypted privilege 0 username kuzma attributes vpn-group-policy ATSAdmin username nate password BXHOURyT37e4O5mt encrypted privilege 0 username nate attributes vpn-group-policy ATSAdmin tunnel-group ATSAdmin type remote-access tunnel-group ATSAdmin general-attributes address-pool VPNIPs default-group-policy ATSAdmin tunnel-group SSLVPN type remote-access tunnel-group SSLVPN general-attributes address-pool VPNIPs default-group-policy ATSAdmin ! class-map inspection_default match default-inspection-traffic ! ! policy-map type inspect dns preset_dns_map parameters message-length maximum 512 policy-map global_policy class inspection_default inspect dns preset_dns_map inspect ftp inspect h323 h225 inspect h323 ras inspect rsh inspect rtsp inspect esmtp inspect sqlnet inspect skinny inspect sunrpc inspect xdmcp inspect sip inspect netbios inspect tftp inspect ip-options ! service-policy global_policy global privilege cmd level 3 mode exec command perfmon privilege cmd level 3 mode exec command ping privilege cmd level 3 mode exec command who privilege cmd level 3 mode exec command logging privilege cmd level 3 mode exec command failover privilege show level 5 mode exec command running-config privilege show level 3 mode exec command reload privilege show level 3 mode exec command mode privilege show level 3 mode exec command firewall privilege show level 3 mode exec command interface privilege show level 3 mode exec command clock privilege show level 3 mode exec command dns-hosts privilege show level 3 mode exec command access-list privilege show level 3 mode exec command logging privilege show level 3 mode exec command ip privilege show level 3 mode exec command failover privilege show level 3 mode exec command asdm privilege show level 3 mode exec command arp privilege show level 3 mode exec command route privilege show level 3 mode exec command ospf privilege show level 3 mode exec command aaa-server privilege show level 3 mode exec command aaa privilege show level 3 mode exec command crypto privilege show level 3 mode exec command vpn-sessiondb privilege show level 3 mode exec command ssh privilege show level 3 mode exec command dhcpd privilege show level 3 mode exec command vpn privilege show level 3 mode exec command blocks privilege show level 3 mode exec command uauth privilege show level 3 mode configure command interface privilege show level 3 mode configure command clock privilege show level 3 mode configure command access-list privilege show level 3 mode configure command logging privilege show level 3 mode configure command ip privilege show level 3 mode configure command failover privilege show level 5 mode configure command asdm privilege show level 3 mode configure command arp privilege show level 3 mode configure command route privilege show level 3 mode configure command aaa-server privilege show 
level 3 mode configure command aaa privilege show level 3 mode configure command crypto privilege show level 3 mode configure command ssh privilege show level 3 mode configure command dhcpd privilege show level 5 mode configure command privilege privilege clear level 3 mode exec command dns-hosts privilege clear level 3 mode exec command logging privilege clear level 3 mode exec command arp privilege clear level 3 mode exec command aaa-server privilege clear level 3 mode exec command crypto privilege cmd level 3 mode configure command failover privilege clear level 3 mode configure command logging privilege clear level 3 mode configure command arp privilege clear level 3 mode configure command crypto privilege clear level 3 mode configure command aaa-server prompt hostname context call-home profile CiscoTAC-1 no active destination address http https://tools.cisco.com/its/service/oddce/services/DDCEService destination address email [email protected] destination transport-method http subscribe-to-alert-group diagnostic subscribe-to-alert-group environment subscribe-to-alert-group inventory periodic monthly subscribe-to-alert-group configuration periodic monthly subscribe-to-alert-group telemetry periodic daily Cryptochecksum:0ed0580e151af288d865f4f3603d792a : end asdm image disk0:/asdm-635.bin no asdm history enable
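    One thing worth checking, offered as a hedged guess based on the config above: the ASA does not rebuild an existing translation on its own, so a host that was already going out through the interface PAT (global (outside) 1 / nat (inside) 1) keeps its old dynamic xlate after the static is added, and outbound traffic can keep failing until that entry is cleared. A minimal sequence for the .153 test server, reusing the addresses already in the config:

        ! one-to-one static and the matching inbound ACL entry (already present above)
        static (inside,outside) XX.XX.XX.153 192.168.1.102 netmask 255.255.255.255
        access-list acl-outside extended permit tcp any host XX.XX.XX.153 eq www
        ! flush the stale dynamic translation so the static one is built on the next packet
        clear xlate local 192.168.1.102

    clear xlate with no arguments also works but briefly drops every active translation, so the local form is gentler on a production box. If the problem persists after that, "show xlate | include 192.168.1.102" will confirm which translation the host is actually getting.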

    Read the article

  • optimizing iPhone OpenGL ES fill rate

    - by NateS
    I have an Open GL ES game on the iPhone. My framerate is pretty sucky, ~20fps. Using the Xcode OpenGL ES performance tool on an iPhone 3G, it shows: Renderer Utilization: 95% to 99% Tiler Utilization: ~27% I am drawing a lot of pretty large images with a lot of blending. If I reduce the number of images drawn, framerates go from ~20 to ~40, though the performance tool results stay about the same (renderer still maxed). I think I'm being limited by the fill rate of the iPhone 3G, but I'm not sure. My questions are: How can I determine with more granularity where the bottleneck is? That is my biggest problem, I just don't know what is taking all the time. If it is fillrate, is there anything I do to improve it besides just drawing less? I am using texture atlases. I have tried to minimize image binds, though it isn't always possible (drawing order, not everything fits on one 1024x1024 texture, etc). Every frame I do 10 image binds. This seem pretty reasonable, but I could be mistaken. I'm using vertex arrays and glDrawArrays. I don't really have a lot of geometry. I can try to be more precise if needed. Each image is 2 triangles and I try to batch things were possible, though often (maybe half the time) images are drawn with individual glDrawArrays calls. Besides the images, I have ~60 triangles worth of geometry being rendered in ~6 glDrawArrays calls. I often glTranslate before calling glDrawArrays. Would it improve the framerate to switch to VBOs? I don't think it is a huge amount of geometry, but maybe it is faster for other reasons? Are there certain things to watch out for that could reduce performance? Eg, should I avoid glTranslate, glColor4g, etc? I'm using glScissor in a 3 places per frame. Each use consists of 2 glScissor calls, one to set it up, and one to reset it to what it was. I don't know if there is much of a performance impact here. If I used PVRTC would it be able to render faster? Currently all my images are GL_RGBA. I don't have memory issues. Here is a rough idea of what I'm drawing, in this order: 1) Switch to perspective matrix. 2) Draw a full screen background image 3) Draw a full screen image with translucency (this one has a scrolling texture). 4) Draw a few sprites. 5) Switch to ortho matrix. 6) Draw a few sprites. 7) Switch to perspective matrix. 8) Draw sprites and some other textured geometry. 9) Switch to ortho matrix. 10) Draw a few sprites (eg, game HUD). Steps 1-6 draw a bunch of background stuff. 8 draws most of the game content. 10 draws the HUD. As you can see, there are many layers, some of them full screen and some of the sprites are pretty large (1/4 of the screen). The layers use translucency, so I have to draw them in back-to-front order. This is further complicated by needing to draw various layers in ortho and others in perspective. I will gladly provide additional information if reqested. Thanks in advance for any performance tips or general advice on my problem!
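    Those two counters together usually mean the app is fragment/fill-rate bound rather than geometry bound (Renderer Utilization is the pixel-rendering side, Tiler Utilization is the vertex/tiling side), so switching to VBOs is unlikely to move the needle much, while anything that reduces blended overdraw or the memory bandwidth per texel can. PVRTC helps with the latter. A hedged sketch of uploading a 4-bpp PVRTC atlas on OpenGL ES 1.1, assuming data/dataLength hold the raw payload of a .pvr file produced by Apple's texturetool and the image is square and power-of-two (a PVRTC requirement):

        #include <stddef.h>
        #include <OpenGLES/ES1/gl.h>
        #include <OpenGLES/ES1/glext.h>

        GLuint LoadPvrtc4bpp(const void *data, size_t dataLength, GLsizei width, GLsizei height)
        {
            GLuint tex = 0;
            glGenTextures(1, &tex);
            glBindTexture(GL_TEXTURE_2D, tex);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            /* 4 bits per pixel, RGBA variant; GL_COMPRESSED_RGB_PVRTC_4BPPV1_IMG for opaque layers */
            glCompressedTexImage2D(GL_TEXTURE_2D, 0,
                                   GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG,
                                   width, height, 0,
                                   (GLsizei)dataLength, data);
            return tex;
        }

    Beyond that, the cheapest wins on a fill-rate-bound 3G are usually drawing the opaque full-screen background with blending disabled for that layer only, and shrinking or splitting the translucent full-screen layers, since every fully covered pixel that is blended twice costs as much as two screens of fill.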

    Read the article

  • ADO.NET (WCF) Data Services Query Interceptor Hangs IIS

    - by PreMagination
    I have an ADO.NET Data Service that's supposed to provide read-only access to a somewhat complex database. Logically I have table-per-type (TPT) inheritance in my data model but the EDM doesn't implement inheritance. (Limitation of EF and navigation properties on derived types. STILL not fixed in EF4!) I can query my EDM directly (using a separate project) using a copy of the query I'm trying to run against the web service, results are returned within 10 seconds. Disabling the query interceptors I'm able to make the same query against the web service, results are returned similarly quickly. I can enable some of the query interceptors and the results are returned slowly, up to a minute or so later. Alternatively, I can enable all the query interceptors, expand less of the properties on the main object I'm querying, and results are returned in a similar period of time. (I've increased some of the timeout periods) Up til this point Sql Profiler indicates the slow-down is the database. (That's a post for a different day) But when I enable all my query interceptors and expand all the properties I'd like to have the IIS worker process pegs the CPU for 20 minutes and a query is never even made against the database. This implies to me that yes, my implementation probably sucks but regardless the Data Services "tier" is having an issue it shouldn't. WCF tracing didn't reveal anything interesting to my untrained eye. Details: Data model: Agent-Person-Student Student has a collection of referrals Students and referrals are private, queries against the web service should only return "your" students and referrals. This means Person and Agent need to be filtered too. Other entities (Agent-Organization-School) can be accessed by anyone who has authenticated. The existing security model is poorly suited to perform this type of filtering for this type of data access, the query interceptors are complicated and cause EF to generate some entertaining sql queries. Sample Interceptor [QueryInterceptor("Agents")] public Expression<Func<Agent, Boolean>> OnQueryAgents() { //Agent is a Person(1), Educator(2), Student(3), or Other Person(13); allow if scope permissions exist return ag => (ag.AgentType.AgentTypeId == 1 || ag.AgentType.AgentTypeId == 2 || ag.AgentType.AgentTypeId == 3 || ag.AgentType.AgentTypeId == 13) && ag.Person.OrganizationPersons.Count<OrganizationPerson>(op => op.Organization.ScopePermissions.Any<ScopePermission> (p => p.ApplicationRoleAccount.Account.UserName == HttpContext.Current.User.Identity.Name && p.ApplicationRoleAccount.Application.ApplicationId == 124) || op.Organization.HierarchyDescendents.Any<OrganizationsHierarchy>(oh => oh.AncestorOrganization.ScopePermissions.Any<ScopePermission> (p => p.ApplicationRoleAccount.Account.UserName == HttpContext.Current.User.Identity.Name && p.ApplicationRoleAccount.Application.ApplicationId == 124))) > 0; } The query interceptors for Person, Student, Referral are all very similar, ie they traverse multiple same/similar tables to look for ScopePermissions as above. 
Sample Query
var referrals = (from r in service.Referrals
    .Expand("Organization/ParentOrganization")
    .Expand("Educator/Person/Agent")
    .Expand("Student/Person/Agent")
    .Expand("Student")
    .Expand("Grade")
    .Expand("ProblemBehavior")
    .Expand("Location")
    .Expand("Motivation")
    .Expand("AdminDecision")
    .Expand("OthersInvolved")
    where r.DateCreated >= coupledays && r.DateDeleted == null
    select r);
Any suggestions or tips would be greatly appreciated, for fixing my current implementation or for developing a new one, with the caveat that the database can't be changed and that ultimately I need to expose a large portion of the database via a web service that limits data access to the data each caller is authorized for, for the purpose of data integration with multiple outside parties. THANK YOU!!!
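    One direction worth sketching, with the caveat that the entity and property names below are illustrative and that Contains over an in-memory list only translates to SQL on EF4 (on the original EF it throws): resolve the caller's permitted organization ids once per request, then let every interceptor filter against that small list instead of repeating the nested ScopePermissions subqueries. That keeps each interceptor's expression tree, and the SQL EF has to generate for every expanded entity set, dramatically smaller, which is the most likely place the 20-minute CPU spin is going.

        // Hedged sketch inside the existing DataService<T> class; GetScopedOrgIds and
        // OrganizationId are illustrative names, not part of the real model.
        [QueryInterceptor("Agents")]
        public Expression<Func<Agent, bool>> OnQueryAgents()
        {
            List<int> orgIds = GetScopedOrgIds();   // cache per request (e.g. HttpContext.Items) or per user

            return ag =>
                (ag.AgentType.AgentTypeId == 1 || ag.AgentType.AgentTypeId == 2 ||
                 ag.AgentType.AgentTypeId == 3 || ag.AgentType.AgentTypeId == 13) &&
                ag.Person.OrganizationPersons.Any(op => orgIds.Contains(op.Organization.OrganizationId));
        }

        private List<int> GetScopedOrgIds()
        {
            string user = HttpContext.Current.User.Identity.Name;
            // One focused query: organizations with a direct scope permission for this user,
            // plus those whose ancestor carries the permission, materialized into memory.
            return CurrentDataSource.Organizations
                .Where(o => o.ScopePermissions.Any(p =>
                                p.ApplicationRoleAccount.Account.UserName == user &&
                                p.ApplicationRoleAccount.Application.ApplicationId == 124)
                         || o.HierarchyDescendents.Any(oh =>
                                oh.AncestorOrganization.ScopePermissions.Any(p =>
                                    p.ApplicationRoleAccount.Account.UserName == user &&
                                    p.ApplicationRoleAccount.Application.ApplicationId == 124)))
                .Select(o => o.OrganizationId)
                .ToList();
        }

    If moving off the original EF is not an option, the same idea still applies, but the Contains call has to be replaced by an OR chain built with expression trees (or with something like LINQKit's PredicateBuilder).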

    Read the article

  • Game loop and time tracking

    - by David Brown
    Maybe I'm just an idiot, but I've been trying to implement a game loop all day and it's just not clicking. I've read literally every article I could find on Google, but the problem is that they all use different timing mechanisms, which makes them difficult to apply to my particular situation (some use milliseconds, other use ticks, etc). Basically, I have a Clock object that updates each time the game loop executes. internal class Clock { public static long Timestamp { get { return Stopwatch.GetTimestamp(); } } public static long Frequency { get { return Stopwatch.Frequency; } } private long _startTime; private long _lastTime; private TimeSpan _totalTime; private TimeSpan _elapsedTime; /// <summary> /// The amount of time that has passed since the first step. /// </summary> public TimeSpan TotalTime { get { return _totalTime; } } /// <summary> /// The amount of time that has passed since the last step. /// </summary> public TimeSpan ElapsedTime { get { return _elapsedTime; } } public Clock() { Reset(); } public void Reset() { _startTime = Timestamp; _lastTime = 0; _totalTime = TimeSpan.Zero; _elapsedTime = TimeSpan.Zero; } public void Tick() { long currentTime = Timestamp; if (_lastTime == 0) _lastTime = currentTime; _totalTime = TimestampToTimeSpan(currentTime - _startTime); _elapsedTime = TimestampToTimeSpan(currentTime - _lastTime); _lastTime = currentTime; } public static TimeSpan TimestampToTimeSpan(long timestamp) { return TimeSpan.FromTicks( (timestamp * TimeSpan.TicksPerSecond) / Frequency); } } I based most of that on the XNA GameClock, but it's greatly simplified. Then, I have a Time class which holds various times that the Update and Draw methods need to know. public class Time { public TimeSpan ElapsedVirtualTime { get; internal set; } public TimeSpan ElapsedRealTime { get; internal set; } public TimeSpan TotalVirtualTime { get; internal set; } public TimeSpan TotalRealTime { get; internal set; } internal Time() { } internal Time(TimeSpan elapsedVirtualTime, TimeSpan elapsedRealTime, TimeSpan totalVirutalTime, TimeSpan totalRealTime) { ElapsedVirtualTime = elapsedVirtualTime; ElapsedRealTime = elapsedRealTime; TotalVirtualTime = totalVirutalTime; TotalRealTime = totalRealTime; } } My main class keeps a single instance of Time, which it should constantly update during the game loop. So far, I have this: private static void Loop() { do { Clock.Tick(); Time.TotalRealTime = Clock.TotalTime; Time.ElapsedRealTime = Clock.ElapsedTime; InternalUpdate(Time); InternalDraw(Time); } while (!_exitRequested); } The real time properties of the time class turn out great. Now I'd like to get a proper update/draw loop working so that the state is updated a variable number of times per frame, but at a fixed timestep. At the same time, the Time.TotalVirtualTime and Time.ElapsedVirtualTime should be updated accordingly. In addition, I intend for this to support multiplayer in the future, in case that makes any difference to the design of the game loop. Any tips or examples on how I could go about implementing this (aside from links to articles)?
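    A minimal sketch of the usual fixed-timestep ("accumulator") approach, written against the Clock and Time classes above; TargetElapsedTime is an assumption (a 60 Hz simulation step) and the field names are illustrative:

        private static readonly TimeSpan TargetElapsedTime = TimeSpan.FromTicks(TimeSpan.TicksPerSecond / 60);
        private static TimeSpan _accumulator = TimeSpan.Zero;

        private static void Loop()
        {
            do
            {
                Clock.Tick();

                Time.TotalRealTime = Clock.TotalTime;
                Time.ElapsedRealTime = Clock.ElapsedTime;

                // Bank the real time that has passed, then spend it in fixed slices.
                _accumulator += Clock.ElapsedTime;
                if (_accumulator > TimeSpan.FromSeconds(0.25))      // cap a long stall so we don't spiral
                    _accumulator = TimeSpan.FromSeconds(0.25);

                while (_accumulator >= TargetElapsedTime)
                {
                    Time.ElapsedVirtualTime = TargetElapsedTime;    // every update sees the same step size
                    Time.TotalVirtualTime += TargetElapsedTime;

                    InternalUpdate(Time);

                    _accumulator -= TargetElapsedTime;
                }

                InternalDraw(Time);                                 // draw once per outer iteration
            } while (!_exitRequested);
        }

    The cap on the accumulator keeps a long garbage-collection pause or a debugger break from triggering hundreds of catch-up updates, and the fixed TargetElapsedTime is exactly what makes this loop friendly to multiplayer later, because every machine advances the simulation in identical steps. If rendering stutter becomes visible, the leftover fraction _accumulator / TargetElapsedTime can be passed to InternalDraw as an interpolation factor, but that is an optional refinement.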

    Read the article

< Previous Page | 316 317 318 319 320 321 322 323 324 325 326  | Next Page >