Search Results

Search found 8875 results on 355 pages for 'optimized solutions'.


  • Clever memory usage through the years

    - by Ben Emmett
    A friend and I were recently talking about the really clever tricks people have used to get the most out of memory. I thought I’d share my favorites, and would love to hear yours too!

    Interleaving on drum memory
    Back in the ye olde days before I’d been born (we’re talking the 50s / 60s here), working memory commonly took the form of rotating magnetic drums. These would spin at a constant speed, and a fixed head would read from memory when the correct part of the drum passed it by, a bit like a primitive platter disk. Because each revolution took a few milliseconds, programmers took to manually arranging information non-sequentially on the drum, timing when an instruction or memory address would need to be accessed, then spacing information accordingly around the edge of the drum, thus reducing the access delay. Similar techniques were still used on hard disks and floppy disks into the 90s, but have become irrelevant with modern disk technologies.

    The Hashlife algorithm
    Conway’s Game of Life has attracted numerous implementations over the years, but Bill Gosper’s Hashlife algorithm is particularly impressive. Taking advantage of the repetitive nature of many cellular automata, it uses a quadtree structure to store the hashes of pieces of the overall grid (a minimal sketch of the idea appears at the end of this post). Over time there are fewer and fewer new structures which need to be evaluated, so it starts to run faster with larger grids, drastically outperforming other algorithms both in terms of speed and the size of grid which can be simulated. The actual amount of memory used is huge, but it’s used in a clever way, so it makes the list.

    Elite’s procedural generation
    Ok, so this isn’t exactly a memory optimization (more a storage optimization), but it gets an honorable mention anyway. When writing Elite, David Braben and Ian Bell wanted to build a rich world which gamers could explore, but their 22K memory was something of a limitation (for comparison, that’s about the size of my avatar picture at the top of this page). They procedurally generated all the characteristics of the 2048 planets in their virtual universe, including the names, which were stitched together using a lookup table of parts of names. In fact the original plans were for 2^52 planets, but it was decided that that was probably too many. Oh, and they did all that in assembly language. Other games of the time used similar techniques too, The Sentinel’s landscape generation algorithm being another example.

    Modern Garbage Collectors
    Garbage collection in managed languages like Java and .NET ensures that most of the time, developers stop needing to care about how they use and clean up memory, as the garbage collector handles it automatically. Achieving this without killing performance is a near-miraculous feat of software engineering. Much like when learning chemistry, you find that every time you think you understand how the garbage collector works, it turns out to be a mere simplification; there are yet more complexities and heuristics that help it run efficiently. Of course introducing memory problems is still possible (and there are tools like our memory profiler to help if that happens to you), but they’re much, much rarer.

    A cautionary note
    In the examples above, there were good and well understood reasons for the optimizations, but cunningly optimized code has usually had to trade away readability and maintainability to achieve its gains. Trying to optimize memory usage without being pretty confident that there’s actually a problem is doing it wrong.

    So what have I missed? Tell me about the ingenious (or stupid) tricks you’ve seen people use. Ben
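
    To make the Hashlife trick above a little more concrete, here is a minimal Python sketch of just its structural-sharing core: quadtree nodes are built through a memoized constructor, so identical regions of the grid are stored exactly once no matter how often they recur. It is only an illustration of the hashing idea, not Gosper's full algorithm (the recursive time-stepping is omitted), and the names are mine rather than from any particular implementation.

        from functools import lru_cache

        class Node:
            """A quadtree node; leaves are 0/1 cells, inner nodes hold four children."""
            __slots__ = ("nw", "ne", "sw", "se", "level", "population")

            def __init__(self, nw, ne, sw, se, level, population):
                self.nw, self.ne, self.sw, self.se = nw, ne, sw, se
                self.level = level
                self.population = population

        ALIVE = Node(None, None, None, None, level=0, population=1)
        DEAD = Node(None, None, None, None, level=0, population=0)

        @lru_cache(maxsize=None)
        def join(nw, ne, sw, se):
            """Canonical constructor: identical sub-grids come back as the same object,
            so a huge, repetitive universe collapses into a small set of shared nodes."""
            return Node(nw, ne, sw, se,
                        level=nw.level + 1,
                        population=nw.population + ne.population + sw.population + se.population)

        # Two separate 2x2 blocks with the same contents are literally the same node:
        a = join(ALIVE, DEAD, DEAD, ALIVE)
        b = join(ALIVE, DEAD, DEAD, ALIVE)
        assert a is b

    That canonicalization is what lets Hashlife's memory, though huge, go such a very long way.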

    Read the article

  • Disneyland Inside Out on iPhone and Android

    - by Ryan Cain
    It's hard to believe October was the last time I was over here on my blog.  Ironically, after getting the developer phone from Microsoft I have been knee deep in iPhone programming and, for the past few weeks, Android programming again.  This time I've spent all my non-working hours programming a fun project for my "other" website, Disneyland Inside Out.  Disneyland Inside Out, a vacation planning site for Disneyland in California, has been around in various forms since June 1996.  It has always been a place for me to explore new technologies and learn about some of the new trends on the web.  I recently migrated the site over to DotNetNuke and have been building out custom modules for DNN.  I've also been hacking things together w/ the URLRewrite module in IIS 7.5 to provide strong SEO-optimized URLs.  I can't say all that has really stuck within the DNN model of doing things, but it has worked pretty well.

    As part of my learning process, I spent most of the Fall bringing Disneyland Inside Out to the iPhone.  I will post more details on my development experiences later.  But this project gave me a really great opportunity to get a good feel for Objective-C development.  After 3 months I actually feel somewhat competent in the language and iPhone SDK, instead of just floundering around getting things to work.  The project also gave me a chance to play with some new frameworks on the iPhone and really dig into the Facebook SDK.  I also dug into some of the Gowalla REST APIs as well.  We've been live with the app in iTunes for just about 10 days now, and have been sitting in the top 200 of free travel apps for the past few days.  You can get more info and the direct iTunes download link on our site: Disneyland Inside Out for iPhone.

    Since launching the iPhone version I have gotten back into Android development, porting the Disneyland Inside Out app over to Android.  As I said in my first review of iPhone vs. Android, coming from a managed code background, Android is much easier to get going with.  In just about 3 weeks total I will have about 85 - 90% of the functionality up and running in the Android app; that took probably 1.5 - 2x as long for iPhone.  That isn't a totally fair comparison, as I am much more comfortable w/ Xcode and Objective-C today and can get some of the basic stuff done much faster than I could in the fall.  Though I'd say some of the hardest code to debug is still the null pointer issues on objects that were dealloc'd too early in Objective-C.  This isn't too bad with NSZombies enabled for synchronous code, but when you have a lot of async, which my app does, it can be hairy at times to track exactly what was causing the issue.

    I will post more details later, as I am trying to wrap up a beta of the Android app today.  But in the meantime, if you have an iPhone, iPod Touch or iPad, head on over to the site and take a look at my app.

    Read the article

  • At the Java DEMOgrounds - Oracle Java ME Embedded Enables the “Internet of Things”

    - by Janice J. Heiss
    I caught up with Oracle’s Robert Barnes, Senior Director, Java Product Management, who was demonstrating a new product from Oracle’s Java Platform, Micro Edition (Java ME) product portfolio, Oracle Java ME Embedded 3.2, a complete client Java runtime optimized for microcontrollers and other resource-constrained devices. Oracle’s Java ME Embedded 3.2 is a Java ME runtime based on CLDC 1.1 (JSR-139) and IMP-NG (JSR-228).

    “What we are showing here is the Java ME Embedded 3.2 that we announced last week,” explained Barnes. “It’s the start of the ‘Internet of Things,’ in which you have very, very small devices that are on the edge of the network where the sensors sit. You often have a middle area called a gateway or a concentrator which is fairly middle to higher performance. On the back end you have a very high performance server. What this is showing is Java spanning all the way from the server side right down towards the type of chip that you will get at the sensor side of the network.”

    Barnes explained that he had two different demos running. The first, called the Solar Panel System Demo, measures the brightness of the light.  “This,” said Barnes, “is a light source demo with a Cortex M3 controlling the motor, on the end of which is a sensor which is measuring the brightness of the lamp. This is recording the data of the brightness of the lamp, and as we move the lamp out of the way, we should be able, using the server, to turn the sensor towards the lamp so the brightness reading will go higher. This sends the message back to the server and we can look at the web server sitting on the PC underneath the desk. We can actually see the data being passed back, effectively through a back office type of function within a utility environment.”

    The second demo, the Smart Grid Response Demo, Barnes explained, “has the same board and processor and is still using Java ME Embedded with a different app on top. This is a demand response demo. What we are seeing within the managing environment is that people want to track the pricing signals of the electricity. If it’s particularly expensive at any point in time, they may turn something off. This demo sets the price of the electricity as though this is coming from the back of the server sending pricing signals to my home.”

    The demo had a lamp and a fan and it was tracking the price of electricity. “If I set the price of the electricity to go over 5 cents, then the device will turn off,” explained Barnes. “I can go into my settings and, in this case, change the price to 50 cents, and we can wait a minute and the lamp will go off. When I change the pricing signal so that it is lower, the lamp will come back on. The key point is that the Java software we have running is the same across all the different devices; it’s a way to build applications across multiple devices using the same software. This is important because it fixes peak loading on the network and can stop blackouts.”

    This demo brought me back to a prior decade when Sun Microsystems first promoted Jini technology, a version of Java that would put everything on the network and give us the smart home. Your home would be automated to tell you when you were out of milk, when to change your light bulbs, etc. You would have access to the web and the network throughout your home. It’s interesting to see how technology moves over time, from the smart home to the Internet of Things.
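
    To make the demand-response behaviour Barnes describes a bit more tangible, here is a tiny Python sketch of the rule in the second demo: a device switches off when the electricity price signal rises above its threshold and back on when it drops. This is purely illustrative; it is not the Java ME Embedded demo code, and the class and method names are invented for the example.

        class DemandResponseDevice:
            """Toy model of a device that reacts to electricity pricing signals."""

            def __init__(self, name, cutoff_cents):
                self.name = name
                self.cutoff_cents = cutoff_cents  # e.g. 5 cents in the demo
                self.on = True

            def receive_price_signal(self, price_cents):
                # Switch off when electricity is expensive, back on when it drops.
                if price_cents > self.cutoff_cents and self.on:
                    self.on = False
                    print(f"{self.name}: price {price_cents}c > {self.cutoff_cents}c, turning off")
                elif price_cents <= self.cutoff_cents and not self.on:
                    self.on = True
                    print(f"{self.name}: price {price_cents}c <= {self.cutoff_cents}c, turning back on")

        lamp = DemandResponseDevice("lamp", cutoff_cents=5)
        for price in (3, 4, 50, 50, 4):   # a stream of pricing signals from the server
            lamp.receive_price_signal(price)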

    Read the article

  • PowerShell Script To Find Where SharePoint 2010 Features Are Activated

    - by Brian Jackett
    The script in this post will find where features are activated within your SharePoint 2010 farm.

    Problem
    Over the past few months I’ve gotten literally dozens of emails, blog comments, or personal requests from people asking “how do I find where a SharePoint feature has been activated?”  I wrote a script to find which features are installed on your farm almost 3 years ago.  There is also the Get-SPFeature PowerShell commandlet in SharePoint 2010.  The problem is that these only tell you if a feature is installed, not where it has been activated.  This is especially important to know if you have multiple web applications, site collections, and/or sites.

    Solution
    The default call (no parameters) for Get-SPFeature will return all features in the farm.  Many of the parameter sets accept filters for specific scopes such as web application, site collection, and site.  If those are supplied, then only the enabled / activated features are returned for that filtered scope.  Taking the concept of recursively traversing a SharePoint farm and merging that with calls to Get-SPFeature at all levels of the farm, you can find out what features are activated at each level.  Store the results into a variable and you end up with all features that are activated at every level.

    Below is the script I came up with (slight edits for posting on blog).  With no parameters the function lists all features activated at all scopes.  If you provide an Identity parameter you will find where a specific feature is activated.  Note that the display name for a feature you see in the SharePoint UI rarely matches the “internal” display name.  I would recommend using the feature id instead.  You can download a full copy of the script by clicking on the link below.

    Note: This script is not optimized for medium to large farms.  In my testing it took 1-3 minutes to recurse through my demo environment.  This script is provided as-is with no warranty.  Run this in a smaller dev / test environment first.

        function Get-SPFeatureActivated {
            # see full script for help info, removed for formatting
            [CmdletBinding()]
            param(
                [Parameter(position = 1, valueFromPipeline=$true)]
                [Microsoft.SharePoint.PowerShell.SPFeatureDefinitionPipeBind]
                $Identity
            )#end param
            Begin
            {
                # declare empty array to hold results. Will add custom member
                # for Url to show where activated at on objects returned from Get-SPFeature.
                $results = @()
                $params = @{}
            }
            Process
            {
                if([string]::IsNullOrEmpty($Identity) -eq $false)
                {
                    $params = @{Identity = $Identity
                                ErrorAction = "SilentlyContinue"
                    }
                }

                # check farm features
                $results += (Get-SPFeature -Farm -Limit All @params |
                             % {Add-Member -InputObject $_ -MemberType noteproperty `
                                -Name Url -Value ([string]::Empty) -PassThru} |
                             Select-Object -Property Scope, DisplayName, Id, Url)

                # check web application features
                foreach($webApp in (Get-SPWebApplication))
                {
                    $results += (Get-SPFeature -WebApplication $webApp -Limit All @params |
                                 % {Add-Member -InputObject $_ -MemberType noteproperty `
                                    -Name Url -Value $webApp.Url -PassThru} |
                                 Select-Object -Property Scope, DisplayName, Id, Url)

                    # check site collection features in current web app
                    foreach($site in ($webApp.Sites))
                    {
                        $results += (Get-SPFeature -Site $site -Limit All @params |
                                     % {Add-Member -InputObject $_ -MemberType noteproperty `
                                        -Name Url -Value $site.Url -PassThru} |
                                     Select-Object -Property Scope, DisplayName, Id, Url)

                        # check site features in current site collection
                        foreach($web in ($site.AllWebs))
                        {
                            $results += (Get-SPFeature -Web $web -Limit All @params |
                                         % {Add-Member -InputObject $_ -MemberType noteproperty `
                                            -Name Url -Value $web.Url -PassThru} |
                                         Select-Object -Property Scope, DisplayName, Id, Url)
                            $web.Dispose()
                        }
                        $site.Dispose()
                    }
                }
            }
            End
            {
                $results
            }
        } #end Get-SPFeatureActivated

    Snippet of output from Get-SPFeatureActivated

    Conclusion
    This script has been requested for a long time and I’m glad to finally have a working “clean” version.  If you find any bugs or issues with the script please let me know.  I’ll be posting this to the TechNet Script Center after some internal review.  Enjoy the script and I hope it helps with your admin / developer needs.

        -Frog Out

    Read the article

  • Where should you put constants and why?

    - by Tim Meyer
    In our mostly large applications, we usually have only a few locations for constants:

    - One class for GUI and internal constants (Tab Page titles, Group Box titles, calculation factors, enumerations)
    - One class for database tables and columns (this part is generated code) plus readable names for them (manually assigned)
    - One class for application messages (logging, message boxes etc)

    The constants are usually separated into different structs in those classes. In our C++ applications, the constants are only declared in the .h file and the values are assigned in the .cpp file.

    One of the advantages is that all strings etc. are in one central place and everybody knows where to find them when something must be changed. This is especially something project managers seem to like, as people come and go and this way everybody can change such trivial things without having to dig into the application's structure. Also, you can easily change the title of similar Group Boxes / Tab Pages etc. at once. Another aspect is that you can just print that class and give it to a non-programmer who can check if the captions are intuitive, and if messages to the user are too detailed or too confusing etc.

    However, I see certain disadvantages:

    - Every single class is tightly coupled to the constants classes.
    - Adding/removing/renaming/moving a constant requires recompilation of at least 90% of the application (Note: changing the value doesn't, at least for C++). In one of our C++ projects with 1500 classes, this means around 7 minutes of compilation time (using precompiled headers; without them it's around 50 minutes) plus around 10 minutes of linking against certain static libraries. Building a speed-optimized release through the Visual Studio compiler takes up to 3 hours. I don't know if the huge amount of class relations is the source, but it might as well be.
    - You get driven into temporarily hard-coding strings straight into code because you want to test something very quickly and don't want to wait 15 minutes just for that test (and probably every subsequent one). Everybody knows what happens to the "I will fix that later" thoughts.
    - Reusing a class in another project isn't always that easy (mainly due to other tight couplings, but the constants handling doesn't make it easier).

    Where would you store constants like that? Also, what arguments would you bring in order to convince your project manager that there are better concepts which also comply with the advantages listed above? Feel free to give a C++-specific or independent answer.

    PS: I know this question is kind of subjective, but I honestly don't know of any better place than this site for this kind of question.

    Update on this project
    I have news on the compile time thing: Following Caleb's and gbjbaanb's posts, I split my constants file into several other files when I had time. I also eventually split my project into several libraries, which was now possible much more easily. Compiling this in release mode showed that the auto-generated file which contains the database definitions (table and column names and more, over 8000 symbols) and builds up certain hashes caused the huge compile times in release mode. Deactivating MSVC's optimizer for the library which contains the DB constants now allowed us to reduce the total compile time of our project (several applications) in release mode from up to 8 hours to less than one hour!
    We have yet to find out why MSVC has such a hard time optimizing these files, but for now this change relieves a lot of pressure as we no longer have to rely on nightly builds only. That fact, and other benefits such as less tight coupling, better reusability etc., also showed that spending time splitting up the "constants" wasn't such a bad idea after all ;-)

    Read the article

  • Gaming on Cloud

    - by technomad
    Sometimes I wonder if the pundits of cloud computing are way too consumed with enterprise applications. With all the CAPEX / OPEX and ROI talk taking center stage, an opportunity to affect the masses directly is getting overlooked.

    I am a self proclaimed die hard gamer. I come from the generation of gamers who started their journey in DOS games like Wolfenstein 3D and Allan Border Cricket (the latter is still a favorite pastime). In the late 90s, a revolution called accelerated graphics started with DirectX and OpenGL. Games got more advanced. The likes of Quake III and Unreal Tournament became the crown jewels of the industry. But with all these advancements, there started a race: a race between GFX giants ATI and NVIDIA to beat each other on frame rates and image quality. Revisions to the graphics chipsets became frequent. Games became eye candy, but at the cost of more GPU power / memory. Every eagerly awaited title started demanding more muscle power in graphics and PC hardware. The latest games and all the liquid smooth frame rates became the territory of the ones with deep pockets who could spend lavishly on the latest hardware. Enthusiasts like yours truly, who couldn't afford this route, started exploring over-clocking, optimized hardware cooling, etc. to pursue the passion.

    The ever rising cost of hardware requirements led to rampant piracy of PC games. Gamers were willing to spend on the latest titles, but the ones with a tight budget preferred hardware upgrades over a legal copy of the game. It was also fueled by the emergence of the P2P file sharing networks. Then came the era of the Xbox and PS3s. It solved the major issue of hardware standardization and provided an alternative to ever increasing hardware costs. I have always admired these consoles, but being born and brought up in a keyboard/mouse environment, I still find it difficult to play first person shooters with a gamepad. I leave the topic of PC vs. console gaming for another day, but the bottom line is… PC gamers deserve an equally democratized solution.

    This is where I think cloud computing can come to the rescue. It can minimize hardware requirements. Virtually end software piracy and rationalize costs for gamers. Subscription-based models like pay-as-you-play. In-game rewards, like extended subscription credits for exceptional gamers (oh yes, I have beaten Xaero on nightmare in Quake III, time and again!). Easy deployment for patches and fixes. Better game AI. The list goes on and on…

    Fortunately, companies like OnLive are thinking in the same direction. Their gaming service is all set to launch on 17th June 2010 at the E3 2010 expo in L.A. I wish them all the luck. I hope they will start a trend which will bring the smiles back to the faces of budget gamers with the help of cloud computing.

    Read the article

  • Non use of persisted data

    - by Dave Ballantyne
    Working at a client site, that in itself is good to say, I ran into a set of circumstances that made me ponder, and appreciate, the optimizer engine a bit more. Working on optimizing a stored procedure, I found a piece of code similar to:

        select BillToAddressID,
               Rowguid,
               dbo.udfCleanGuid(rowguid)
        from sales.salesorderheader
        where BillToAddressID = 985

    A lovely scalar UDF was being used; in actuality it was used as part of the WHERE clause, but it is simplified here.  Normally I would use an inline table valued function here, but in this case it wasn't a good option. So this seemed like a pretty good case to use a persisted column to improve performance.

    The supporting index was already defined as

        create index idxBill on sales.salesorderheader(BillToAddressID) include (rowguid)

    and the function code is

        Create Function udfCleanGuid(@GUID uniqueidentifier)
        returns varchar(255)
        with schemabinding
        as
        begin
          Declare @RetStr varchar(255)
          Select @RetStr=CAST(@Guid as varchar(255))
          Select @RetStr=REPLACE(@Retstr,'-','')
          return @RetStr
        end

    Executing the Select statement produced the plan you would expect: nothing surprising, a seek to find the data and a compute scalar to execute the UDF.

    Let's get optimizing and remove the UDF with a persisted column:

        Alter table sales.salesorderheader
        add CleanedGuid as dbo.udfCleanGuid(rowguid) PERSISTED

    A subtle change to the SELECT statement…

        select BillToAddressID, CleanedGuid
        from sales.salesorderheader
        where BillToAddressID = 985

    and our new optimized plan looks like… not a lot different from before!  We are using persisted data on our table, so where is the lookup to fetch it?  It didn't happen; it was recalculated.  Looking at the properties of the relevant Compute Scalar would confirm this, but a more graphic example is shown in the profiler SP:StatementCompleted event.

    Why did the recalculation happen?  Remember the index definition: it has included the original guid to avoid the lookup.  The optimizer knows this column will be passed into the UDF, ran it through its logic and decided that to recalculate is cheaper than the lookup.  That may or may not be the case in actuality; the optimizer has no idea of the real cost of a scalar UDF.  IMO the default cost of a scalar UDF should be seen as a lot higher than it is, since in reality they are invariably higher.

    Knowing this, how do we avoid the function call?  Dropping the guid from the index is not an option; there may be other code reliant on it.  We are left with only one real option: add the persisted column into the index.

        drop index Sales.SalesOrderHeader.idxBill
        go
        create index idxBill on sales.salesorderheader(BillToAddressID) include (rowguid,cleanedguid)

    Now if we repeat the statement

        select BillToAddressID, CleanedGuid
        from sales.salesorderheader
        where BillToAddressID = 985

    we still have a compute scalar operator, but this time it wasn't used to recalculate the persisted data.  This can be confirmed with profiler again.

    The takeaway here is: just because you have persisted data, don't automatically assume that it is being used.

    Read the article

  • Apps UX Launches Blueprints for Mobile User Experiences

    - by mvaughan
    By Misha Vaughan, Oracle Applications User Experience

    At Oracle OpenWorld 2012 this year, the Oracle Applications User Experience (Apps UX) team announced the release of Mobile User Experience Functional Design Patterns. These patterns are designed to work directly with Oracle’s Fusion Middleware, specifically ADF Mobile.  The Oracle Application Development Framework for mobile users enables developers to build one application that can be deployed to multiple mobile device platforms. These same mobile design patterns provide the guidance for Oracle teams to develop Fusion Mobile Expenses. Application developers can use Oracle’s mobile design patterns to design iPhone, Android, or browser-based smartphone applications. We are sharing our mobile design patterns and their baked-in, scientifically proven usability to enable Oracle customers and partners to build mobile applications quickly.

    A different way of thinking and designing
    Lynn Rampoldi-Hnilo, Senior Manager of Mobile User Experiences for Apps UX, says mobile design has to be compelling. “It needs to be optimized for the device, and be visually rich and simple,” she said. “What is really key is that you are designing for a user’s most personal device, the device that they will have with them at all times of the day.”

    Katy Massucco, director of the overall design patterns site, said: “You need to start with a simplified task flow. Everything should be a natural interaction. The action should be relevant and leveraging the device. It should be seamless.” She suggests that developers identify the essential tasks that a user would want to do while mobile. “They need to understand the user and the context,” she added.

    A sample inline action design pattern

    What people are saying
    Reactions to the release of the design patterns have been positive. Debra Lilley, Oracle ACE Director and Fusion User Experience Advocate (FXA), has already demo’ed Fusion Mobile Expenses widely.  Fellow Oracle ACE Director Ronald van Luttikhuizen called it a “cool demo by @debralilley of the new mobile expenses app.” FXA member Floyd Teter says he is already cooking up some plans for using mobile design patterns.  We hope to see those ideas at Collaborate or ODTUG in 2013. For another perspective on why user experience is such an important focus for mobile applications, check out this video by John King, Director, and Monty Latiolais, President, both from ODTUG, the Oracle Development Tools User Group.

    In a separate interview by e-mail, Latiolais wrote: “I enjoy the fact we can take something that, in the past, has been largely subjective, and now apply to it a scientifically proven look and feel. Trusting Oracle’s UX Design Patterns, the presentation really can become one less thing to worry about. As someone with limited ADF experience, that is extremely beneficial.”

    King, who was also interviewed by e-mail, wrote: “User Experience is about making the task at hand as easy and error-free as possible. Oracle's UX labs worked hard to make the User Experience in the new Fusion Applications as good as possible; ADF makes adding tested, consistent user experiences a declarative exercise by leveraging that work. As we move applications onto mobile platforms, user experience is the driving factor. Customers are ‘spoiled’ by a bevy of fantastic applications, and ours cannot disappoint them. Creating applications that enable users to quickly and effectively accomplish whatever task is at hand takes thought and practice. Developers must become ‘power users’ and then create applications that they and their users will love.”

    Read the article

  • When row estimation goes wrong

    - by Dave Ballantyne
    Whilst working at a client site, I hit upon one of those issues where you are not sure if it is something entirely new, a bug, or a gap in your knowledge.

    The client had a large query that needed optimizing.  The query itself looked pretty good: no UDFs, UNION ALL was used rather than UNION, and most of the predicates were sargable other than one or two minor ones.  There were a few extra joins that could be eradicated, and having fixed up the query I then started to dive into the plan.

    I could see all manner of spills in the hash joins and the sort operations; these are caused when SQL Server has not reserved enough memory and has to write to tempdb.  A VERY expensive operation that is generally avoidable.  These, however, are a symptom of a bad row estimation somewhere else, and when that bad estimation is combined with other estimation errors, chaos can ensue.

    Working my way back down the plan, I found the cause, and the more I thought about it the more I became convinced that the optimizer could be making a much more intelligent choice.

    First step is to reproduce, and I was able to simplify the query down to a single join between two tables, Product and ProductStatus; from a business point of view, quite fundamental: find the status of particular products to show if ‘active’, ‘inactive’ or whatever.  The query itself couldn’t be any simpler.

    The estimated plan looked like this: ignore the “!” warning, which is a missing index, but notice that Products has 27,984 rows and the join outputs 14,000.  The actual plan shows how bad that estimation of 14,000 is: every row in Products has a corresponding row in ProductStatus.  This is unsurprising; in fact it is guaranteed, as there is a trusted FK relationship between the two columns.  There is no way that the actual output of the join can be different from the input.

    The optimizer is already partly aware of the foreign key metadata, and that can be seen in the simplification stage.  If we drop the Description column from the query, the join to ProductStatus is optimized out.  It serves no purpose to the query: there is no data required from the table, and the optimizer knows that the FK will guarantee that a matching row will exist, so it has been removed.

    Surely the same should be applied to the row estimations in the initial example, right?  If you think so, please upvote this connect item.

    So what are our options in fixing this error?  Simply changing the join to a left join will cause the optimizer to think that we could allow the rows not to exist, or a subselect would also work.  However, this is a client site; I'm not able to change each and every query where this join takes place, but there is a more global switch that will fix this error: TraceFlag 2301.  This is described as, perhaps loosely, “Enable advanced decision support optimizations”.  We can test this on the original query in isolation by using the “QueryTraceOn” option, and lo and behold our estimated plan now has the ‘correct’ estimation.

    Many thanks go to Paul White (b|t) for his help and for keeping me sane through this.

    Read the article

  • PowerShell Script To Find Where SharePoint 2007 Features Are Activated

    - by Brian T. Jackett
    Recently I posted a script to find where SharePoint 2010 features are activated.  I built the original version to use SharePoint 2010 PowerShell commandlets, as that saved me a number of steps for filtering and gathering features at each level.  If there was ever demand for a 2007 version I could modify the script to handle that by using the object model instead of commandlets.  Just the other week a fellow SharePoint PFE, Jason Gallicchio, had a customer asking about a version for SharePoint 2007.  With a little bit of work I was able to convert the script to work against SharePoint 2007.

    Solution
    Below is the converted script that works against a SharePoint 2007 farm.

    Note: There appears to be a bug with the 2007 version that does not give accurate results against a SharePoint 2010 farm.  I ran the 2007 version against a 2010 farm and got fewer results than my 2010 version of the script.  Discussing with some fellow PFEs, I think the discrepancy may be due to sandboxed features, a new concept in SharePoint 2010.  I have not had enough time to test or confirm.  For the time being, only use the 2007 version script against SharePoint 2007 farms and the 2010 version against SharePoint 2010 farms.

    Note: This script is not optimized for medium to large farms.  In my testing it took 1-3 minutes to recurse through my demo environment.  This script is provided as-is with no warranty.  Run this in a smaller dev / test environment first.

        function Get-SPFeatureActivated {
            # see full script for help info, removed for formatting
            [CmdletBinding()]
            param(
                [Parameter(position = 1, valueFromPipeline=$true)]
                [string]
                $Identity
            )#end param
            Begin
            {
                # load SharePoint assembly to access object model
                [void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")

                # declare empty array to hold results. Will add custom member
                # for Url to show where activated at on objects returned from Get-SPFeature.
                $results = @()
                $params = @{}
            }
            Process
            {
                if([string]::IsNullOrEmpty($Identity) -eq $false)
                {
                    $params = @{Identity = $Identity}
                }

                # get the local farm to look up feature definitions
                $farm = [Microsoft.SharePoint.Administration.SPFarm]::Local

                # check farm features
                $results += ($farm.FeatureDefinitions |
                             Where-Object {$_.Scope -eq "Farm"} |
                             Where-Object {[string]::IsNullOrEmpty($Identity) -or ($_.DisplayName -eq $Identity)} |
                             % {Add-Member -InputObject $_ -MemberType noteproperty -Name Url -Value ([string]::Empty) -PassThru} |
                             Select-Object -Property Scope, DisplayName, Id, Url)

                # check web application features
                $contentWebAppServices = $farm.services |
                    ? {$_.typename -like "Windows SharePoint Services Web Application"}
                foreach($webApp in $contentWebAppServices.WebApplications)
                {
                    $results += ($webApp.Features |
                                 Select-Object -ExpandProperty Definition |
                                 Where-Object {[string]::IsNullOrEmpty($Identity) -or ($_.DisplayName -eq $Identity)} |
                                 % {Add-Member -InputObject $_ -MemberType noteproperty -Name Url -Value $webApp.GetResponseUri(0).AbsoluteUri -PassThru} |
                                 Select-Object -Property Scope, DisplayName, Id, Url)

                    # check site collection features in current web app
                    foreach($site in ($webApp.Sites))
                    {
                        $results += ($site.Features |
                                     Select-Object -ExpandProperty Definition |
                                     Where-Object {[string]::IsNullOrEmpty($Identity) -or ($_.DisplayName -eq $Identity)} |
                                     % {Add-Member -InputObject $_ -MemberType noteproperty -Name Url -Value $site.Url -PassThru} |
                                     Select-Object -Property Scope, DisplayName, Id, Url)

                        # check site features in current site collection
                        foreach($web in ($site.AllWebs))
                        {
                            $results += ($web.Features |
                                         Select-Object -ExpandProperty Definition |
                                         Where-Object {[string]::IsNullOrEmpty($Identity) -or ($_.DisplayName -eq $Identity)} |
                                         % {Add-Member -InputObject $_ -MemberType noteproperty -Name Url -Value $web.Url -PassThru} |
                                         Select-Object -Property Scope, DisplayName, Id, Url)
                            $web.Dispose()
                        }
                        $site.Dispose()
                    }
                }
            }
            End
            {
                $results
            }
        } #end Get-SPFeatureActivated

        Get-SPFeatureActivated

    Conclusion
    I have posted this script to the TechNet Script Repository (click here).  As always I appreciate any feedback on scripts.  If anyone is motivated to run this 2007 version script against a SharePoint 2010 farm to see if they find any differences in the number of features reported versus what they get with the 2010 version script, I’d love to hear from you.

        -Frog Out

    Read the article

  • The Growing Importance of Network Virtualization

    - by user12608550
    The Growing Importance of Network Virtualization We often focus on server virtualization when we discuss cloud computing, but just as often we neglect to consider some of the critical implications of that technology. The ability to create virtual environments (or VEs [1]) means that we can create, destroy, activate and deactivate, and more importantly, MOVE them around within the cloud infrastructure. This elasticity and mobility has profound implications for how network services are defined, managed, and used to provide cloud services. It's not just servers that benefit from virtualization, it's the network as well. Network virtualization is becoming a hot topic, and not just for discussion but for companies like Oracle and others who have recently acquired net virtualization companies [2,3]. But even before this topic became so prominent, Solaris engineers were working on technologies in Solaris 11 to virtualize network services, known as Project Crossbow [4]. And why is network virtualization so important? Because old assumptions about network devices, topology, and management must be re-examined in light of the self-service, elasticity, and resource sharing requirements of cloud computing infrastructures. Static, hierarchical network designs, and inter-system traffic flows, need to be reconsidered and quite likely re-architected to take advantage of new features like virtual NICs and switches, bandwidth control, load balancing, and traffic isolation. For example, traditional multi-tier Web services (Web server, App server, DB server) that share net traffic over Ethernet wires can now be virtualized and hosted on shared-resource systems that communicate within a larger server at system bus speeds, increasing performance and reducing wired network traffic. And virtualized traffic flows can be monitored and adjusted as needed to optimize network performance for dynamically changing cloud workloads. Additionally, as VEs come and go and move around in the cloud, static network configuration methods cannot easily accommodate the routing and addressing flexibility that VE mobility implies; virtualizing the network itself is a requirement. Oracle Solaris 11 [5] includes key network virtualization technologies needed to implement cloud computing infrastructures. It includes features for the creation and management of virtual NICs and switches, and for the allocation and control of the traffic flows among VEs [6]. Additionally it allows for both sharing and dedication of hardware components to network tasks, such as allocating specific CPUs and vNICs to VEs, and even protocol-specific management of traffic. So, have a look at your current network topology and management practices in view of evolving cloud computing technologies. And don't simply duplicate the physical architecture of servers and connections in a virtualized environment…rethink the traffic flows among VEs and how they can be optimized using Oracle Solaris 11 and other Oracle products and services. [1] I use the term "virtual environment" or VE here instead of the more commonly used "virtual machine" or VM, because not all virtualized operating system environments are full OS kernels under the control of a hypervisor…in other words, not all VEs are VMs. In particular, VEs include Oracle Solaris zones, as well as SPARC VMs (previously called LDoms), and x86-based Solaris and Linux VMs running under hypervisors such as OEL, Xen, KVM, or VMware. 
    [2] Oracle follows VMware into network virtualization space with Xsigo purchase; http://www.mercurynews.com/business/ci_21191001/oracle-follows-vmware-into-network-virtualization-space-xsigo
    [3] Oracle Buys Xsigo; http://www.oracle.com/us/corporate/press/1721421
    [4] Oracle Solaris 11 Networking Virtualization Technology; http://www.oracle.com/technetwork/server-storage/solaris11/technologies/networkvirtualization-312278.html
    [5] Oracle Solaris 11; http://www.oracle.com/us/products/servers-storage/solaris/solaris11/overview/index.html
    [6] For example, the Solaris 11 'dladm' command can be used to limit the bandwidth of a virtual NIC, as follows: dladm create-vnic -l net0 -p maxbw=100M vnic0

    Read the article

  • The Minimalist Approach to Content Governance - Request Phase

    - by Kellsey Ruppel
    Originally posted by John Brunswick.

    For each project, regardless of size, it is critical to understand the required ownership, business purpose, prerequisite education / resources needed to execute, and success criteria around it. Without doing this, there is no way to get a handle on the content life-cycle, resulting in a mass of orphaned material. This lowers the quality of end user experiences.

    The good news is that by using a simple process in this request phase, we will not have to revisit this phase unless something drastic changes in the project. For each of the elements mentioned above in this stage, the why, how (technically focused) and impact are outlined with the intent of providing the most value to a small team.

    1. Ownership
    Why - Without ownership information it will not be possible to track and manage any of the content and take advantage of many features of enterprise content management technology. To hedge against this, we need to ensure that both an individual and their group or department within the organization are associated with the content.
    How - Apply metadata that indicates the owner and the department or group that has responsibility for the content.
    Impact - It is possible to keep the content system optimized by running native reports against the metadata and acting on them based on what has been outlined for success criteria. This will maximize end user experience, as content will be faster to locate and more relevant to the user by virtue of working through a smaller collection.

    2. Business Purpose
    Why - This simple step will weed out requests that have tepid justification, as users will most likely not spend the effort to request resources if they do not have a real need.
    How - Use a simple online form to collect and workflow the request to management, native to the content system.
    Impact - Minimizes the amount of user-generated content that is of low value to the organization.

    3. Prerequisite Education / Resources Needed
    Why - If a project cannot be properly staffed, the probability of its success is going to be low. By outlining the resources needed, in both skill set and duration, it will cause the requesting party to think critically about the commitment needed to complete their project and what gap must be closed with regard to education of those resources.
    How - In the simple request form outlined above, resources and a commitment to fulfilling any needed education should be included, with a brief acceptance clause that outlines the requesting party's commitment.
    Impact - This stage acts as a formal commitment to ensuring that resources are able to execute on the vision for the project.

    4. Success Criteria
    Why - Similar to the business purpose, this is a key element in helping to determine if the project and its respective content should continue to exist if it does not meet its intended goal.
    How - Set a review point for the project content that will check the progress against the originally outlined success criteria and then determine the fate of the content. This can even include logic that will tell the content system to remove items that have not been opened by any users in X amount of time (a small illustration of that rule follows at the end of this post).
    Impact - This ensures that projects and their contents do not live past their useful lifespans. Just as with orphaned content, non-relevant information will slow users' access to relevant materials for their jobs.

    Request Phase Summary
    With a simple form that outlines the ownership of a project and its content, business purpose, education and resources, along with success criteria, we can ensure that an enterprise content management system will stay clean and relevant to end users, allowing it to deliver the most value possible. The key here is to make it straightforward to make the request and let the content management technology manage as much as possible through metadata, retention policies and workflow. Doing these basic steps will allow project content to get off to a great start in the enterprise!

    Stay tuned for the next installment, the "Create Phase", covering security access and workflow involved in content creation, enabling a practical layer of governance over our enterprise content repository.
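
    As a small illustration of the retention rule mentioned under Success Criteria, here is a Python sketch of the kind of check a content system could run: flag items that nobody has opened within a given window. The record layout and the 90-day window are assumptions made for the example, not taken from any particular ECM product.

        from datetime import datetime, timedelta

        RETENTION_WINDOW = timedelta(days=90)

        def stale_items(items, now=None):
            """Return items whose last_opened timestamp is older than the window."""
            now = now or datetime.utcnow()
            return [item for item in items
                    if now - item["last_opened"] > RETENTION_WINDOW]

        content = [
            {"id": "proj-plan.docx", "owner": "jbrunswick", "last_opened": datetime(2010, 1, 4)},
            {"id": "kickoff.pptx",   "owner": "jbrunswick", "last_opened": datetime.utcnow()},
        ]
        for item in stale_items(content):
            print(f"{item['id']} (owner {item['owner']}) has not been opened in {RETENTION_WINDOW.days}+ days")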

    Read the article

  • Oracle Executive Strategy Brief: Enterprise-Grade Cloud Applications

    - by B Shashikumar
    Cloud Computing has clearly evolved into one of the dominant secular trends in the industry. Organizations are looking to the cloud to change how they buy and consume IT. And it's no longer about just lower up-front costs. The cloud promises to deliver greater agility and free up resources to focus on innovation versus running and maintaining systems. But are organizations actually realizing these benefits?

    The full promise of cloud is not being realized by customers who entrust their business to multiple niche cloud providers. While almost 9 out of 10 companies expect more IT agility with cloud, only 47% are actually getting it (Source: 2011 State of Cloud Survey by Symantec). These niche cloud customers have also seen the promises of lower costs, efficiency gains, improved security, and compliance go unfulfilled. Having one cloud provider for customer relationship management (CRM) and another for human capital management (HCM), and then trying to glue these proprietary systems together while integrating to a back-office financial system, can add to complexity and long-term costs. Completing a business process or generating an integrated report is cumbersome, and leverages incomplete data.

    Why can’t niche cloud providers deliver on the full promise of cloud? It’s simple: you still need to complete business processes. You still need reporting that enables you to take action using data from multiple systems. You still have to comply with SOX and other industry regulations. These requirements don’t go away just because you deploy in the cloud. Delivering lower up-front costs by enabling customers to buy software as a service (SaaS) is the easy part. To get real value that lasts longer than your quarterly report, it’s important to realize the benefits of cloud without compromising on functionality and while having the right level of control and flexibility. This is the true promise of cloud.

    Oracle’s cloud strategy centers around delivering the benefits of cloud without compromise. We uniquely empower our customers with complete solutions and choice, from the richest functionality to integrated reporting and a great user experience. It’s all available in the cloud. And it works not just with other Oracle cloud applications, but with your existing Oracle and third-party systems as well. This helps protect your current investments and extend their value as you journey to the cloud. We’ve made the necessary investments not only in our applications but also in the underlying technology that makes it all run, from the platform down to the hardware and operating system. We make it all. And we’ve engineered it to work together and be highly optimized for our customers, in the cloud. With Oracle enterprise-grade cloud applications, you get the benefits of cloud plus more power, more choice, and more confidence.

    Read more about how you can realize the true advantage of cloud with Oracle enterprise-grade cloud applications in the Oracle Executive Strategy Brief here. You can also attend an Oracle Cloud Conference event at a city near you. Register here.

    Read the article

  • Is this over-abstraction? (And is there a name for it?)

    - by mwhite
    I work on a large Django application that uses CouchDB as a database and couchdbkit for mapping CouchDB documents to objects in Python, similar to Django's default ORM. It has dozens of model classes and a hundred or two CouchDB views.

    The application allows users to register a "domain", which gives them a unique URL containing the domain name that gives them access to a project whose data has no overlap with the data of other domains. Each document that is part of a domain has its domain property set to that domain's name. As far as relationships between the documents go, all domains are effectively mutually exclusive subsets of the data, except for a few edge cases (some users can be members of more than one domain, and there are some administrative reports that include all domains, etc.).

    The code is full of explicit references to the domain name, and I'm wondering if it would be worth the added complexity to abstract this out. I'd also like to know if there's a name for the sort of bound property approach I'm taking here. Basically, I have something like this in mind:

    Before

    in models.py:

        class User(Document):
            domain = StringProperty()

        class Group(Document):
            domain = StringProperty()
            name = StringProperty()
            user_ids = StringListProperty()

            # method that returns related document set
            def users(self):
                return [User.get(id) for id in self.user_ids]

            # method that queries a couch view optimized for a specific lookup
            @classmethod
            def by_name(cls, domain, name):
                # the view method is provided by couchdbkit and handles
                # wrapping json CouchDB results as Python objects, and
                # can take various parameters modifying behavior
                return cls.view('groups/by_name', key=[domain, name])

            # method that creates a related document
            def get_new_user(self):
                user = User(domain=self.domain)
                user.save()
                self.user_ids.append(user._id)
                return user

    in views.py:

        from models import User, Group

        # there are tons of views like this, (request, domain, ...)
        def create_new_user_in_group(request, domain, group_name):
            group = Group.by_name(domain, group_name)[0]
            user = User(domain=domain)
            user.save()
            group.user_ids.append(user._id)
            group.save()

    in group/by_name/map.js:

        function (doc) {
            if (doc.doc_type == "Group") {
                emit([doc.domain, doc.name], null);
            }
        }

    After

    models.py:

        class DomainDocument(Document):
            domain = StringProperty()

            @classmethod
            def domain_view(cls, *args, **kwargs):
                kwargs['key'] = [cls.domain.default] + kwargs['key']
                return super(DomainDocument, cls).view(*args, **kwargs)

            @classmethod
            def get(cls, validate_domain=True, *args, **kwargs):
                ret = super(DomainDocument, cls).get(*args, **kwargs)
                if validate_domain and ret.domain != cls.domain.default:
                    raise Exception()
                return ret

            def models(self):
                # a mapping of all models in the application. accessing one
                # returns the equivalent of
                #     class BoundUser(User):
                #         domain = StringProperty(default=self.domain)
                pass

        class User(DomainDocument):
            pass

        class Group(DomainDocument):
            name = StringProperty()
            user_ids = StringListProperty()

            def users(self):
                return [self.models.User.get(id) for id in self.user_ids]

            @classmethod
            def by_name(cls, name):
                return cls.domain_view('groups/by_name', key=[name])

            def get_new_user(self):
                user = self.models.User()
                user.save()

    views.py:

        # @domain_view is a decorator that sets request.models to the same sort of
        # object that is returned by DomainDocument.models and removes the domain
        # argument from the URL router
        @domain_view
        def create_new_user_in_group(request, group_name):
            group = request.models.Group.by_name(group_name)
            user = request.models.User()
            user.save()
            group.user_ids.append(user._id)
            group.save()

    (Might be better to leave the abstraction leaky here in order to avoid having to deal with a couchapp-style //! include of a wrapper for emit that prepends doc.domain to the key, or some other similar solution.)

        function (doc) {
            if (doc.doc_type == "Group") {
                emit([doc.name], null);
            }
        }

    Pros and Cons

    So what are the pros and cons of this?

    Pros:
    - DRYer
    - prevents you from creating related documents but forgetting to set the domain
    - prevents you from accidentally writing a django view - couch view execution path that leads to a security breach
    - doesn't prevent you from accessing the underlying self.domain and the normal Document.view() method
    - potentially gets rid of the need for a lot of sanity checks verifying whether two documents whose domains we expect to be equal really are

    Cons:
    - adds some complexity
    - hides what's really happening
    - requires that no two model modules have classes with the same name, or you would need to add sub-attributes to self.models for modules. However, requiring project-wide unique class names for models should actually be fine because they correspond to the doc_type property couchdbkit uses to decide which class to instantiate them as, which should be unique.
    - removes explicit dependency documentation (from group.models import Group)

    Read the article

  • Rethinking Oracle Optimizer Statistics for P6 Part 2

    - by Brian Diehl
    In the previous post (Part 1), I tried to draw some key insights about the relationship between P6 and Oracle Optimizer Statistics.  The first is that average cardinality has the greatest impact on query optimization and that the particular queries generated by P6 are more likely to use this average during calculations. The second is that these are statistics that are unlikely to change greatly over the life of the application. Ultimately, our goal is to get the best query optimization possible.  Or is it?

    Stability
    No application administrator wants to get the call at 9am that their application users cannot get their work done because everything is running slow. This is a possibility with a regularly scheduled nightly collection of statistics. It may not just be slow performance, but a complete loss of service because one or more queries are optimized poorly. Ideally, this should not be the case. The database optimizer should make better decisions with more up-to-date data. Better statistics may give incremental performance benefit. However, this benefit must be balanced against the potential cost of system down time.  It is stability that we ultimately desire and not absolute optimal performance. We do want the benefit from more accurate statistics and better query plans, but not at the risk of an unusable system. As a result, I've developed the following methodology around managing database statistics for the P6 database.

    1. No Automatic Re-Gathering - The daily, weekly, or other interval of statistic gathering is unlikely to be beneficial. Quite the opposite; it is more likely to cause problems.

    2. Smart Re-Gathering - The time to collect statistics is when things have changed significantly. For a new installation of P6, this happens more often because the data is growing from a few rows to thousands and more. But for a mature system, the data is not changing significantly from week to week. There are times to collect statistics:
    - New releases of the application
    - Changes in the underlying hardware or software versions (e.g. a new Oracle RDBMS version)
    - When additional user groups are added; the new groups may use the software in significantly different ways
    - After significant changes in the data; this may be monthly, quarterly or yearly

    3. Always Test - If you take away one thing from this post, it would be to always have a plan to test after changing statistics. In reality, statistics can be collected as often as you desire, provided there are tests in place to verify that performance is the same or better. These might be automated tests or simply a manual script of application functions (a rough sketch of such a harness follows at the end of this post).

    4. Have a Way Out - Never change the statistics without a way to return to the previous set. Think of the statistics as one part of the overall application code that also includes the source code, both application and RDBMS. It would be foolish to change to the new code without a way to get back to the previous version.

    In the final post, I will talk about the actual script I created for P6 PMDB and possible future direction for managing query performance.
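
    As a rough sketch of the "always test" step, here is a minimal Python harness of the kind described: time a fixed set of representative queries and compare them with a saved baseline before accepting a new set of statistics. Nothing in it is Oracle- or P6-specific; run_query, the query names, and the baseline file are placeholders for whatever your own test script actually executes.

        import json
        import time

        REPRESENTATIVE_QUERIES = ["open_projects", "activity_rollup", "resource_usage"]
        TOLERANCE = 1.25  # accept up to 25% slower than the baseline

        def run_query(name):
            # Placeholder: execute the named query against the database under test.
            time.sleep(0.01)

        def time_queries():
            timings = {}
            for name in REPRESENTATIVE_QUERIES:
                start = time.perf_counter()
                run_query(name)
                timings[name] = time.perf_counter() - start
            return timings

        def compare_to_baseline(timings, baseline_path="stats_baseline.json"):
            # baseline_path holds timings captured before the statistics change
            with open(baseline_path) as f:
                baseline = json.load(f)
            regressions = {name: t for name, t in timings.items()
                           if t > baseline.get(name, float("inf")) * TOLERANCE}
            return regressions  # non-empty means: roll the statistics back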

    Read the article

  • Rendering a UIWebView in drawRect with loadHTMLString

    - by Nick Weaver
    Hello there, I am having a problem with UIWebView. I'd like to render my own HTML code in it. When I add a webview as a subview and put in some HTML code, it renders just fine. The problem pops up when it gets down to some optimized drawing of a tableview cell with the drawRect method. Drawing UIView descendants works pretty well this way. It's even possible to load a URL with the loadRequest method, setting the delegate, conforming to the UIWebViewDelegate protocol and redrawing the table cell with setNeedsDisplay when webViewDidFinishLoad is called. That does show, but when it comes to loadHTMLString, nothing shows up, only a white rect. For performance reasons I have to do the drawing in the drawRect method. Any ideas? Thanks in advance, Nick

    Example snippet for the HTML code being loaded by a UIWebView:

        NSString *html = @"<html><head><title>My fancy webview</title></head><body style='background-color:green;'><p>It somehow seems<h2 style='color:black;'>this does not show up in drawRect</h2>!</p></body></html>";
        [webView loadHTMLString:html baseURL:nil];

    Snippet for the drawRect method:

        - (void)drawRect:(CGRect)aRect {
            CGContextRef context = UIGraphicsGetCurrentContext();
            [[webView layer] renderInContext:context];
        }
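
    A sketch of that same delegate-driven redraw applied to loadHTMLString (assuming the cell keeps the webView and html string as ivars and conforms to UIWebViewDelegate; loadContent is just an illustrative name):

        - (void)loadContent {
            webView.delegate = self;                    // the cell acts as the UIWebViewDelegate
            [webView loadHTMLString:html baseURL:nil];  // same HTML string as above
        }

        - (void)webViewDidFinishLoad:(UIWebView *)aWebView {
            [self setNeedsDisplay];                     // re-runs drawRect: once loading reports done
        }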

    Read the article

  • MP3 Decoding on Android

    - by Rob Szumlakowski
    Hi. We're implementing a program for Android phones that plays audio streamed from the internet. Here's approximately what we do:
      1. Download a custom encrypted format.
      2. Decrypt it to get chunks of regular MP3 data.
      3. Decode the MP3 data to raw PCM data in a memory buffer.
      4. Pipe the raw PCM data to an AudioTrack.

    Our target devices so far are Droid and Nexus One. Everything works great on the Nexus One, but the MP3 decode is too slow on the Droid. The audio playback starts to skip if we put the Droid under load. We are not permitted to decode the MP3 data to the SD card, but I know that's not our problem anyway. We didn't write our own MP3 decoder, but used MPADEC (http://sourceforge.net/projects/mpadec/). It's free and was easy to integrate with our program. We compile it with the NDK. After exhaustive analysis with various profiling tools, we're convinced that it's this decoder that is falling behind. Here are the options we're thinking about:
      1. Find another MP3 decoder that we can compile with the Android NDK. This MP3 decoder would have to be either optimized to run on mobile ARM devices or maybe use integer-only math or some other optimizations to increase performance.
      2. Since the built-in Android MediaPlayer service will take URLs, we might be able to implement a tiny HTTP server in our program and serve the MediaPlayer with the decrypted MP3s (see the sketch below). That way we can take advantage of the built-in MP3 decoder.
      3. Get access to the built-in MP3 decoder through the NDK. I don't know if this is possible.

    Does anyone have any suggestions on what we can do to speed up our MP3 decoding? -- Rob Sz
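
    A rough sketch of option 2: a tiny loopback HTTP server that hands the already-decrypted MP3 bytes to MediaPlayer (class and field names are made up, and a real version would also need to handle Range requests and more than one connection). The player side would then just be setDataSource(server.getUrl()) followed by prepareAsync().

        import java.io.*;
        import java.net.*;

        class LocalMp3Server extends Thread {
            private final byte[] mp3Bytes;        // decrypted MP3 data, already in memory
            private final ServerSocket serverSocket;

            LocalMp3Server(byte[] mp3Bytes) throws IOException {
                this.mp3Bytes = mp3Bytes;
                this.serverSocket = new ServerSocket(0, 1, InetAddress.getByName("127.0.0.1"));
            }

            String getUrl() {
                return "http://127.0.0.1:" + serverSocket.getLocalPort() + "/track.mp3";
            }

            @Override
            public void run() {
                try {
                    Socket client = serverSocket.accept();   // MediaPlayer connects here
                    BufferedReader in = new BufferedReader(
                            new InputStreamReader(client.getInputStream()));
                    String line;
                    while ((line = in.readLine()) != null && line.length() > 0) {
                        // skip the request line and headers
                    }
                    OutputStream out = client.getOutputStream();
                    out.write(("HTTP/1.1 200 OK\r\n"
                             + "Content-Type: audio/mpeg\r\n"
                             + "Content-Length: " + mp3Bytes.length + "\r\n\r\n").getBytes());
                    out.write(mp3Bytes);                     // let the platform decoder do the work
                    out.flush();
                    client.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }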

    Read the article

  • How to require fullscreen mode in a jQTouch application?

    - by Christopher Young
    I'm using jQTouch to develop a version of a website optimized for Safari on the iPhone. The jQTouch demo helpfully shows how to show an "install this" message for users not using full screen mode and hide it for those who are. When in fullscreen mode, the body should have the class "fullscreen." So you can hide the "install this" message for people who have already added your app to their home page by adding this CSS rule to your stylesheet: body.fullscreen #home .info { display: none; } What I'd like to do is require users to use the app in fullscreen mode only. When viewed from the regular browser, they should only see a message asking them to install the app. That message should of course be hidden otherwise. This ought to be really, really easy, so I must just be missing something obvious. I thought one way to do this would be to simply test for the class "fullscreen" on the body: if it's not there, use goTo to get to another div, or hide the other divs, or something like that. Strangely, however, this doesn't work. As a test, I've still got the original "info" message, as in the jQTouch demo, and it doesn't show up when I launch in fullscreen mode. So the body must have the fullscreen class. And yet I can't find any other trace of it: when I put this alert to test things after the document has loaded, I get nothing when launching in fullscreen mode: alert($("body").attr("class")); I also thought I might test for fullscreen mode by checking for the value of the fullScreen boolean. But this doesn't seem to work either. What am I missing? What is the best way to do this?
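
    One sketch of the "require fullscreen" idea that sidesteps the body class entirely, assuming the underlying flag is window.navigator.standalone (the #install-message id is hypothetical):

        $(function () {
            if (window.navigator.standalone === true) {
                $('#install-message').hide();                   // already launched from the home screen
            } else {
                $('body > div').not('#install-message').hide(); // plain Safari: only show the install prompt
            }
        });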

    Read the article

  • SIGABRT error when running on iPad

    - by user324881
    Hello, all. I've been banging my head for a few hours because of this problem. I have a universal project that's a mix of iPhone and iPad projects. I put these codebases together into the universal project and, after a lot of "#if __IPHONE_OS_VERSION_MIN_REQUIRED >= 30200" checks, got the project to run in both the iPhone (OS 3.0 to 3.1.3) and iPad simulators. After doing some more finagling with the project settings of the external libraries that I load, I got the app to load on an iPhone (which runs OS 3.1.3). However, when I run the app on my iPad, I get an immediate SIGABRT error. I've tried running it under Debug, under Release, with Active Architecture of both armv6 and armv7. I've checked and double-checked that the app has the right nib files set up (but, again, this app runs fine in the simulator). I've gone through the external libraries I'm using and set them up to have the same base SDK (3.2), same architectures (Optimized (armv6 armv7)), the same targeted device family (iPhone/iPad), and the same iPhone OS deployment target (iPhone OS 3.0). So, to summarize... I have a universal app that works in the simulator for iPhone and iPad, runs on an actual iPhone, but doesn't run on an iPad. It doesn't get far on the iPad -- there's an immediate SIGABRT error that stops execution. Help??

    Read the article

  • Using JavaScript/jQuery to return a list of CSS selectors based on highlighted text

    - by Bungle
    I've been given some project requirements that involve (ideally) returning a list of CSS selectors based on highlighted text. In other words, a user could do something like this on a page: Click a button to indicate that their next text selection should be recorded. Highlight some text on the page. See a generated list of CSS selectors that correspond to all the elements that contain the highlighted text. Firstly, does this seem like a feasible goal? jQuery makes it easy to use a selector to access a particular element, but I'm not sure if the reverse holds true. If an element lacks an id attribute, I also don't know how you'd return an "optimized" selector - i.e., one that identifies an element uniquely. Maybe crawl up the DOM until you find an ID, then stem the selector from there? Secondly, from a high-level perspective, any ideas on how to go about this? Any tips or tricks that could speed development? I very much appreciate any help. Thanks!
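
    A sketch of the "crawl up until an ID" idea in plain DOM code (the selector format and function name are just illustrative); running it over each element that intersects the current selection, e.g. via window.getSelection(), would give the list described above:

        function selectorFor(el) {
            var parts = [];
            while (el && el.nodeType === 1) {
                if (el.id) {                          // an id anchors the selector uniquely
                    parts.unshift('#' + el.id);
                    break;
                }
                // otherwise identify the element by tag plus position among its element siblings
                var index = 1, sib = el;
                while ((sib = sib.previousSibling)) {
                    if (sib.nodeType === 1) { index++; }
                }
                parts.unshift(el.tagName.toLowerCase() + ':nth-child(' + index + ')');
                el = el.parentNode;
            }
            return parts.join(' > ');
        }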

    Read the article

  • ASP.NET MVC GoogleBot Issues

    - by Khalid Abuhakmeh
    I wrote a site using ASP.NET MVC, and although it is not completely SEO optimized at this point I figured it is a good start. What I'm finding is that when I use Google's Webmaster Tools to fetch my site (to see what a GoogleBot sees) it sees this. HTTP/1.1 200 OK Cache-Control: public, max-age=1148 Content-Type: application/xhtml+xml; charset=utf-8 Expires: Mon, 18 Jan 2010 18:47:35 GMT Last-Modified: Mon, 18 Jan 2010 17:07:35 GMT Vary: * Server: Microsoft-IIS/7.0 X-AspNetMvc-Version: 2.0 X-AspNet-Version: 2.0.50727 X-Powered-By: ASP.NET Date: Mon, 18 Jan 2010 18:28:26 GMT Content-Length: 254 <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title> Index </title> </head> <body> </body> </html> Obviously this is not what my site looks like. I have no clue where Google is getting that HTML from. Anybody have an answer and a solution? Anybody experience the same issues? Thanks in advance.

    Read the article

  • How to Choose Fields While Using ExportToExcel in jqGrid ?

    - by João Guilherme
    Hi ! I have this jqGrid <trirand:JQGrid runat="server" ID="JQGrid1" OnRowEditing="JQGrid1_RowEditing" RenderingMode="Optimized" oncellbinding="JQGrid1_CellBinding" Height="350" EditUrl="/Ferramenta/Transacoes/TransacoesT.aspx"> <AppearanceSettings HighlightRowsOnHover="true"/> <Columns> <trirand:JQGridColumn DataField="IdLancamento" PrimaryKey="True" Visible="false" /> <trirand:JQGridColumn DataField="IdCategoria" Visible="false" /> <trirand:JQGridColumn DataField="DataLancamento" Editable="true" DataFormatString="{0:dd/MM/yy}" HeaderText="Data" Width="65" TextAlign="Center" CssClass="font_data" /> <trirand:JQGridColumn DataField="Descricao" Editable="true" HeaderText="Descrição" Width="330" /> <trirand:JQGridColumn DataField="NomeCategoria" Editable="true" EditType="DropDown" EditorControlID="ddlCategorias" HeaderText="Categoria"> <Formatter> <trirand:CustomFormatter FormatFunction="DefineUrl" /> </Formatter> </trirand:JQGridColumn> <trirand:JQGridColumn DataField="Valor" Editable="false" DataFormatString="{0:C}" HeaderText="Valor" Width="80" TextAlign="Center" /> </Columns> <ClientSideEvents RowSelect="editRow" /> <PagerSettings PageSize="20" /> <ToolBarSettings ShowEditButton="false" ShowRefreshButton="True" ShowAddButton="false" ShowDeleteButton="false" ShowSearchButton="false" /> <SortSettings InitialSortColumn=""></SortSettings> </trirand:JQGrid> <asp:LinkButton ID="lbExportar" runat="server" onclick="lbExportar_Click">Exportar todas as transações</asp:LinkButton> When I use the method ExportToExcel JQGrid1.ExportToExcel("export.xls"); it includes the first column IdLancamento that is not visible and also includes another column that is used on the query. Is it possible to choose the columns that are going to be exported ?

    Read the article

  • Fastest way to remove non-numeric characters from a VARCHAR in SQL Server

    - by Dan Herbert
    I'm writing an import utility that is using phone numbers as a unique key within the import. I need to check that the phone number does not already exist in my DB. The problem is that phone numbers in the DB could have things like dashes and parentheses and possibly other things. I wrote a function to remove these things, but the problem is that it is slow, and with thousands of records in my DB and thousands of records to import at once, this process can be unacceptably slow. I've already made the phone number column an index. I tried using the script from this post: http://stackoverflow.com/questions/52315/t-sql-trim-nbsp-and-other-non-alphanumeric-characters But that didn't speed it up any. Is there a faster way to remove non-numeric characters? Something that can perform well when 10,000 to 100,000 records have to be compared. Whatever is done needs to perform fast.

    Update: Given what people responded with, I think I'm going to have to clean the fields before I run the import utility. To answer the question of what I'm writing the import utility in, it is a C# app. I'm comparing BIGINT to BIGINT now, with no need to alter DB data, and I'm still taking a performance hit with a very small set of data (about 2000 records). Could comparing BIGINT to BIGINT be slowing things down? I've optimized the code side of my app as much as I can (removed regexes, removed unnecessary DB calls). Although I can't isolate SQL as the source of the problem anymore, I still feel like it is.
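
    If the cleanup does move into the C# utility, a minimal regex-free sketch of stripping a phone number down to digits before the BIGINT comparison (the method name is just illustrative):

        using System.Text;

        static long? NormalizePhone(string raw)
        {
            if (string.IsNullOrEmpty(raw)) return null;
            var digits = new StringBuilder(raw.Length);
            foreach (char c in raw)
            {
                if (char.IsDigit(c)) digits.Append(c);   // drop dashes, parentheses, spaces, etc.
            }
            long value;
            return long.TryParse(digits.ToString(), out value) ? (long?)value : null;
        }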

    Read the article

  • Custom HTTPHandler causing caching or session issues?

    - by Jan de Jager
    So I have a custom CMS running under .NET 3.5, written entirely in C#. The engine is optimized to render for mobile devices, but also serves normal web browsers. It also supports cookieless sessions. Great... I've chosen not to cache anything (including browser data) in order to control the rendering completely from data. This has been all good until lately. The engine implements a basic login function that simply logs the user state within a session object. The behavior is rather strange. A user will click through the site no problem. Then login. The login will either go through successfully or just redisplay the login screen, suggesting a cached page being returned or redisplayed... If the login is successful, the subsequent page hits will switch arbitrarily between logged-in and logged-out state... also suggesting either that the session state is not accessible or that a cached page is being returned. I have debugged the hell out of the thing... including using Fiddler and the like. When debugging, the behavior disappears. Huh? One of the sites running on the engine is http://www.wiseguy.mobi (sorry, customized for South Africa, so you'll probably not be able to get the password Text Message)!
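
    Two things worth checking for a handler like this, sketched below: whether the custom handler opts in to session state at all, and whether output caching is explicitly disabled per response (this is not the actual CMS code, just the usual shape of the fix):

        using System.Web;
        using System.Web.SessionState;

        public class CmsPageHandler : IHttpHandler, IRequiresSessionState
        {
            public bool IsReusable
            {
                get { return false; }
            }

            public void ProcessRequest(HttpContext context)
            {
                // without IRequiresSessionState, context.Session is null inside a custom handler,
                // so any login state written there silently goes nowhere
                context.Response.Cache.SetCacheability(HttpCacheability.NoCache);  // never serve a cached page
                // ... render the page from data, reading and writing login state via context.Session ...
            }
        }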

    Read the article

  • retrieve data from multiple tables referencing some tables in mysql

    - by I Like PHP
    I have 10 tables, all using the InnoDB engine:
      1. state_table, whose attributes are state_id and state_name
      2. city_table, whose attributes are city_id and city_name
      3. permit_table, whose attribute is p_id

    The city_id, state_id and permit_id above are referenced by the remaining 7 tables; each of those tables has state_id, city_id and permit_id columns referencing the tables above. Now I want to extract all the tables' data with their respective city name and state name (each table may have a different city id and state id). I'm using the MySQL query below (I know it's a very lengthy way). Please tell me how to do it in a more optimized way.

        SELECT p.*, cp.city_name, sp.state_name,
               o.*, co.city_name, so.state_name,
               t.*, ct.city_name, st.state_name,
               th.*, cth.city_name, sth.state_name,
               f.*, cf.city_name, sf.state_name
               .......so on................
        FROM permit_table p
        JOIN table_city  cp  ON cp.city_id   = p.city_id
        JOIN table_state sp  ON sp.state_id  = p.state_id
        JOIN table_one   o   ON o.permit_id  = p.permit_id
        JOIN table_city  co  ON co.city_id   = o.city_id
        JOIN table_state so  ON so.state_id  = o.state_id
        JOIN table_two   t   ON t.permit_id  = p.permit_id
        JOIN table_city  ct  ON ct.city_id   = t.city_id
        JOIN table_state st  ON st.state_id  = t.state_id
        JOIN table_three th  ON th.permit_id = p.permit_id
        JOIN table_city  cth ON cth.city_id  = th.city_id
        JOIN table_state sth ON sth.state_id = th.state_id
        JOIN table_four  f   ON f.permit_id  = p.permit_id
        JOIN table_city  cf  ON cf.city_id   = f.city_id
        JOIN table_state sf  ON sf.state_id  = f.state_id
        ................so on.........................
        WHERE p.permit_id=base64_encode(mysql_real_escape_string($_GET[pid]));

    Thanks for helping me, always.
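
    A hedged sketch of one alternative shape for this: query each of the seven detail tables separately (or UNION ALL the per-table queries if their column lists line up), joining city and state only once per query instead of multiplying everything through one giant join. Table names follow the ones above:

        SELECT o.*, c.city_name, s.state_name
        FROM table_one o
        JOIN table_city  c ON c.city_id  = o.city_id
        JOIN table_state s ON s.state_id = o.state_id
        WHERE o.permit_id = ?;
        -- repeat (or UNION ALL) for table_two ... and the other detail tables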

    Read the article
