Search Results

Search found 93816 results on 3753 pages for 'nexus one'.


  • Scribe Workbench Update Source

    - by Dave Noderer
    I’ve been working with a program called Scribe, similar in function to SSIS, although I’m still an SSIS fanboy!! The main thing my customer uses Scribe for is to load data into Microsoft CRM 4.0. A lot of what I’ve been doing is loading campaigns into CRM which are staged in SQL Server, but I need to mark each one as imported so it will not get imported again. The screen shot below shows how to set up the source update. One important thing is that the source SQL table has to have a primary key defined (this was a staging table and I did not do that at first).

    Read the article

  • Recap: Oracle at the Gartner Business Intelligence Summit

    - by kimberly.billings
    Getting to Vegas was no fun. As anyone who lives in the Bay Area knows, the SF airport shuts down one runway when it rains, causing major havoc. So rain, rain, rain on Sunday meant delay, delay, delay at the airport. Needless to say, my 6:30 pm flight didn't land in Vegas until 3:00 am! But the travel pains were worth it. There was a lot to be learned at the Gartner BI Summit this year, and the uptick in attendance was reflected in strong booth traffic and engaging conversations in the Oracle booth. Oracle customer Dawn Conant, Director of Business Intelligence at Beckman Coulter, generated a lot of interest in her presentation about migrating from Business Objects to Oracle Business Intelligence Enterprise Edition with Oracle Database 11g. Dawn's story was a very relatable one, as many of the attendees had plans for similar projects. One of the most interesting Gartner-led sessions compared the BI/DW megavendors: IBM, Oracle, SAP and Microsoft. According to Gartner analyst Rita Sallam, these megavendors control about two-thirds of the BI market. Sallam attributes this in part to the fact that organizations are expanding their definitions of BI to also include analytics and performance management. In doing so, they require greater integration of BI applications with a broader set of applications and middleware. In a related session, a panel of Gartner analysts compared the Magic Quadrants for BI Platforms, CPM, Data Quality, Data Integration Tools, and Data Warehouses. Oracle is a leader in all of the Magic Quadrants in which it participates and has the most complete stack, including hardware and software, according to Donald Feinberg. Feinberg also commented that in situations with VLDW and solid mixed workloads, Oracle Exadata is making a big difference!

    Read the article

  • Vote of Disconfidence to Entity Framework

    - by Ricardo Peres
    A friend of mine has found the following problem with Entity Framework 4: two simple classes and one association between them (one to many); one condition to filter out soft-deleted entities (WHERE Deleted = 0); 100 records in the database; and a simple query:

        var l = ctx.Person.Include("Address").Where(x => (x.Address.Name == "317 Oak Blvd." && x.Address.Number == 926) || (x.Address.Name == "891 White Milton Drive" && x.Address.Number == 497));

    This will produce the following SQL:

        SELECT
            [Extent1].[Id] AS [Id],
            [Extent1].[FullName] AS [FullName],
            [Extent1].[AddressId] AS [AddressId],
            [Extent202].[Id] AS [Id1],
            [Extent202].[Name] AS [Name],
            [Extent202].[Number] AS [Number]
        FROM [dbo].[Person] AS [Extent1]
        LEFT OUTER JOIN [dbo].[Address] AS [Extent2] ON ([Extent2].[Deleted] = 0) AND ([Extent1].[AddressId] = [Extent2].[Id])
        LEFT OUTER JOIN [dbo].[Address] AS [Extent3] ON ([Extent3].[Deleted] = 0) AND ([Extent1].[AddressId] = [Extent3].[Id])
        LEFT OUTER JOIN [dbo].[Address] AS [Extent4] ON ([Extent4].[Deleted] = 0) AND ([Extent1].[AddressId] = [Extent4].[Id])
        LEFT OUTER JOIN [dbo].[Address] AS [Extent5] ON ([Extent5].[Deleted] = 0) AND ([Extent1].[AddressId] = [Extent5].[Id])
        LEFT OUTER JOIN [dbo].[Address] AS [Extent6] ON ([Extent6].[Deleted] = 0) AND ([Extent1].[AddressId] = [Extent6].[Id])
        ...
        WHERE ((N'317 Oak Blvd.' = [Extent2].[Name]) AND (926 = [Extent3].[Number]))
        ...

    ...and will result in 680 MB of memory being taken! Now, Entity Framework has historically been known for producing less than optimal SQL, but 680 MB for 100 entities?! According to Microsoft, the problem will be addressed in the following version; there is a Connect issue open. There is even a whitepaper, Performance Considerations for Entity Framework 5, which talks about some of the changes and optimizations coming in version 5, but by reading it, I got even more concerned: “Once the cache contains a set number of entries (800), we start a timer that periodically (once-per-minute) sweeps the cache.” Say what?! The next version of Entity Framework will spawn timer threads?! When Code First came along, I thought it was a step in the right direction. Sure, it didn’t include some things that NHibernate had done for quite some time – for example, different strategies for Id generation that do not rely on IDENTITY columns (which make INSERT batching impossible), or support for enumerated types – but I thought these would come with time. Now, enumerated types have, but so did… timer threads! I’m afraid Entity Framework is becoming a monster.

    Read the article

  • IDC Analyst Report Touts Oracle–Accenture Strategic Initiative

    - by kristin.jellison
    Hi there, partners! Oracle Engineered Systems have been getting some love lately, and we want to share it with you! The market intelligence and advisory firm IDC recently released a report lauding Oracle and Accenture’s strategic initiative to bring the performance and flexibility of Oracle Engineered Systems to clients. The report, “Oracle and Accenture Strategic Alliance Places Big Bet on Engineered Systems,” by Steve White, reflects a largely positive analysis of the relationship. White notes that the alliance is “one of the largest in the industry.” Under the relationship, Accenture has incorporated Oracle Engineered Systems—including Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, Oracle SuperCluster, and Oracle Exalytics In-Memory Machine—into its leading datacenter transformation consulting services. Together, the two companies have also created bespoke platforms, such as the Accenture Foundation Platform for Oracle, which helps clients accelerate deployments on Oracle Fusion Middleware, running Oracle Exalogic Elastic Cloud and Oracle Exadata Database Machine. Oracle Engineered Systems deliver a single, engineered platform—spanning everything from server to storage and networking. This makes it easier and cheaper for Accenture clients around the world to prepare their datacenters for managing, processing and analyzing the massive amounts of data they (rightly) anticipate seeing in the next decade. The new solutions can help reduce the effort and cost to migrate any vendor database to an Oracle Engineered Systems platform, which can lower the cost of ownership by up to 50 percent. For its part, Accenture has built a team of 300 consultants to implement and increase the flexibility and stability of client datacenters. This move further expands one of the fastest-growing full-service Oracle Enterprise solutions. Over 52,000 Accenture consultants are qualified to implement, upgrade and outsource the Oracle product suite. Accenture is a Diamond-level member of Oracle PartnerNetwork (OPN). For Oracle Partners, this update should give you at least two things to walk away with. First, this initiative is showing signs of success. As Marty Cole, group chief executive for Accenture’s Technology growth platform, put it, “We are seeing an increasing number of clients recognizing the value of consolidating their databases and taking advantage of the cost and performance benefits delivered by these solutions.” The pipeline is there—and not just for Accenture. Use this example to show your clients that investments in Oracle Engineered Systems are on the rise. Second, recognize that Oracle Engineered Systems represent one of the biggest platforms for growth that Oracle has to offer partners. As part of the agreement, Accenture is able to provide:
    - Platform Readiness Assessments
    - Platform Implementation
    - App Rationalization
    - Database Rationalization
    - Managed Services
    These are all enablement opportunities you can offer customers under Oracle’s partner programs, to continue building the value of their investments, and the value of your relationship with Oracle. Take a read through the IDC report. To learn more about the partnership, see this press release. Happy selling! The OPN Communications Team

    Read the article

  • Spooling in SQL execution plans

    - by Rob Farley
    Sewing has never been my thing. I barely even know the terminology, and when discussing this with American friends, I even found out that half the words that Americans use are different to the words that English and Australian people use. That said – let’s talk about spools! In particular, the Spool operators that you find in some SQL execution plans. This post is for T-SQL Tuesday, hosted this month by me!

    I’ve chosen to write about spools because they seem to get a bad rap (even in my song I used the line “There’s spooling from a CTE, they’ve got recursion needlessly”). I figured it was worth covering some of what spools are about, and hopefully explaining why they are remarkably necessary, and generally very useful.

    If you have a look at the Books Online page about Plan Operators, at http://msdn.microsoft.com/en-us/library/ms191158.aspx, and do a search for the word ‘spool’, you’ll notice it says there are 46 matches. 46! Yeah, that’s what I thought too... Spooling is mentioned in several operators: Eager Spool, Lazy Spool, Index Spool (sometimes called a Nonclustered Index Spool), Row Count Spool, Spool, Table Spool, and Window Spool (oh, and Cache, which is a special kind of spool for a single row, but as it isn’t used in SQL 2012, I won’t describe it any further here). Spool, Table Spool, Index Spool, Window Spool and Row Count Spool are all physical operators, whereas Eager Spool and Lazy Spool are logical operators, describing the way that the other spools work. For example, you might see a Table Spool which is either Eager or Lazy. A Window Spool can actually act as both, as I’ll mention in a moment.

    In sewing, cotton is put onto a spool to make it more useful. You might buy it in bulk on a cone, but if you’re going to be using a sewing machine, then you quite probably want to have it on a spool or bobbin, which allows it to be used in a more effective way. This is the picture that I want you to think about in relation to your data. I’m sure you use spools every time you use your sewing machine. I know I do. I can’t think of a time when I’ve got out my sewing machine to do some sewing and haven’t used a spool. However, I often run SQL queries that don’t use spools. You see, the data that is consumed by my query is typically in a useful state without a spool. It’s like I can just sew with my cotton despite it not being on a spool! Many of my favourite features in T-SQL do like to use spools though. This looks like a very similar query to before, but includes an OVER clause to return a column telling me the number of rows in my data set. I’ll describe what’s going on in a few paragraphs’ time.

    So what does a Spool operator actually do? The spool operator consumes a set of data, and stores it in a temporary structure, in the tempdb database. This structure is typically either a Table (ie, a heap) or an Index (ie, a b-tree). If no data is actually needed from it, then it could also be a Row Count spool, which only stores the number of rows that the spool operator consumes. A Window Spool is another option if the data being consumed is tightly linked to windows of data, such as when the ROWS/RANGE clause of the OVER clause is being used. You could maybe think about the type of spool being like whether the cotton is going onto a small bobbin to fit in the base of the sewing machine, or whether it’s a larger spool for the top.

    A Table or Index Spool is either Eager or Lazy in nature. Eager and Lazy are logical operators, which talk more about the behaviour rather than the physical operation. If I’m sewing, I can either be all enthusiastic and get all my cotton onto the spool before I start, or I can do it as I need it. “Lazy” might not be the best word to describe a person – in the SQL world it describes the idea of either fetching all the rows to build up the whole spool when the operator is called (Eager), or populating the spool only as it’s needed (Lazy). Window Spools are both physical and logical. They’re eager on a per-window basis, but lazy between windows.

    And when is it needed? The way I see it, spools are needed for two reasons: 1 – when data is going to be needed AGAIN, and 2 – when data needs to be kept away from the original source. If you’re someone that writes long stored procedures, you are probably quite aware of the second scenario. I see plenty of stored procedures being written this way – where the query writer populates a temporary table, so that they can make updates to it without risking the original table. SQL does this too. Imagine I’m updating my contact list, and some of my changes move data to later in the book. If I’m not careful, I might update the same row a second time (or even enter an infinite loop, updating it over and over). A spool can make sure that I don’t, by using a copy of the data. This problem is known as the Halloween Effect (not because it’s spooky, but because it was discovered in late October one year). As I’m sure you can imagine, the kind of spool you’d need to protect against the Halloween Effect would be eager, because if you’re only handling one row at a time, then you’re not providing the protection... An eager spool will block the flow of data, waiting until it has fetched all the data before serving it up to the operator that called it. In the query below I’m forcing the Query Optimizer to use an index which would be upset if the Name column values got changed, and we see that before any data is fetched, a spool is created to load the data into. This doesn’t stop the index being maintained, but it does mean that the index is protected from the changes that are being done.

    There are plenty of times, though, when you need data repeatedly. Consider the query I put above. A simple join, but then counting the number of rows that came through. The way that this has executed (be it ideal or not) is to ask that a Table Spool be populated. That’s the Table Spool operator on the top row. That spool can produce the same set of rows repeatedly. This is the behaviour that we see in the bottom half of the plan, where a join is being done between the rows that are being sourced from the spool – one being aggregated and one not – producing the columns that we need for the query.

    Table v Index
    When considering whether to use a Table Spool or an Index Spool, the question that the Query Optimizer needs to answer is whether there is sufficient benefit to storing the data in a b-tree. The idea of having data in indexes is great, but of course there is a cost to maintaining them. Here we’re creating a temporary structure for data, and there is a cost associated with populating each row into its correct position according to a b-tree, as opposed to simply adding it to the end of the list of rows in a heap. Using a b-tree could even result in page-splits as the b-tree is populated, so there had better be a reason to use that kind of structure. That all depends on how the data is going to be used in other parts of the plan. If you’ve ever thought that you could use a temporary index for a particular query, well this is it – and the Query Optimizer can do that if it thinks it’s worthwhile. It’s worth noting that just because a Spool is populated using an Index Spool, it can still be fetched using a Table Spool. Whether a Spool used as a source shows as a Table Spool or an Index Spool depends more on whether a Seek predicate is used than on the underlying structure.

    Recursive CTE
    I’ve already shown you an example of spooling when the OVER clause is used. You might see them being used whenever you have data that is needed multiple times, and CTEs are quite common here. With the definition of a set of data described in a CTE, if the query writer is leveraging this by referring to the CTE multiple times, and there’s no simplification to be leveraged, a spool could theoretically be used to avoid reapplying the CTE’s logic. Annoyingly, this doesn’t happen. Consider this query, which really looks like it’s using the same data twice. I’m creating a set of data (which is completely deterministic, by the way), and then joining it back to itself. There seems to be no reason why it shouldn’t use a spool for the set described by the CTE, but it doesn’t. On the other hand, if we don’t pull as many columns back, we might see a very different plan. You see, CTEs, like all sub-queries, are simplified out to figure out the best way of executing the whole query. My example is somewhat contrived, and although there are plenty of cases when it’s nice to give the Query Optimizer hints about how to execute queries, it usually doesn’t do a bad job, even without spooling (and you can always use a temporary table).

    When recursion is used, though, spooling should be expected. Consider what we’re asking for in a recursive CTE. We’re telling the system to construct a set of data using an initial query, and then use that set as a source for another query, piping this back into the same set and back around. It’s very much a spool. The analogy of cotton is long gone here, as the idea of having a continual loop of cotton feeding onto a spool and off again doesn’t quite fit, but that’s what we have here. Data is being fed onto the spool, and getting pulled out a second time when the spool is used as a source. (This query is running on AdventureWorks, which has a ManagerID column in HumanResources.Employee, not AdventureWorks2012.) The Index Spool operator is sucking rows into it – lazily. It has to be lazy, because at the start, there’s only one row to be had. However, as rows get populated onto the spool, the Table Spool operator on the right can return rows when asked, ending up with more rows (potentially) getting back onto the spool, ready for the next round. (The Assert operator is merely checking to see if we’ve reached the MAXRECURSION point – it vanishes if you use OPTION (MAXRECURSION 0), which you can try yourself if you like.)

    Spools are useful. Don’t lose sight of that. Every time you use temporary tables or table variables in a stored procedure, you’re essentially doing the same – don’t get upset at the Query Optimizer for doing so, even if you think the spool looks like an expensive part of the query. I hope you’re enjoying this T-SQL Tuesday. Why not head over to my post that is hosting it this month to read about some other plan operators? At some point I’ll write a summary post – once I have, you should find a comment below pointing to it. @rob_farley

    Read the article

  • Designing for mobile (aka designing for everything)

    - by ihaynes
    Last Saturday I went to 'Developer Developer Developer 10' on the Microsoft campus at Reading (UK). This is one of a series of regular events put on by the developer community for the developer community. The guys who organise these events put a huge amount of time and effort into them, and they are well worth getting to if you can. I enjoy these events because there's always something currently relevant, but also because I get an insight into things I don't normally come across or work with. Having said that, it's web related things that always grab my attention, and this year one of my favourite speakers, George Adamson, gave a session on 'Designing for mobile (aka designing for everything)'. This is a subject close to my heart and I've tried to put the argument forward myself on http://www.ew-resource.co.uk/mobile/, but George does a far better job of it than I can. His slideshow from the session is available on http://www.slideshare.net/george.adamson and although you won't get his unique presentation style in the static slideshow, it is well worth watching if you have the time.

    Read the article

  • Veeam giveaway

    - by marc dekeyser
    Hey everybody! As you might have noticed, an extra banner has shown up on this site, from Veeam. Since they have decided to sponsor this blog (thank you very much all! It would not have happened without all of you!) I’ll periodically share some news from them with all of you. They are doing a big giveaway if you register on their site, one of the prizes being a Surface tablet, which you all know is brilliant!   Veeam is now featuring monthly prize drawings with some very exciting prizes. Entering is easy: just one entry is all that’s required for a chance to win every month until August 2013. Among the prizes, there are Microsoft Surface tablets, Apple iPads, and FREE passes to TechEd 2013 and VMworld 2013! Find out more about Veeam’s big giveaway.

    Read the article

  • SSIS Reporting Pack update

    - by jamiet
    It’s been a while since I last posted anything in regard to SSIS Reporting Pack, the most recent release being on 27th May 2012, so here is a short update. There is still lots of work to do on SSIS Reporting Pack: lots more features to add, lots of performance work to be done, and a few bug fixes too. I have also been (fairly) hard at work on a framework to be used in conjunction with SSIS 2012 that I refer to as the Restart Framework (currently residing at http://ssisrestartframework.codeplex.com/). There is still much work to be done on the Restart Framework (not least some useful documentation on how to use it), which is why I haven’t mentioned it publicly before now, although I am actively checking in changes. One thing I am considering is amalgamating the two projects into one; this would mean I could build a suite of reports that work both against the SSIS Catalog (what you currently know as “SSIS Reporting Pack”) and also against this Restart Framework thing. No decision has been made as yet though. There have been a number of bug reports and feature suggestions for SSIS Reporting Pack added to the Issue Tracker. Thank you to everyone that has submitted something; rest assured I am not going to ignore them forever. My time is at a premium right now unfortunately due to … well … life … so working on these items isn’t near the top of my priority list. Lastly, I am actively using SSIS Reporting Pack in a production environment right now and I’m happy to report that it is proving to be very useful. One of the reports that I have put a lot of time into is execution executable duration.rdl, and it’s proving very adept at easily identifying bottlenecks in our SSIS 2012 executions: the report allows you to browse through the hierarchy of executables in each execution, and each bar represents the duration of that executable in relation to all the other executables; longer bars are a good indication of where problems might lie. The colour of the bar indicates whether it was successful or not (green=success). Hovering over a bar brings up a tooltip showing more information about that executable. Clicking on a bar allows you to compare this particular instance of the executable against other executions. Please do let me know if you are using SSIS Reporting Pack. I would like to hear any anecdotes you might have, good or bad. @Jamiet

    Read the article

  • Getting Internet Explorer to Open Different Sets of Tabs Based on the Day of the Week

    - by Akemi Iwaya
    If you have to use Internet Explorer for work and need to open a different set of work-specific tabs every day, is there a quick and easy way to do it instead of opening each one individually? Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites. The Question SuperUser reader bobSmith1432 is looking for a quick and easy way to open different daily sets of tabs in Internet Explorer for his work: When I open Internet Explorer on different days of the week, I want different tabs to be opened automatically. I have to run different reports for work each day of the week and it takes a lot of time to open the 5-10 tabs I use to run the reports. It would be a lot faster if, when I open Internet Explorer, the tabs I needed would automatically load and be ready. Is there a way to open 5-10 different tabs in Internet Explorer depending on the day of the week? Example: Monday – 6 Accounting Pages Tuesday – 7 Billing Pages Wednesday – 5 HR Pages Thursday – 10 Schedule Pages Friday – 8 Work Summary/Order Pages Is there an easier way for Bob to get all those tabs to load and be ready to go each day instead of opening them individually every time? The Answer SuperUser contributor Julian Knight has a simple, non-script solution for us: Rather than trying the brute force method, how about a work around? Open up each set of tabs either in different windows, or one set at a time, and save all tabs to bookmark folders. Put the folders on the bookmark toolbar for ease of access. Each day, right-click on the appropriate folder and click on ‘Open in tab group’ to open all the tabs. You could put all the day folders into a top-level folder to save space if you want, but at the expense of an extra click to get to them. If you really must go further, you need to write a program or script to drive Internet Explorer. The easiest way is probably writing a PowerShell script. Special Note: There are various scripts shared on the discussion page as well, so the solution shown above is just one possibility out of many. If you love the idea of using scripts for a function like this, then make sure to browse on over to the discussion page to see the various ones SuperUser members have shared! Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.

    Read the article

  • WCF or ASMX WebService

    - by karthi
    I have been asked to create a web service that communicates with Auth.NET CIM and the Shipsurance API. This web service will be used by multiple applications (one a desktop and another a web application). I am confused whether to go for WCF or an ASMX web service. Auth.NET CIM and Shipsurance have ASMX web services which I would be calling in my newly created web service, so is WCF the right way to go, or can I stay with ASMX? Can someone please guide me? Let me know if this question is inappropriate here and needs to be moved to stackoverflow or somewhere else.

    Read the article

  • Agent versus Agentless management

    - by Owen Allen
    I got a couple of questions about Agentless asset management: "What does agentless management do for an asset?" Agentless management is one of the two ways that you can manage an operating system. Rather than installing an Agent Controller on the OS, agentless management uses SSH to regularly check the system and gather monitoring data. Many of the actions that would be available on an agent-managed system are available on an agentless system, but actions such as running reports or updating an Oracle Solaris 10 or Linux OS are not available. A table showing the capabilities of agentless management is here. "What permissions does agentless management require?" Agentless management still requires root credentials. If you can't log into the system as root, you can provide one set of credentials for the login, and then a set of root credentials to switch to.

    Read the article

  • So you want a French Site?

    - by juanlarios
    I thought I would write a quick write-up of how to create a French site in SharePoint 2007. I'm not talking about a Variation, but just a plain French site from the ground up. There were some gotchas that I felt were worth blogging about.
    First: go to the Microsoft TechNet article and follow the install instructions. Make sure that when you get to the download page you select "French" in the drop down, and that you download and install the right language pack. I noticed that if I did not click the "Change" button, even though I had selected the French language pack, it reverted back to the English language pack.
    Second: You will notice a couple of things. When you go to Central Administration you will see the following: now you can pick between a French site or an English one. You will get this if you install other language packs, and they will all be listed in the drop down. You will notice that you now have French headings and French listings of sites. You see "Publishing" as a heading because I have a custom site definition that I deployed as a French site.
    Third: As you start navigating around and trying to create document libraries or sites, you will start getting errors. Errors like the following: "Cannot make a cache safe URL for "SelectorControls.js", file not found. Please verify that the file exists under the layouts directory. Troubleshoot issues with Windows SharePoint Services." Once you resolve the issue with this js file, you will find that there are other js files that are missing. The only problem is that if you are not fluent in French, or whatever language you are trying to deploy, well, you'll have a tough time understanding the error messages, as they will all be in the new language you are trying to deploy. So let's just talk about what happened when you installed the language pack. In the 12 hive, under 12/Template, you will now see a 1033 folder and a 1036 folder. The 1036 folder was created and added as part of the language pack. What the above error is saying is that now that it's looking at the 1036 folder, well, it's missing some files. The nice thing is that these files are included in the 1033 folder (which is the English language pack). Simply copy and paste the controls from the one folder to the other. There will be more than one conflict, so you will have to move several controls over. I can't remember how many, but simply add them as error messages come up. I had to add some navigation controls and some content selectors.
    That's all that you need to install the French language pack and create site collections that are entirely in another language. Do not mistake this for Variations, where you can have multiple language sites. For those of you doing a little bit extra with this, let me share what I was doing extra and what I needed to get it working for me. I had a custom site definition which was obviously not showing up in my selection of French sites. I was under the impression that all sites in English would show up in French and that the sites were simply routed to a new resource file for French content. And that is the case, but there is a little extra that needs to be done if you have a custom site definition deployed.
    First: Under hive 12/Template/1033/XML there is a listing of the site definition files that are deployed to the English side of things. If you navigate to 12/Template/1036/XML and open one of the site definitions, you will see that they are similar and reference the existing site definitions installed on the server, except that they have some French added to descriptions and names. Simply copy the xml file of your custom template to the 1036 folder to have it show up as a selection when you select French as the dropdown entry when creating a site collection. You can go ahead and change the description and name to suit the language it's under.
    Second: As part of my site definition, I packaged up several list templates, saved as STP files. When you navigate to the list template listing, well, the templates are for English sites, not French, so I cannot create document libraries based on them. What now? Well, here comes KWizCom to the rescue! They seem to have put out an "STP language converter" where you can take a site template or list template and convert it to any target language you are after. It's a free download; use it and you're good to go. One thing I will mention is that when I converted the English documents, I went ahead and converted them to French-Canadian. And it didn't work! I finally figured out that the French version it was expecting in the French site was "French-France". I don't know why that is; it's just what needs to be done to get it working. When I did that, I was able to use the list templates that I created in the English site in the French site.
    Hope it helps. Good luck!

    Read the article

  • Should I be looking for an alternative to Zen Cart as my business grows?

    - by MarkS
    I created a business website for a family business which is growing. It's my family, and I'm a software developer, but I don't want to reinvent the wheel or be a shopping cart programmer. For this business, I need the web store to "just work", but... it gets complicated... There are two parts of this business website. One of them is driven by Wordpress and I use the awesome Thesis theme. This is modern, flexible, and saves me a lot of time from doing custom coding and styling. I couldn't be more pleased with this arrangement. The other part of the site is a Zen Cart store. Its administration and its flexibility are frustrating, archaic Web 1.0. For the past few years, I keep hearing that the developers are working on a 2.0 version of Zen Cart, but they haven't communicated anything significant in the past few years other than to say, "When it's ready, we'll let you know." To get what I'm looking for in a cart, I would need to install 6-10 additional mods, and would need to do a lot of custom coding. I'm now willing to pay for a top-notch e-commerce solution for a small business that we can grow into a larger business over time. Requirements:
    - Extremely flexible shipping that lets us set up rules per product/category, tables of rates, calculated rates, max package weights, etc. (flexibility like that available with the CEON Advance Shipping Module for Zen Cart)
    - Coupons and gift certificates
    - Manual order entry for phone orders
    - Multi-channel support (we also sell on Amazon, eBay, use Google Base, and we want to maintain one set of inventory and keep it current)
    - Decent SEO features
    - Reviews and star-ratings on products
    - Easy social networking features for sharing, following, liking, etc.
    - Easy integration with AdWords and analytics tracking
    - Modern and very usable product and store administration (like I was saying, I'm spoiled by Wordpress and Thesis)
    At the end of the day, I don't care if it's a hosted solution or if I have to host it myself. I just want something that is going to stay up-to-date, regularly be maintained and improved, and if I have to update it, something like the one-click update present in Wordpress is something it has to have. Professional webmasters, if you had to run a store/website, but you had to spend your time focusing on your sales and marketing efforts rather than diffing php files and copying and tweaking them to change even the slightest details of your site, what would you choose?

    Read the article

  • How do programmers deal with Project Lead/Managers?

    - by Simon
    Project Managers/Technical Leads sometimes tend to be over-enthusiastic when it comes to software. But during code reviews, instead of the functionality of the code, the only complaint one sometimes hears is about formatting/spacing and similarly trivial things, when there are far better things to discuss. (Among other things, I have noticed that sometimes during these so-called "reviews", suggestions are made that an implementation needs a rewrite just because it doesn't use the most fashionable technology/buzzword.) How do fellow programmers deal with such scenarios? Or is this just a one-off? (Or is the fault entirely mine?) If you have had a similar experience, what did you do to overcome it? Feel free to share.

    Read the article

  • How to install google chrome from the official repos?

    - by Suhaib
    I followed this question: How to install Chrome browser properly via command line? However, it is not the same as the one mentioned on Google's website: http://www.google.com/linuxrepositories/ I want to do this one instead. The process is: On an APT-based system (Debian, Ubuntu, etc.), download the key and then use apt to install it.

        wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | sudo apt-key add -
        sudo apt-get update

    What would be the next step? I tried sudo apt-get install google-chrome and I ended up with this error:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        E: Unable to locate package google-chrome

    Read the article

  • OAM11gR2: Enabling SSL in the Data Store

    - by Ekta Malik
    Enabling SSL in the Data Store of OAM11gR2 comprises the steps below:
    1. Import the certificate/s required for establishing the trust with the Store (backend) into the keystore (cacerts) on the machine hosting OAM's WebLogic Admin server
    2. Restart the WebLogic Admin server
    3. Specify the <Hostname>:<SSL port> in the "Location" field of the Data Store and select the "Enable SSL" checkbox
    Pre-requisites: the certificate/s to be imported are available for import, and the Data Store has already been created using the OAM admin console with a successful connection to the store on the non-SSL port (though one can always create a Data Store with SSL settings on the first go).
    Steps for importing the certificate/s: one can use the keytool utility that comes bundled with the JDK to import the certificate. The step for importing the certificate is the same for self-signed and third party certificates (like VeriSign):

        $JAVA_HOME/bin/keytool -import -v -noprompt -trustcacerts -alias <aliasname> -file <Path to the certificate file> -keystore $JAVA_HOME/jre/lib/security/cacerts

    Here $JAVA_HOME refers to the path of the JDK install directory. Note: in case multiple certificates are required for establishing the trust, import all of those certificates using the same keytool command mentioned above. One can verify the import of the certificate/s by using the below mentioned command:

        $JAVA_HOME/bin/keytool -list -alias <aliasname> -v -keystore $JAVA_HOME/jre/lib/security/cacerts

    When the trust gets established for the SSL communication, specifying the SSL-specific settings in the Data Store (via the OAM admin console) won't result in the previously seen error (seen when certificates are yet to be imported), and the "Test Connection" will be successful.
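    If you would rather double-check the import programmatically than with keytool -list, here is a minimal Java sketch; it assumes the default cacerts password "changeit", and the class name and alias argument are purely illustrative:

        import java.io.FileInputStream;
        import java.security.KeyStore;

        public class CheckTrust {
            public static void main(String[] args) throws Exception {
                // java.home points at the JRE whose truststore the keytool import targeted.
                String cacerts = System.getProperty("java.home") + "/lib/security/cacerts";
                char[] password = "changeit".toCharArray(); // JDK default, unless changed on your host

                KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
                try (FileInputStream in = new FileInputStream(cacerts)) {
                    ks.load(in, password);
                }
                // args[0] is the alias you used with keytool -import.
                System.out.println("Certificate present: " + ks.containsAlias(args[0]));
            }
        }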

    Read the article

  • Low coupling processing big quantities of data

    - by vitalik
    Usually I achieve low coupling by creating classes that exchange lists, sets, and maps between them. Now I am developing a batch application and I can't put all the data inside a data structure because there isn't enough memory. I have to read and process one chunk of data and then go on to the next one. So having low coupling is much more difficult, because I have to check somewhere if there is still data to read, etc. What I am using now is: Source - Process - Persist. The classes that process have to ask the Source classes if there are more rows to read. What are the best practices and/or useful patterns in such situations? I hope I am explaining myself; if not, tell me.
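    One way to keep the processing classes ignorant of the chunking is to hide the "is there more data?" question behind a plain Iterator. Here is a minimal Java sketch of that idea; the names (ChunkSource, BatchRunner) are hypothetical, not from any particular framework:

        import java.util.Iterator;
        import java.util.List;
        import java.util.function.Consumer;

        // A source that yields memory-sized chunks of rows, one List at a time.
        interface ChunkSource<T> extends Iterator<List<T>> { }

        // The runner owns the loop, so Process never asks Source about remaining data.
        class BatchRunner<T> {
            private final ChunkSource<T> source;
            private final Consumer<List<T>> process;

            BatchRunner(ChunkSource<T> source, Consumer<List<T>> process) {
                this.source = source;
                this.process = process;
            }

            void run() {
                while (source.hasNext()) {          // the only place that checks for more data
                    process.accept(source.next());  // handle one chunk, then move to the next
                }
            }
        }

    With this shape, Source implements hasNext/next over its file or result set, Process is just a Consumer of lists, and Persist can be composed the same way, so each class still exchanges plain lists as before.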

    Read the article

  • What is Database Continuous Integration?

    - by David Atkinson
    Although not everyone is practicing continuous integration, many have at least heard of the concept. A recent poll on www.simple-talk.com indicates that 40% of respondents are employing the technique. It is widely accepted that the earlier issues are identified in the development process, the lower the cost to the development process. The worst case scenario, of course, is for the bug to be found by the customer following the product release. A number of Agile development best practices have evolved to combat this problem early in the development process, including pair programming, code inspections and unit testing. Continuous integration is one such Agile concept that tackles the problem at the point of committing a change to source control (alternatively, it can be run on a regular schedule). This triggers a sequence of events that compiles the code and performs a variety of tests. Often the continuous integration process is regarded as a build validation test, and if issues were to be identified at this stage, the testers would simply not 'waste their time' touching the build at all. Such a 'broken build' will trigger an alert, and the development team's number one priority should be to resolve the issue. How application code is compiled and tested as part of continuous integration is well understood. However, this isn't so clear for databases. Indeed, before I cover the mechanics of implementation, we need to decide what we mean by database continuous integration. For me, database continuous integration can be implemented as one or more of the following:
    1) Your application code is being compiled and tested. You therefore need a database to be maintained at the corresponding version.
    2) Just as a valid application should compile, so should the database. It should therefore be possible to build a new database from scratch.
    3) Likewise, it should be possible to generate an upgrade script to take your already deployed databases to the latest version.
    I will be covering these in further detail in future blogs. In the meantime, more information can be found in the whitepaper linked off www.red-gate.com/ci. If you have any questions, feel free to contact me directly or post a comment to this blog post.

    Read the article

  • Digital Agenda in the EU means open standards after all

    - by trond-arne.undheim
    European Commission Vice President Neelie Kroes' speech on Openness at the heart of the EU Digital Agenda, given at the Open Forum Europe 2010 Summit in Brussels, refocuses the EU Digital Agenda on open standards. I say the speech scores a 90/100: smooth, smart, a little vicious at the fringes, maybe? Anyway, it shows the strategy might age and implement well. This is Dutch pragmatism at its best. The EU Digital Agenda (I give it an 85/100 score), while laudable, stops short of using the term. The next step for the European Commission is defining the term open standards. If they do that, and do it right, Vice President Kroes will go into history as having made a significant contribution towards global progress in e-government by possibly eradicating lock-in forever. Moreover, she will put Europe's SMEs in a better position to succeed in a global IT market filled with barriers to entry from players not fully understanding, using, or unpacking standards. Kroes' interesting suggestion that she will now explore a "legal proposal" on interoperability that will have an impact on all IT companies operating in the European market is more up for debate. An interoperability directive? One run by DG COMP, or one run by DG INFSO, telecom style? Would something like that work? Would the industry like it? Would it help European governments? Possibly, if done right. The good thing was, Kroes pointed out that she will look for input from the industry. Kroes' track record is one of not being scared of taking on the Titans. She also wants to enact real, positive, lasting change. "I will not go anywhere," she said. All of that is good. And she does understand the importance of open standards. Let's now start discussing the details. Implementing the Digital Agenda is not simple. It requires collaboration across the various Directorates in the European Commission. Mounting a new interoperability directive has also never been attempted before. Getting it right is important. Even possibly finding out it cannot be done right, and choosing a more lightweight approach that is equally effective, would be bold. Go Kroes!

    Read the article

  • The First Annual Crappy Code Games

    - by Testas
    SQLBits announced some super-exciting news! A tie-up with our platinum sponsor, Fusion-io. Together we'll be running a series of events called "The Crappy Code Games" where SQL Server developers will compete to write the worst-performing code and win some very cool prizes including:
    • Gold: A hands-on, high performance flying day for two at Ultimate High, plus Fusion-io flight jackets
    • Silver: A one day racing experience at Palmer Sports where you will drive seven different high performance cars
    • Bronze: A Pure Tech Racing 10 person package at PTR's F1 racing facility, including F1 tees, food and drinks
    …plus iPods, Windows Mobile phones, X-box 360s, t-shirts and much more. There will be two qualifying events, in Manchester on March 17th and London on March 31st, and the third qualifier as well as the grand finale will be held in the evening of Thursday April 7th at SQLBits. And if that isn't cool enough, Fusion-io's Chief Scientist Steve Wozniak (yes, that Steve Wozniak, tech industry legend and co-founder of Apple) will be on hand in Brighton to hand out the prizes! If you'd like to take part you'll need to register, and since places are limited we recommend you do so right away. For more details and to register, go to http://www.crappycodegames.com/
    The Games: In conjunction with SQLBits, dbA-thletes (that's you) will compete head-to-head in one of three separate qualifying events to be held in Manchester, London and Brighton. Four separate SQL rounds make up the evening's Games, and will challenge you to write code that pushes the boundaries of SQL performance. The four events are:
    • The High Jump: Generate the highest I/Os per second
    • The 100m dash: Cumulative highest number of I/Os in 60 seconds
    • The SSIS-athon: Load a one billion row fact table in the shortest time
    • The Marathon: Generate the highest MB per second in 60 seconds

    Read the article

  • Dependency Injection/IoC container practices when writing frameworks

    - by Dave Hillier
    I've used various IoC containers (Castle.Windsor, Autofac, MEF, etc.) for .NET in a number of projects. I have found they tend to encourage a number of bad practices. Are there any established practices for IoC container use, particularly when providing a platform/framework? My aim as a framework writer is to make code as simple and as easy to use as possible. I'd rather write one line of code to construct an object than ten or even just two. For example, here are a couple of code smells that I've noticed and don't have good suggestions for:
    - Large number of parameters (5) for constructors. Creating services tends to be complex; all of the dependencies are injected via the constructor, despite the fact that the components are rarely optional (except for maybe in testing).
    - Lack of private and internal classes. This one may be a specific limitation of using C# and Silverlight, but I'm interested in how it is solved. It's difficult to tell what a framework's interface is if all the classes are public; it allows me access to private parts that I probably shouldn't touch.
    - Coupling the object lifecycle to the IoC container. It is often difficult to manually construct the dependencies required to create objects, and object lifecycle is too often managed by the IoC framework. I've seen projects where most classes are registered as singletons. You get a lack of explicit control and are also forced to manage the internals (this relates to the above point: all classes are public and you have to inject them).
    For example, the .NET framework has many static methods, such as DateTime.UtcNow. Many times I have seen this wrapped and injected as a constructor parameter. Depending on a concrete implementation makes my code hard to test; injecting a dependency makes my code hard to use, particularly if the class has many parameters. How do I provide both a testable interface and one that is easy to use? What are the best practices?
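    One partial remedy for the first smell, sketched here in Java (the same shape carries over to C#): group collaborators that always travel together into one explicit aggregate, so a service declares two dependencies instead of five. All of the names below (Platform, OrderService, OrderRepository) are hypothetical:

        import java.time.Clock;
        import java.util.concurrent.ExecutorService;
        import java.util.logging.Logger;

        // Hypothetical repository dependency, specific to the service below.
        interface OrderRepository { }

        // Aggregate of infrastructure collaborators that almost every service needs.
        // Injecting a Clock also avoids depending directly on a static now() call,
        // the DateTime.UtcNow problem mentioned above.
        final class Platform {
            final Clock clock;
            final Logger logger;
            final ExecutorService executor;

            Platform(Clock clock, Logger logger, ExecutorService executor) {
                this.clock = clock;
                this.logger = logger;
                this.executor = executor;
            }
        }

        // The constructor shrinks from five parameters to two, and tests can
        // still hand-build a Platform without any container involved.
        final class OrderService {
            private final Platform platform;
            private final OrderRepository repository;

            OrderService(Platform platform, OrderRepository repository) {
                this.platform = platform;
                this.repository = repository;
            }
        }

    The trade-off is that such an aggregate can degenerate into a grab-bag service locator if it grows unchecked, so it works best for genuinely cross-cutting pieces like clocks, logging and scheduling.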

    Read the article

  • Modified Strategy Design Pattern

    - by Samuel Walker
    I've started looking into design patterns recently, and one thing I'm coding would suit the Strategy pattern perfectly, except for one small difference: some (but not all) of my algorithms need an extra parameter or two passed to them. So I'll either need to pass them an extra parameter when I invoke their calculate method, or store them as variables inside the ConcreteAlgorithm class and be able to update them before I call the algorithm. Is there a design pattern for this need, or how could I implement this while sticking to the Strategy pattern? I've considered passing the client object to all the algorithms, storing the variables in there, and using that only when the particular algorithm needs it. However, I think this is both unwieldy and defeats the point of the Strategy pattern. Just to be clear, I'm implementing in Java, and so don't have the luxury of optional parameters (which would solve this nicely).
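    A common way out, shown as a hedged Java sketch (all names hypothetical): keep the Strategy interface uniform by passing one context object, from which each concrete algorithm reads only the fields it needs:

        // Hypothetical parameter object: carries every input any algorithm might use.
        final class CalcContext {
            final double base;    // needed by all algorithms
            final Double rate;    // extra parameter; only some algorithms use it
            final Integer steps;  // likewise optional; null when not applicable

            CalcContext(double base, Double rate, Integer steps) {
                this.base = base;
                this.rate = rate;
                this.steps = steps;
            }
        }

        interface Strategy {
            double calculate(CalcContext ctx);
        }

        // Ignores the extra fields entirely.
        class DoubleStrategy implements Strategy {
            public double calculate(CalcContext ctx) { return ctx.base * 2; }
        }

        // Consumes the extra parameter it needs (assumes rate was supplied).
        class RateStrategy implements Strategy {
            public double calculate(CalcContext ctx) { return ctx.base * ctx.rate; }
        }

    The interface stays identical for every ConcreteAlgorithm, so the client just builds one CalcContext and calls calculate on whichever Strategy is configured. The alternative of setting fields on the concrete class before the call still works, but it makes the strategies stateful and less safe to share.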

    Read the article

  • IFMR Conference – Global Procurement & Supply Chain Management for the Oil & Gas Industry

    - by Pam Petropoulos
    Dates: June 9 - 11, 2014
    Location: JW Marriott Houston, TX
    This 2nd Global Procurement and Supply Chain Management Conference for the Oil & Gas Industry will cover key market challenges including:
    - supplier / operator relationships
    - benchmarking strategic procurement and category management
    - capacity overload vs. demand
    - new frontiers / new procurement strategies
    - sustainability in procurement and supply chain
    With a one-track focus, this is a highly intensive, content-driven event that includes case studies, presentations and panel discussions over two full days. Plan to attend the Oracle presentation on day one, and the Oracle panel discussion on day two. Oil & Gas experts will be available in the Oracle booth to answer questions. Click here to learn more and register.

    Read the article

  • Mixed Emotions: Humans React to Natural Language Computer

    - by Applications User Experience
    There was a big event in Silicon Valley on Tuesday, November 15. Watson, the natural language computer developed at IBM Watson Research Center in Yorktown Heights, New York, and its inventor and principal research investigator, David Ferrucci, were guests at the Computer History Museum in Mountain View, California for another round of the television game show Jeopardy. You may have read about or watched on YouTube how Watson beat Ken Jennings and Brad Rutter, two top Jeopardy competitors, last February. This time, Watson swept the floor with two Silicon Valley high-achievers, one a venture capitalist with a background in math, computer engineering, and physics, and the other a technology and finance writer well-versed in all aspects of culture and humanities.

    Watson is the product of the DeepQA research project, which attempts to create an artificially intelligent computing system through advances in natural language processing (NLP), among other technologies. NLP is a computing strategy that seeks to provide answers by processing large amounts of unstructured data contained in multiple large domains of human knowledge. There are several ways to perform NLP, but one way to start is by recognizing key words, then processing contextual cues associated with the keyword concepts so that you get many more "smart" (that is, human-like) deductions, rather than a series of "dumb" matches. Jeopardy questions often require more than key word matching to get the correct answer; typically several pieces of information must be put together, often from vastly different categories, to come up with a satisfactory word string solution that can be rephrased as a question. Smarter than your average search engine, but is it as smart as a human?

    Watson was especially fast at descrambling mixed-up state capital names, and at recalling and pairing movie titles where one started and the other ended in the same word (e.g., Million Dollar Baby Boom, where both titles used the word Baby). David said they had basically removed the variable of how fast Watson hit the buzzer compared to human contestants, but frustration frequently appeared on the faces of the contestants beaten to the punch by Watson. David explained that top Jeopardy winners like Jennings achieved their success with a similar strategy, timing their buzz to the end of the reading of the clue, and "running the board", being first to respond on about 60% of the clues. Similar results for Watson. It made sense that Watson would be good at the technical and scientific stuff, so I figured the venture capitalist was toast. But I thought for sure Watson would lose to the writer in categories such as pop culture, wines and foods, and other humanities. Surprisingly, it held its own. I was amazed it could recognize a word definition of a syllogism in the category of philosophy.

    So what was the audience reaction to all of this? We started out expecting our formidable human contestants to easily run some of their categories; however, they started off on the wrong foot with the state capitals, which Watson could unscramble so efficiently. By the end of the first round, contestants and the audience were feeling a little bit, well, deflated. Watson was winning by about $13,000, and the humans had gone into negative dollars. The IBM host said he was going to "slow Watson down a bit," and the humans came back with respectable scores in Double Jeopardy. This was partially thanks to a very sympathetic audience (and host, also a human) providing "group-think" on many questions, especially baseball's most valuable players, which, by the way, couldn't have been hard because even I knew them. Yes, that's right, the humans cheated. Since Watson could speak but not hear us (it didn't have speech recognition capability), it was probably unaware of this. In Final Jeopardy, the single question had to do with law. I was sure Watson would blow this one, but all contestants were able to answer correctly about a copyright law.

    In a career devoted to making computers more helpful to people, I think I may have seen how a computer can do too much. I'm not sure I'd want to work side-by-side with a Watson doing my job. Certainly listening and empathy are important traits we humans still have over Watson. While there was great enthusiasm in the packed room of computer scientists and their friends for this standing-room-only show, I think it made several of us uneasy (especially the poor human contestants whose egos were soundly bashed in the first round). This computer system, by the way, only took 4 years to program. David Ferrucci mentioned several practical uses for Watson, including medical diagnoses and legal strategies. Are you "the expert" in your job? Imagine NLP computing on an Oracle database. This may be the user interface of the future to enable users to better process big data. How do you think you'd like it?

    Postscript: There were three little boys sitting in front of me in the very first row. They looked, how shall I say it, unimpressed!

    Read the article
