Search Results

Search found 4605 results on 185 pages for 'social marketing'.


  • CFOs: Do You Have a Playbook for Growth?

    - by Oracle Accelerate for Midsize Companies
    by Jim Lein, Oracle Midsize Programs

    In most global markets, CFOs are optimistic about their company's growth opportunities. Deloitte's CFO Signals report, "Time to Accelerate," found that:

    - In the U.K., business optimism is at its highest level in three and a half years.
    - Optimism in North America rose from a strong +42% last quarter (Q2 to Q3 2013) to an even stronger +54%.
    - In the inaugural Southeast Asia survey, 44% of CFOs reported a positive outlook despite worries over the Chinese economy and political uncertainty.

    Sustainable and profitable business growth doesn't usually happen by accident. Companies need a playbook for growth that's owned by the CFO. And today, that playbook must leverage the six enabling technologies--Social, Big Data, Mobile, Cloud, Analytics, and the Internet of Things (or, as Oracle president Mark Hurd explains, "the Internet of the People"). On Monday, June 9, at 2:00 pm Eastern, CFO.com is hosting a webcast, "The CFO Playbook on Growth: How CFOs Can Boost Efficiency and Performance with Automation."

    "Investing in technology begins with a business-metric-driven business case with clear, tangible business results expected," says John Lieblang, Affiliate Partner with Waterstone Management Group. "The progressive CFO has learned how to forge a partnership with the CIO to align everyone in the 'result value chain' to be accountable for the business results, not just for functional technology."

    Click HERE to register.

    Looking for more news and information about Oracle Solutions for Midsize Companies? Read the latest Oracle for Midsize Companies Newsletter, or sign up to receive the latest communications from Oracle's industry leaders and experts.

    Jim Lein: I evangelize Oracle's enterprise solutions for growing midsize companies. I recently celebrated 15 years with Oracle, having joined JD Edwards in 1999. I'm based in Evergreen, Colorado, and love relating stories about creativity and innovation, whether they're about software, live music, or the mountains. The views expressed here are my own, and not necessarily those of Oracle.

    Read the article

  • Customer Loyalty vs. Customer Engagement: Who Cares?

    - by Jeb Dasteel-Oracle
    Have you read the recent Forbes OracleVoice blog titled Customer Loyalty is Dead. Long Live Engagement!? If you haven't, take a look. This article prompted lots of conversation in the social realm. Many who read the article voiced their reactions to the headline, and now I'm jumping in to add my view.

    Customer loyalty is still key. It's the effect, and engagement is the cause. We at least know that to be true for our customers. We are in an age where customers are demanding to be heard. We need them to be actively involved -- or engaged -- as well. Greater levels of customer engagement, properly targeted, positively correlate with satisfaction. Our data has shown us this over and over. Satisfied customers are more loyal, more willing to vocalize their satisfaction through referencing, and more likely to purchase again, all of which in turn drives incremental revenue -- from the customer doing the referencing AND the customer on the receiving end of that reference. Turning this around completely, if we begin to see the level of a customer's engagement start to wane, it is an indicator that their satisfaction, loyalty, and future revenue are likely at risk.

    At Oracle, we've put in place many programs to target, encourage, and then track engagement, allowing us to measure engagement as a determinant of loyalty. These programs include Key Accounts, solution design and architecture programs, Executive Sponsorship, and executive advisory boards. Specific programs allow us to engage specific contacts within specific customer organizations (based on role) and then systematically track their engagement activities over time, alongside tracking customer satisfaction, loyalty, referenceability, and incremental revenue contribution. Continuous measurement of engagement allows us to better understand customer views of what it means to partner with a provider, and to adjust program participation to better meet the needs of the partnership. We can also track across customer segments and design new programs that are even more effective than the ones we have in place today.

    In case you missed any of my previous Forbes articles, I've included links below for easy access.

    - Award-Winning Companies Put Customers First
    - The Power of Peer Networks: 5 Reasons to Get (and Stay) Involved
    - Technology At Work: Traveling In Style
    - Customer Central: 8 Strategies for Putting Customers at the Core of Your Business
    - Technology at Work: Five Companies Doing IT Right

    Read the article

  • Does *every* project benefit from written specifications?

    - by nikie
    I know this is holy war territory, so please read the question to the end before answering. There are many cases where written specifications make a lot of sense. For example, if you're a contractor and you want to get paid, you need written specs. If you're working on a team of 20 people, you need written specs. If you're writing a programming language compiler or interpreter (and it's not Perl), you'll usually write a formal specification. I don't doubt that there are many more cases where written specifications are a really good idea. I just think that there are cases where there's so little benefit in written specs that it doesn't outweigh the costs of writing and maintaining them.

    EDIT: The close votes say that "it is difficult to say what is asked here", so let me clarify: The usefulness of written, detailed specifications is often claimed like a dogma. (If you want examples, look at the comments.) But I don't see the use of them for the kind of development I'm doing. So what is asked here is: How would written specifications help me?

    Background information: I work for a small company that's developing vertical market software. If our product is easier to use and has better performance than the competition, it sells. If it's harder to use, even if it behaves 100% as the specification says, it doesn't sell. So there are no "external forces" for having written specs. The advantage would have to be somewhere in the development process.

    Now, I can see how frozen specifications would make a developer's life easier. But we'll never have frozen specs. If we see in the middle of development that feature X is not intuitive to use the way it's specified, then we can only choose between changing the specification or developing a product that won't sell.

    You'll probably ask by now: How do you know when you're done? Well, we're continually improving our product. The competition does the same. So (hopefully) we're never done. We keep improving the software, and when we reach a point where the benefits of the improvements we've added since the last release outweigh the costs of an update, we create a new release that is then tested, localized, documented, and deployed. This also means that there's rarely any schedule pressure. Nobody has to do overtime to make a deadline. If a feature isn't done by the time we want to release the next version, it simply goes into the version after that.

    The next question might be: How do your developers know what they're supposed to implement? The answer is: They have a lot of domain knowledge. They know the customers' business well enough that a high-level description of the feature (or even just the problem that the customer needs solved) is enough to implement it. If it's not clear, the developer creates a few fake screens to get feedback from marketing/management or customers, but this is nowhere near the level of detail of actual specifications. This might be inefficient for larger teams, but for a small team with low turnover it works quite well. It has the additional benefit that the developer in question often comes up with a better solution than the person writing the specs might have.

    This question is already getting very long, but let me address one last point: testing. Like I said in the beginning, if our software behaves 100% like the spec says, it can still be crap. In fact, if it's so unintuitive that you need a spec to know how to test it, it probably is crap.
    It makes sense to have fixed, written tests for some core functionality and for regression bugs, but again, this is nowhere near a full written spec of when and how the software should behave. The main test is: hand the software to a user who doesn't know it yet and ask them to use the new feature X. If they can figure out how to use it and it works, it works.

    Read the article

  • Using Content Analytics for More Effective Engagement

    - by Kellsey Ruppel
    Using Content Analytics for More Effective Engagement: Turning High-Volume Content into Templates for Success
    By Mitchell Palski, Oracle WebCenter Sales Consultant

    Many organizations use Oracle WebCenter Portal to develop these basic types of portals:

    - Intranet portals used for collaboration, employee self-service, and company communication
    - Extranet portals used by customers and partners for self-service and support
    - Team collaboration portals that allow users to share documents and content, track activity, and engage in discussions

    Portals are intended to provide a personalized, single point of interaction with web-based applications and information. The user experiences that a portal is capable of displaying should be relevant to an individual user or class of users (a group or role). The components of a portal that vary based on a user's identity include:

    - Web content such as images, news articles, and on-screen instruction
    - Social tools such as threaded discussions, polls/surveys, and blogs
    - Document management tools to upload, download, and edit files
    - Web applications that present data visualizations and data entry modules

    These collections of content, tools, and applications make up valuable workspaces. The challenge for a development team is defining which combinations are the most effective for its users. No one wants to create and manage a workspace that goes unused or (even worse) that is used but is ineffective. Oracle WebCenter Portal provides the capabilities not only to rapidly develop variations of portals, but also to identify which portals are the most effective and should be reused throughout an enterprise.

    Capturing Portal Analytics

    Oracle WebCenter Portal provides an analytics service that allows administrators and business users to track and analyze portal usage. These analytics are captured in the form of:

    - Usage tracking metrics
    - Behavior tracking
    - User profile correlation

    The out-of-the-box task reports that come with Oracle WebCenter Portal include: WebCenter Portal Traffic, Page Traffic, Login Metrics, Portlet Traffic, Portlet Response Time, Portlet Instance Traffic, Portlet Instance Response Time, Search Metrics, Document Metrics, Wiki Metrics, Blog Metrics, Discussion Metrics, Portal Traffic, and Portal Response Time.

    By determining the usage and behavior tracking metrics that are associated with specific user profiles (including groups and roles), your administrators will be able to identify the components of your solution that are the most valuable. Your first step as an administrator should be to identify the specific pages and/or components that are used most frequently. Next, determine the user(s) or user group(s) that are accessing those high-use elements of a portal. It is also important to look for patterns in high usage and see whether they correlate to a specific schedule. One of the goals of any development team (especially those following Agile methodologies) should be to develop reusable web components to minimize redundant development. Oracle WebCenter Portal provides the tools to capture the successful workspaces that have already been developed and identified so that they can be reused for similar user demographics.

    Re-using Successful Portals

    When creating a new portal in Oracle WebCenter, developers have the option to base that portal on a template that includes:

    - Pre-seeded data such as pages, tools, user roles, and look-and-feel assets
    - Specific subsets of page layouts, tools, and other resources to standardize what is added to a portal's pages
    - Any custom components that your team creates during development cycles

    Once you have identified a successful workspace and its most valuable components, leverage Oracle WebCenter's ability to turn that custom portal into a portal template. By creating a template from your already successful portal, you are empowering your enterprise by providing a starting point for future initiatives. Your new projects, new teams, and new web pages can benefit from lessons learned and adjustments that have already been made to optimize user experiences, instead of starting from scratch.

    ***For a complete explanation of how to work with portal templates, be sure to read the Fusion Middleware documentation available online.

    Read the article

  • Cloud Fact for Business Managers #3: Where Your Data Is, and Who Has Access to It, Might Surprise You

    - by yaldahhakim
    Written by: David Krauss

    While data security and operational risk conversations usually happen around the desk of a CCO/CSO (chief compliance and/or security officer), or perhaps the CFO, business managers are now selecting cloud providers too, so they need to be able to at least ask some high-level questions on the topic of risk and compliance. The research report cited below found that 76% of adopters were motivated to adopt cloud apps because of quick access to software; yet most of these managers found that, after they made a purchase decision, their access to exciting new capabilities in the cloud could be hindered by performance and scalability constraints put forth by their cloud provider. If you are going to let your business consume its mission-critical business applications as a service, then it's important to understand who is providing those cloud services and what kind of performance you are going to get. Different types of departments, companies, and industries will all have unique requirements, so it's key to take this into consideration as well.

    Nothing puts a CEO in a bad mood like a public data breach, or finding out the company lost money because customers couldn't buy a product or service while the cloud service provider had a problem. With 42% of business managers having seen a data security breach in their department associated directly with the use of cloud applications, this is happening more than you think.

    We've talked about the importance of being able to avoid information silos through a unified cloud approach and platform. This is also important for keeping your data safe and secure, and it's a key conversation to have with your cloud provider. Your customers want to know that their information is protected when they do business with you, just like you want your own company information protected. That is really hard to do when each line of business is running different cloud application services managed by different cloud providers, all with different processes and controls. It only adds to the complexity, and the more complex, the more risky and the greater the chance that something will go wrong.

    What about compliance? Depending on the cloud provider, it can be difficult at best to understand who has access to your data and where your data is actually stored. Add to this multiple cloud providers spanning multiple departments, and it becomes very problematic when trying to comply with certain industry and country data security regulations. With 73% of business managers complaining that having cloud data handled externally by one or more cloud vendors makes it hard for their department to be compliant, this is a big time suck for executives and it puts the organization at risk.

    Is There a Complete, Integrated, Modern Cloud Out There for Business Executives?

    If you are a business manager looking to drive faster innovation for your business and want a cloud application that your CIO would approve of, I would encourage you to take a look at Oracle Cloud. It's everything you want from a SaaS-based application, but without compromising on functionality and other modern capabilities like embedded business intelligence, social relationship management (for your entire business), and advanced mobile. And because Oracle Cloud is built and managed by Oracle, you can be confident that your cloud application services are enterprise-grade. Over 25 million users and 10,000 companies around the globe rely on Oracle Cloud application services every day -- maybe your business should too.
    For more information, visit cloud.oracle.com.

    Additional Resources
    • Try it: cloud.oracle.com
    • Learn more: http://www.oracle.com/us/corporate/features/complete-cloud/index.html
    • Research Report: Cloud for Business Managers: The Good, the Bad, and the Ugly

    Read the article

  • When should I use a Process Model versus a Use Case?

    - by Dave Burke
    This blog entry is a follow-on to https://blogs.oracle.com/oum/entry/oum_is_business_process_and and addresses a question I sometimes get asked, i.e. "when I am gathering requirements on a project, should I use a Process Modeling approach, or should I use a Use Case approach?" Not surprisingly, the short answer is "it depends"!

    Let's take a scenario where you are working on a Sales Force Automation project. We'll call the process that is being implemented "Lead-to-Order". I would typically think of this type of project as being "Process Centric". In other words, the focus will be on orchestrating a series of human and system related tasks that ultimately deliver value to the business in a cost effective way. Put in even simpler terms: implement an automated pre-sales system. For this type of (Process Centric) project, requirements would typically be gathered through a series of Workshops where the focal point will be on creating, or confirming, the Future-State (To-Be) business process. If pre-defined "best-practice" business process models exist, then of course they could and should be used during the Workshops, but even in their absence, the focus of the Workshops will be to define the optimum series of Tasks, their connections, sequence, and dependencies that will ultimately reflect a business process that meets the needs of the business.

    Now let's take another scenario. Assume you are working on a Content Management project that involves automating the creation and management of content for User Manuals, Web Sites, Social Media publications, etc. Would you call this type of project "Process Centric"? Well, you could, but it might also fall into the category of complex configuration, plus some custom extensions to a standard software application (COTS). For this type of project it would certainly be worth considering using a Use Case approach in order to 1) understand the requirements, and 2) capture the functional requirements of the custom extensions.

    At this point you might be asking "why couldn't I use a Process Modeling approach for my Content Management project?" Well, of course you could, but you just need to think about which approach is the most effective. Start by analyzing the types of Tasks that will eventually be automated by the system, for example:

    Task Name                  Best Suited To   Notes
    Manage outbound calls      Process Model    A series of linked human and system tasks for calling and following up with prospects
    Manage content revision    Use Case         Updating the content on a website
    Update User Preferences    Use Case         Updating a user's display preferences
    Assign Lead                Process Model    Reviewing a lead, then assigning it to a sales person
    Convert Lead to Quote      Process Model    Updating the status of a lead, and then converting it to a sales order

    As you can see, it's not an exact science, and either approach is viable for the Tasks listed above. However, where you have a series of interconnected Tasks or Activities that, when combined, deliver value to the business, that would be a good indicator to lead with a Process Modeling approach. On the other hand, when the Tasks or Activities in question are more isolated and/or do not cross traditional departmental boundaries, then a Use Case approach might be worth considering. Now let's take one final scenario: as you captured the To-Be Process flows for the Sales Force Automation project, you discover a "Gap" in terms of what the client requires and what the standard COTS application can provide.
Let’s assume that the only way forward is to develop a Custom Extension. This would now be a perfect opportunity to document the functional requirements (behind the Gap) using a Use Case approach. After all, we will be developing some new software, and one of the most effective ways to begin the Software Development Lifecycle is to follow a Use Case approach. As always, your comments are most welcome.

    Read the article

  • Building Enterprise Smartphone Apps – Part 1: Why Build Smartphone Apps?

    - by Tim Murphy
    This is part 1 in a series of posts based on a talk I gave recently at the Chicago Information Technology Architects Group. Feel free to leave feedback.

    Intro

    Most of us already carry smartphones. We play games on them. We keep up with what is going on with our friends and our favorite teams. We take pictures of our kids at their events. But the question is whether that is all they are good for. Many companies have aspects of their business that lend themselves to being performed by mobile devices. Some of them lean toward larger devices such as tablets, but many can be executed on smartphones. This and the following articles will discuss some of the possible applications of smartphone technology for businesses, the platforms that are available, and the considerations you need to make when building them. I'll take a look at some specific scenarios and wrap up with a couple of capabilities that are just emerging and can be used in the future.

    Why Build Enterprise Smartphone Applications

    So what are some of the ways you can leverage smartphone technology to gain efficiency in your business or a client's business? There are a few major areas where I have seen mobile platforms provide an advantage.

    Your mobile sales force is a key candidate for leveraging smartphone apps. They can visit clients in their retail location and place orders on site. It is a more personal approach which can gain you customer loyalty. A salesperson may also gather information about the way a client does business or who their target market is. This allows you to focus marketing information or build customized support for your customer.

    You may also need to track physical inventory in a store. This is something that has historically been done with laser scanners, but with the camera capabilities in today's phones and tablets it is possible to use more general multi-purpose devices. This can save costs on both hardware and telecommunication contracts.

    Delivery verification is another area that historically has been the domain of specialized devices but can now be accomplished with smartphones. This also reduces costs because the same device is used for communicating with the driver and other operations. Add to that the navigation capability of smartphones and you can see how the return on investment increases.

    Executives are always on the go. They spend most of their time in meetings, and yet they need access to decision-making information at their fingertips. With a smartphone app they can get alerts when major sales are closed or critical accounting processes are completed that may need their attention. They can also answer questions by instantly pulling up BI reports.

    I have often heard operations support people say that they need things like VPN and RDP from their phones. If they can also have notifications of outages or critical support requests, they can react to situations without needing to be tied to their desks.

    These are all valid reasons to need smartphone applications. In the next installment I will discuss platforms and features.

    Read the article

  • Deleting file with SharePoint List web service fails

    - by Robert MacLean
    I am trying to delete a file from SharePoint using the Lists web service, which fails with the following error:

    Error Code: 0x81020030
    Message: Invalid file name
    Detail: The file name you specified could not be used. It may be the name of an existing file or directory, or you may not have permission to access the file.

    The update XML I sent through is:

    ```xml
    <Batch OnError="Continue" PreCalc="TRUE" ListVersion="0" ViewName="{8FE4E2C8-939E-4462-ABA2-D633EED7F76E}">
      <Method ID="1" Cmd="Delete">
        <Field Name="ID">84</Field>
        <Field Name="FileRef">http://win-4h0xp59sn75:40414/Shared%20Documents/del.txt</Field>
      </Method>
    </Batch>
    ```

    The SharePoint server error logs indicate:

    ERROR: Failed to OpenThreadToken, LastError=1008
    The file you are attempting to save or retrieve has been blocked from this Web site by the server administrators.

    Things I have tried: I've tried the changes in #1372971, which have not helped. I have also tried the changes recommended on the Microsoft Social site, which have also not helped. I have confirmed that the txt file extension is not blocked, as indicated here. In addition, I can remove the file via the website; it is just on the web service that this fails. The permissions are correct (or rather not in play), as I am running as a SharePoint administrator, which is the same account that uploaded the file via the Copy web service.
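    For reference, the calling code looks roughly like this (a sketch only; "ListsService" stands in for whatever proxy class your web reference to Lists.asmx generated):

    ```csharp
    using System;
    using System.Xml;

    class DeleteFileFromLibrary
    {
        static void Main()
        {
            // Proxy generated from http://<server>/_vti_bin/Lists.asmx
            var lists = new ListsService.Lists();
            lists.Url = "http://win-4h0xp59sn75:40414/_vti_bin/Lists.asmx";
            lists.UseDefaultCredentials = true;

            // Same Batch XML as above: ID plus FileRef identify the document.
            var doc = new XmlDocument();
            XmlElement batch = doc.CreateElement("Batch");
            batch.SetAttribute("OnError", "Continue");
            batch.InnerXml =
                "<Method ID=\"1\" Cmd=\"Delete\">" +
                "<Field Name=\"ID\">84</Field>" +
                "<Field Name=\"FileRef\">http://win-4h0xp59sn75:40414/Shared%20Documents/del.txt</Field>" +
                "</Method>";

            XmlNode result = lists.UpdateListItems("Shared Documents", batch);
            Console.WriteLine(result.OuterXml);
        }
    }
    ```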

    Read the article

  • VSTS 2010 SGEN : error : Could not load file or assembly (Exception from HRESULT: 0x80131515)

    - by Developer IT
    Hi! I am experiencing a strange issue with VS2010. We use TFS to build our API DLLs, and we used to reference them in our projects using a mapped network drive that was fully trusted. We have been working like that for at least two years and everything worked perfectly. Today, I converted a web app to VS2010, and when I compile it in Release, it gives me:

    SGEN : error : Could not load file or assembly 'file:///L:\Api\Release API_20100521.1\Release\CS.API.Exceptions.dll' or one of its dependencies. Operation is not supported. (Exception from HRESULT: 0x80131515)

    The strange thing is that it works under the Debug profile... I tried adding

    ```xml
    <runtime>
      <loadFromRemoteSources enabled="true" />
    </runtime>
    ```

    to app.config, and still no luck (see http://social.msdn.microsoft.com/Forums/en/msbuild/thread/d12f6301-85bf-4b9e-8e34-a06398a60df0 and http://msdn.microsoft.com/en-us/library/dd409252(VS.100).aspx). I am pretty sure this issue is with Visual Studio or MSBuild, as our code won't run from a network share in prod anyway, because all the referenced DLLs are copied into the bin folder. If anyone has a solution (or just an idea for a search path), please let me know!
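    For what it's worth, the workaround I'm leaning toward is turning serialization assembly generation off for the Release build, since SGEN is what's choking (a sketch of the .csproj change; I haven't confirmed this is the right long-term fix):

    ```xml
    <!-- In the Release PropertyGroup of the .csproj: don't run SGEN at all -->
    <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' ">
      <GenerateSerializationAssemblies>Off</GenerateSerializationAssemblies>
    </PropertyGroup>
    ```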

    Read the article

  • Web application development platform recommendation

    - by TK.Maxi
    Hi all,

    I did a year's worth of Pascal, Visual Basic, and C++ 15 years ago, so suffice it to say that I'm a complete n00b & lamer when it comes to this. I really do hope that this question doesn't get canned, but if it does, please be so kind as to point me in the direction of where it should be posted.

    I have an idea, like so many others, for a web app. I don't necessarily have the capital to outsource the development of the app right now, and I probably wouldn't want to, since non-disclosure agreements can be expensive to enforce, especially in this day and age of intercontinental outsourcing. I need the app to be usable on any mobile device (eventually), primarily on the major mobile platforms at first, on the web (PC/Mac/*ix) obviously, and on mobile web browsers like Opera Mobile, etc.

    I envisage the app interacting with the major social networks like FB, Orkut, MSN IM, Twitter, et al, in a way where friends are messaged and/or have a message posted to their wall. Geo-location functionality is a plus, considering the service/app can be location sensitive in two ways: 1) the immediate location of the user, 2) the desired location of the user. I'd like to incorporate OpenID sign-on, and on the flip side, the service will require that people (service providers) list their specialities/specialisations/interests/areas of expertise, so that matches to user requests can be made by the service, while users' requests are posted into the web universe.

    I've probably described a glut of apps out there, but I'd appreciate feedback on the sort of platform that I should look at using, be it hosted on something like Google's App Engine, or written in Android-friendly code, or whatever. I'm a firm believer in herd mentality, especially at the start of a project that I have very little experience in. The more opinions, the merrier! I can't get very much more specific, since that would give the idea away.

    Thanks for your time, and I look forward to hearing from the wise and experienced and the fresh and innovative alike. Thanks

    Read the article

  • Restore DB - Error RESTORE HEADERONLY is terminating abnormally.

    - by Jordon
    I have taken a backup of a SQL Server 2008 database on the server and downloaded it to my local environment. When I try to restore that database, it keeps giving me the following error:

    An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)

    ADDITIONAL INFORMATION: The media family on device 'C:\go4sharepoint_1384_8481.bak' is incorrectly formed. SQL Server cannot process this media family. RESTORE HEADERONLY is terminating abnormally. (Microsoft SQL Server, Error: 3241)

    For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=09.00.4053&EvtSrc=MSSQLServer&EvtID=3241&LinkId=20476

    I have tried to create a temp DB on the server and restore the same backup file there, and that works. I have also tried a number of times to download the file from the server to my local PC using different options in FileZilla (Auto, Binary), but it's not working. After that, I tried to execute the following command on the server:

    ```sql
    BACKUP DATABASE go4sharepoint_1384_8481 TO DISK=' C:\HostingSpaces\dbname_jun14_2010_new.bak' with FORMAT
    ```

    It gives me the following error:

    Msg 3201, Level 16, State 1, Line 1
    Cannot open backup device 'c:\Program Files\Microsoft SQL Server\MSSQL10.SQLEXPRESS\MSSQL\Backup\ C:\HostingSpaces\dbname_jun14_2010_new.bak'. Operating system error 123 (The filename, directory name, or volume label syntax is incorrect.).
    Msg 3013, Level 16, State 1, Line 1
    BACKUP DATABASE is terminating abnormally.

    After researching I found the following two useful links:

    1) http://support.microsoft.com/kb/290787
    2) http://social.msdn.microsoft.com/Forums/en-US/sqlsetupandupgrade/thread/4d5836f6-be65-47a1-ad5d-c81caaf1044f

    But I still can't restore the database correctly. Any help would be much appreciated. Thanks.

    Read the article

  • R and version control for the solo data analyst

    - by Jeromy Anglim
    Many data analysts that I respect use version control. For example:

    - http://github.com/hadley/
    - See comments on http://permut.wordpress.com/2010/04/21/revision-control-statistics-bleg/

    However, I'm evaluating whether adopting a version control system such as git would be worthwhile. A brief overview: I'm a social scientist who uses R to analyse data for research publications. I don't currently produce R packages. My R code for a project typically includes a few thousand lines of code for data input, cleaning, manipulation, analyses, and output generation. Publications are typically written using LaTeX.

    With regard to version control, there are many benefits which I have read about, yet they seem to be less relevant to the solo data analyst:

    - Backup: I have a backup system already in place.
    - Forking and rewinding: I've never felt the need to do this, but I can see how it could be useful (e.g., you are preparing multiple journal articles based on the same dataset; you are preparing a report that is updated monthly, etc.).
    - Collaboration: Most of the time I am analysing data myself; thus, I wouldn't get the collaboration benefits of version control.

    There are also several potential costs involved with adopting version control:

    - Time to evaluate and learn a version control system
    - A possible increase in complexity over my current file management system

    However, I still have the feeling that I'm missing something. General guides on version control seem to be addressed more towards computer scientists than data analysts. Thus, specifically in relation to data analysts in circumstances similar to those listed above:

    - Is version control worth the effort?
    - What are the main pros and cons of adopting version control?
    - What is a good strategy for getting started with version control for data analysis with R (e.g., examples, workflow ideas, software, links to guides)?

    Read the article

  • Thrift, .NET, Cassandra - Is this the right combination?

    - by Vadi
    I've been evaluating a technology stack for developing a social-network-based application. Below is the stack I think could be well suited for this type of application:

    GUI -- ASP.NET MVC, Flash (Flex)

    Business Services -- Thrift-based services. One of the advantages of using Thrift is that it addresses the scaling problems that will come in the future when the user base increases rapidly. All the business logic can be exposed as services using REST, JSON, etc. This also allows us to go with C++- or Erlang-based services when the situation demands.

    Database -- MySQL, Cassandra. MySQL can be used for storing the data which needs to be persisted. Cassandra will be used for storing global identifiers to the persisted data. Since Cassandra is also very good at scaling by introducing more nodes, this will leverage the Thrift-based services as well. And there is also native support between Cassandra and Thrift.

    Cache Server -- Memcached. Any requests from Business Services will only talk to Memcached if non-dirty data is required; otherwise there are background jobs that invalidate the cache from the database.

    The questions are: Is Thrift (the open-sourced one) production-ready? Is it the right stack for the services layer when the application (GUI) is primarily developed in ASP.NET and the DB is MySQL? Are there any other caveats that anyone here has experienced?

    One of the main objectives behind this stack is to easily scale up with more nodes. It also lets us use Linux boxes, which will reduce our cost significantly.

    Thoughts please...
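    To make the services layer concrete, this is roughly how I picture the ASP.NET tier calling a Thrift service (a sketch only: ProfileService is a made-up IDL service, and I'm assuming the standard client classes that ship with Thrift's C# bindings):

    ```csharp
    using System;
    using Thrift.Protocol;
    using Thrift.Transport;

    class ThriftClientSketch
    {
        static void Main()
        {
            // Host/port are placeholders; ProfileService.Client is generated from the IDL.
            TTransport transport = new TSocket("localhost", 9090);
            TProtocol protocol = new TBinaryProtocol(transport);
            var client = new ProfileService.Client(protocol);

            transport.Open();
            try
            {
                Console.WriteLine(client.getProfile("user-123"));
            }
            finally
            {
                transport.Close();
            }
        }
    }
    ```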

    Read the article

  • WCF - Increase ReaderQuotas on REST service

    - by Christo Fur
    I have a WCF REST service which accepts a JSON string. One of the parameters is a large string of numbers. This causes the following error, which is visible by tracing and using SvcTraceViewer:

    There was an error deserializing the object of type CarConfiguration. The maximum string content length quota (8192) has been exceeded while reading XML data. This quota may be increased by changing the MaxStringContentLength property on the XmlDictionaryReaderQuotas object used when creating the XML reader.

    Now I've read all sorts of articles advising how to rectify this. All of them recommend increasing various config settings on the server and client, e.g.:

    - http://stackoverflow.com/questions/65452/error-serializing-string-in-webservice-call
    - http://bloggingabout.net/blogs/ramon/archive/2008/08/20/wcf-and-large-messages.aspx
    - http://social.msdn.microsoft.com/Forums/en/wcf/thread/f570823a-8581-45ba-8b0b-ab0c7d7fcae1

    So my config file looks like this:

    ```xml
    <webHttpBinding>
      <binding name="webBinding" maxBufferSize="5242880" maxReceivedMessageSize="5242880">
        <readerQuotas maxDepth="5242880" maxStringContentLength="5242880" maxArrayLength="5242880"
                      maxBytesPerRead="5242880" maxNameTableCharCount="5242880"/>
      </binding>
    </webHttpBinding>
    ...
    <endpoint address="/" binding="webHttpBinding" bindingConfiguration="webBinding" />
    ```

    My problem is that I can change this on the server, but there are no WCF config settings on the client, as it's a REST service and I'm just making an HTTP request using the WebClient object. Any ideas?
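    In case it helps, my understanding is that the programmatic equivalent of the server-side binding above looks something like this (a sketch; CarConfigService and ICarConfigService are placeholders for the real service types):

    ```csharp
    using System;
    using System.ServiceModel;
    using System.ServiceModel.Web;

    class HostSketch
    {
        static void Main()
        {
            var binding = new WebHttpBinding
            {
                MaxBufferSize = 5242880,
                MaxReceivedMessageSize = 5242880
            };
            // Same quotas as the <readerQuotas> element in config.
            binding.ReaderQuotas.MaxStringContentLength = 5242880;
            binding.ReaderQuotas.MaxArrayLength = 5242880;
            binding.ReaderQuotas.MaxBytesPerRead = 5242880;

            var host = new WebServiceHost(typeof(CarConfigService), new Uri("http://localhost:8080/"));
            host.AddServiceEndpoint(typeof(ICarConfigService), binding, "");
            host.Open();
            Console.ReadLine();
        }
    }
    ```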

    Read the article

  • Restarting explorer from a batch file only opens an explorer window

    - by Ben Hooper
    In one part of a batch file (kind of), I need to restart Explorer. I use the following method to accomplish this:

    ```bat
    taskkill /f /im explorer.exe >nul
    explorer.exe
    :: I have also tried:
    ::   %winDir%\explorer.exe
    ::   start %winDir%\explorer.exe
    ::   start /b %winDir%\explorer.exe
    ::   start /d %winDir%\explorer.exe  (as suggested by panda-34)
    ::
    :: I've even tried delaying the above commands with: ping localhost -n 11 >nul
    ```

    Then this happens:

    1. explorer.exe is successfully terminated (denoted by the lack of taskbar and desktop)
    2. An explorer window opens, which I am left with indefinitely (see Image 1)
    3. I can only then restart Explorer by manually starting a new process from Task Manager (Win + R doesn't respond), even though explorer.exe is actually already in the process list, strangely enough (see Image 2)

    Now, I say "kind of" as I'm running the batch file from a self-executing SFX archive, created with WinRAR. When executed, the contents of the archive are extracted to %temp% and a user-defined bootstrapper (in this case, my batch file) is run upon successful extraction. The strangest thing about it, though, is that if you manually extract the contents of the archive and run the batch file, then Explorer restarts correctly. It only ever glitches when called from an SFX. I'm experiencing this glitch on Windows 7 x64.

    Link to an SFX archive demonstrating this, if anyone wants it: https://dl.dropbox.com/u/27573003/Social%20Distribution/restart-explorer.exe

    Image 1:
    Image 2:

    Read the article

  • Bing Maps Silverlight control pushpin scaling problems

    - by Rares Musina
    Basically my problem is that I've adapted a piece of code found here: http://social.msdn.microsoft.com/Forums/en-US/vemapcontroldev/thread/62e70670-f306-4bb7-8684-549979af91c1. It does exactly what I want it to do, that is, scale some pushpin images according to the map's zoom level. The only problem is that I've adapted this code to run with the Bing Maps Silverlight control (not Virtual Earth as in the original example), and now the images scale correctly, but they are repositioned and only reach the desired position when my zoom level is maximum. Any idea why? Help will be greatly appreciated :)

    Modified code below:

    ```csharp
    var layer = new MapLayer();
    map.AddChild(layer);

    // Sydney
    layer.AddChild(new Pin
    {
        ImageSource = new BitmapImage(new Uri("pin.png", UriKind.Relative)),
        MapInstance = map
    }, new Location(-33.86643, 151.2062), PositionMethod.Center);
    ```

    becomes something like:

    ```csharp
    layer.AddChild(new Pin
    {
        ImageSource = new BitmapImage(new Uri("pin.png", UriKind.Relative)),
        MapInstance = map
    }, new Location(-33.92485, 18.43883), PositionOrigin.BottomCenter);
    ```

    I am assuming it has something to do with a different way in which Bing Maps anchors its UIElements. Details on that would also be very useful. Thank you!

    Read the article

  • Excel export displaying '#####...'

    - by Cypher
    I'm trying to export an Excel database into .txt (Tab Delimited), but some of my cells are quite large. When I export into a txt some of the cells are exported as '#######....' which is surprisingly useless. Has this happened to anyone else? Do you know an easy fix? Data from one cell of my column:

    Accounting, African Studies, Agricultural/Bioresource Engineering, Agricultural Economics, Agricultural Science, Anatomy/Cell Biology, Animal Biology, Animal Science, Anthropology, Applied Zoology, Architecture, Art History, Atmospheric/Oceanic Science, Biochemistry, Biology, Botanical Sciences, Canadian Studies, Chemical Engineering, Chemistry/Bio-Organic/Environmental/Materials, Church Music Performance, Civil Engineering/Applied Mechanics, Classics, Composition, Computer Engineering, Computer Science, Contemporary German Studies, Dietetics, Early Music Performance, Earth/Planetary Sciences, East Asian Studies, Economics, Electrical Engineering, English Literature/Drama/Theatre/Cultural Studies, Entrepreneurship, Environment, Environmental Biology, Finance, Food Science, Foundations of Computing, French Language/Linguistics/Literature/Translation, Geography, Geography/Urban Systems, German, German Language/Literature/Culture, Hispanic Languages/Literature/Culture, History, Humanistic Studies, Industrial Relations, Information Systems, International Business, International Development Studies, Italian Studies/Medieval/Renaissance, Jazz Performance, Jewish Studies, Keyboard Studies, Kindergarten/Elementary Education, Kindergarten/Elementary Education/Jewish Studies, Kinesiology, Labor/Management Relations, Latin American/Caribbean Studies, Linguistics, Literature/Translation, Management Science, Marketing, Materials Engineering, Mathematics, Mathematics/Statistics, Mechanical Engineering, Microbiology, Microbiology/Immunology, Middle Eastern Studies, Mining Engineering, Music, Music Education, Music History, Music Technology, Music Theory, North American Studies, Nutrition, Operations Management, Organizational Behavior/Human Resources Management, Performing Arts, Philosophy, Physical Education, Physics, Physiology, Plant Sciences, Political Science, Psychology, Quebec Studies, Religious Studies/Scriptures/Interpretations/World Religions, Resource Conservation, Russian, Science for Teachers, Secondary Education, Secondary Education/Music, Secondary Education/Science, Social Work, Sociology, Software Engineering, Soil Science, Strategic Management, Teaching of French/English as a Second Language, Theology, Wildlife Biology, Wildlife Resources, Women's Studies.

    Read the article

  • LINQ to SQL compiled query returns object NOT belonging to submitted DataContext?

    - by Vladimir Kojic
    Compiled query:

    ```csharp
    public static class Machines
    {
        public static readonly Func<OperationalDataContext, short, Machine> QueryMachineById =
            CompiledQuery.Compile((OperationalDataContext db, short machineID) =>
                db.Machines.Where(m => m.MachineID == machineID).SingleOrDefault());

        public static Machine GetMachineById(IUnitOfWork unitOfWork, short id)
        {
            Machine machine;

            // Old code (working)
            //var machineRepository = unitOfWork.GetRepository<Machine>();
            //machine = machineRepository.Find(m => m.MachineID == id).SingleOrDefault();

            // New code (making problems)
            machine = QueryMachineById(unitOfWork.DataContext, id);

            return machine;
        }
    }
    ```

    It looks like the compiled query is returning a result from another data context:

    ```csharp
    [TestMethod]
    public void GetMachinesTest()
    {
        using (var unitOfWork = IoC.Get<IUnitOfWork>())
        {
            // Compiled query
            var machine = Machines.GetMachineById(unitOfWork, 3);
        }

        using (var unitOfWork = IoC.Get<IUnitOfWork>())
        {
            var machineRepository = unitOfWork.GetRepository<Machine>();

            // Get from repository
            var machineFromRepository = machineRepository.Find(m => m.MachineID == 2).SingleOrDefault();
            var machine = Machines.GetMachineById(unitOfWork, 2);

            VerifyHuskyHostMachine(machineFromRepository, 2, "Machine 2", "222222", "H400RS", "MachineIconB.xaml",
                false, true, LicenseType.Licensed, InterfaceType.HuskyHostV2, "10.0.97.2:8080", "10.0.97.2", 8080, "4.0");
            VerifyHuskyHostMachine(machine, 2, "Machine 2", "222222", "H400RS", "MachineIconB.xaml",
                false, true, LicenseType.Licensed, InterfaceType.HuskyHostV2, "10.0.97.2:8080", "10.0.97.2", 8080, "4.0");

            Assert.AreSame(machineFromRepository, machine); // FAIL
        }
    }
    ```

    If I run other (complex) unit tests, I get, as expected: "An attempt has been made to Attach or Add an entity that is not new, perhaps having been loaded from another DataContext." Another important piece of information: this test runs under a TransactionScope!

    UPDATE: It looks like the following link describes a similar problem (has this bug been fixed?): http://social.msdn.microsoft.com/Forums/en-US/linqprojectgeneral/thread/9bcffc2d-794e-4c4a-9e3e-cdc89dad0e38

    Read the article

  • Comparing objects with IDispatch to get main frame only (BHO)

    - by shaimagz
    I don't know if anyone here is familiar with BHOs (Browser Helper Objects), but an expert in C++ can help me too. In my BHO I want to run the OnDocumentComplete() function only on the main frame - the first container, and not all the iframes inside the current page. (An alternative is to run some code only when it is the main frame.) I can't figure out how to detect when it is the main frame that is being populated. After searching in Google I found out that each frame has an "IDispatch* pDisp", and I have to compare it with a pointer to the first one.

    This is the main function:

    ```cpp
    STDMETHODIMP Browsarity::SetSite(IUnknown* pUnkSite)
    {
        if (pUnkSite != NULL)
        {
            // Cache the pointer to IWebBrowser2.
            HRESULT hr = pUnkSite->QueryInterface(IID_IWebBrowser2, (void **)&m_spWebBrowser);
            if (SUCCEEDED(hr))
            {
                // Register to sink events from DWebBrowserEvents2.
                hr = DispEventAdvise(m_spWebBrowser);
                if (SUCCEEDED(hr))
                {
                    m_fAdvised = TRUE;
                }
            }
        }
        else
        {
            // Unregister event sink.
            if (m_fAdvised)
            {
                DispEventUnadvise(m_spWebBrowser);
                m_fAdvised = FALSE;
            }

            // Release cached pointers and other resources here.
            m_spWebBrowser.Release();
        }

        // Call base class implementation.
        return IObjectWithSiteImpl<Browsarity>::SetSite(pUnkSite);
    }
    ```

    This is where I want to know whether it's the main window (frame) or not:

    ```cpp
    void STDMETHODCALLTYPE Browsarity::OnDocumentComplete(IDispatch *pDisp, VARIANT *pvarURL)
    {
        // As you can see, this function gets the IDispatch *pDisp, which is unique to every frame.
        // some code
    }
    ```

    I asked this question on a Microsoft forum, and I got an answer without an explanation of how to actually implement it: http://social.msdn.microsoft.com/Forums/en-US/ieextensiondevelopment/thread/7c433bfa-30d7-42db-980a-70e62640184c

    Read the article

  • How to explain to users the advantages of dumb primary key?

    - by Hao
    Primary key attractiveness

    I have a boss (and also users) who want the primary key to be a sophisticated/smart/attractive control number (sort of like a Social Security number or credit card number format). I just padded the primary key (in views) with zeroes to appease their desire to make the control number sophisticated, smart, and attractive. But they wanted it as: the first 2 digits as the client code, then 4 digits as the year, then the last 4 digits as the transaction number for that client in a given year, with the transaction number resetting to 1 when the next year rolls over. Each client's transactions start with 1. E.g. WM20090001, WM20090002, BB20090001, WM20100001, BB20100001.

    As I wanted to keep things as simple as possible, I forwent embedding their suggested smartness in the primary key; I just keep the primary key auto-incrementing regardless of client and year. But to make it not dull-looking (they really are adamant about making the primary key a smart control number), I made the primary key appear smart to them: in the view query, I put the client code and four-digit year in front of the eight-zero-padded auto-increment key, i.e. WM200900000001. Sort of slug-like information on an auto-incremented primary key.

    By keeping the primary key auto-incrementing regardless of any other information, we avoid potential side effects when a record is edited. For example, if someone enters a transaction on WM by mistake and then edits the client code to BB, with a smart primary key the control numbers of the WM customer would be left with gaps. Or worse yet, instead of letting the control numbers have gaps/holes, the users will request that subsequent records shift up into that gap and have their primary keys re-adjusted (decremented).

    How do you deal with these user requests (reasonable or otherwise)? Do you yield to their request? Or do you continue using a dumb primary key, explain the repercussions of having a very smart/sophisticated primary key, and educate them on the significant advantages of a dumb primary key?

    P.S. Quotable quote (http://articles.techrepublic.com.com/5100-10878_11-1044961.html): "If you hold your tongue the first time users ask what is for them a reasonable request, things will work a lot better in the end."
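    For illustration, the "smart" display value is just string composition over the dumb key (a minimal sketch; the helper and its inputs are hypothetical):

    ```csharp
    // Display-only control number composed from a plain auto-increment key.
    // The stored primary key remains a dumb integer.
    static string FormatControlNumber(string clientCode, int year, long id)
    {
        return string.Format("{0}{1}{2:D8}", clientCode, year, id);
    }

    // FormatControlNumber("WM", 2009, 1) -> "WM200900000001"
    ```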

    Read the article

  • Can't find compiled resource bundles

    - by user351032
    I am using Adobe Flash Builder 4. I've run into this issue with my latest project, but I was able to re-create it with an almost empty project. Here is what I've done:

    1. Created a new Flex project.
    2. Created a locale/en_US folder within this project.
    3. Added a class that extends SparkDownloadProgressBar. All this class does is attempt to create a Label.

    When I try to debug this application, I get the following error:

    Error: Could not find compiled resource bundle 'components' for locale 'en_US'.
        at mx.resources::ResourceManagerImpl/installCompiledResourceBundle()[E:\dev\4.0.0\frameworks\projects\framework\src\mx\resources\ResourceManagerImpl.as:340]
        at mx.resources::ResourceManagerImpl/installCompiledResourceBundles()[E:\dev\4.0.0\frameworks\projects\framework\src\mx\resources\ResourceManagerImpl.as:269]
        at mx.resources::ResourceManagerImpl/processInfo()[E:\dev\4.0.0\frameworks\projects\framework\src\mx\resources\ResourceManagerImpl.as:387]
        at mx.resources::ResourceManagerImpl()[E:\dev\4.0.0\frameworks\projects\framework\src\mx\resources\ResourceManagerImpl.as:122]
        at mx.resources::ResourceManager$/getInstance()[E:\dev\4.0.0\frameworks\projects\framework\src\mx\resources\ResourceManager.as:111]
        at mx.core::UIComponent()[E:\dev\4.0.0\frameworks\projects\framework\src\mx\core\UIComponent.as:3728]
        at spark.components.supportClasses::TextBase()[E:\dev\4.0.0\frameworks\projects\spark\src\spark\components\supportClasses\TextBase.as:154]
        at spark.components::Label()[E:\dev\4.0.0\frameworks\projects\spark\src\spark\components\Label.as:384]
        at Preloader()[C:\SVN\Games\Social\Test\src\Preloader.as:21]
        at mx.preloaders::Preloader/initialize()[E:\dev\4.0.0\frameworks\projects\framework\src\mx\preloaders\Preloader.as:253]
        at mx.managers::SystemManager/http://www.adobe.com/2006/flex/mx/internal::initialize()[E:\dev\4.0.0\frameworks\projects\framework\src\mx\managers\SystemManager.as:1925]
        at mx.managers::SystemManager/initHandler()[E:\dev\4.0.0\frameworks\projects\framework\src\mx\managers\SystemManager.as:2419]

    The Flex Compiler/Additional Compiler Arguments section does contain "-locale en_US", but I do not want to just remove it, as I am planning to have this load different property files based on the localization region at run time, and as I understand it, I will need to add each locale that I am planning to use to the compiler argument line. I am at a loss as to how to attack this problem. If you need any more information from me to help with this, I will be more than happy to provide it. Thanks ahead of time for the help!

    Read the article

  • HttpWebRequest: The request was aborted: The request was canceled.

    - by Emeka
    I've been working on developing a middleman application of sorts, which uploads text to a CMS backend using HTTP POST requests for a series of dates (usually 7 at a time). I am using HttpWebRequest to accomplish this. It seems to work fine for the first date, but when it starts the second date I get: System.Net.WebException: The request was aborted: The request was canceled.

    I've searched around and found the following big clues:

    - http://social.msdn.microsoft.com/Forums/en-US/netfxnetcom/thread/0d0afe40-c62a-4089-9d8b-fb4d206434dc
    - http://www.jaxidian.org/update/2007/05/05/8
    - http://arnosoftwaredev.blogspot.com/2006/09/net-20-httpwebrequestkeepalive-and.html

    And they haven't been too helpful. I've tried overloading GetWebRequest, but that doesn't make sense because I don't make any use of that function. Here is my code: http://pastebin.org/115268. I get the error on line 245, after it has run successfully at least once. I'd appreciate any help I can get, as this is the last step in a project I've been working on for some time. This is my first C#/VS project, so I'm open to any tips, but I would like to focus on getting this problem solved first. Thanks!
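    One pattern the linked articles point toward is making sure every response is disposed so its pooled connection is released; a minimal sketch of a single POST done that way (placeholder URL and payload, not my actual pastebin code):

    ```csharp
    using System;
    using System.IO;
    using System.Net;
    using System.Text;

    class PostSketch
    {
        static string Post(string url, string body)
        {
            var request = (HttpWebRequest)WebRequest.Create(url);
            request.Method = "POST";
            request.ContentType = "application/x-www-form-urlencoded";

            byte[] data = Encoding.UTF8.GetBytes(body);
            request.ContentLength = data.Length;
            using (Stream stream = request.GetRequestStream())
            {
                stream.Write(data, 0, data.Length);
            }

            // Disposing the response releases the pooled connection; leaking it is a
            // common cause of "The request was aborted" on a later request.
            using (var response = (HttpWebResponse)request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                return reader.ReadToEnd();
            }
        }

        static void Main()
        {
            Console.WriteLine(Post("http://example.com/cms/upload", "date=2010-05-21&text=hello"));
        }
    }
    ```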

    Read the article

  • What scalability problems have you solved using a NoSQL data store?

    - by knorv
    NoSQL refers to non-relational data stores that break with the history of relational databases and ACID guarantees. Popular open-source NoSQL data stores include:

    - Cassandra (tabular, written in Java, used by Facebook, Twitter, Digg, Rackspace, Mahalo and Reddit)
    - CouchDB (document, written in Erlang, used by Engine Yard and BBC)
    - Dynomite (key-value, written in Erlang, used by Powerset)
    - HBase (tabular, written in Java, used by Bing)
    - Hypertable (tabular, written in C++, used by Baidu)
    - Kai (key-value, written in Erlang)
    - MemcacheDB (key-value, written in C, used by Reddit)
    - MongoDB (document, written in C++, used by Sourceforge, Github, Electronic Arts and NY Times)
    - Neo4j (graph, written in Java, used by Swedish universities)
    - Project Voldemort (key-value, written in Java, used by LinkedIn)
    - Redis (key-value, written in C, used by Engine Yard, Github and Craigslist)
    - Riak (key-value, written in Erlang, used by Comcast and Mochi Media)
    - Ringo (key-value, written in Erlang, used by Nokia)
    - Scalaris (key-value, written in Erlang, used by OnScale)
    - ThruDB (document, written in C++, used by JunkDepot.com)
    - Tokyo Cabinet/Tokyo Tyrant (key-value, written in C, used by Mixi.jp (a Japanese social networking site))

    I'd like to know about specific problems you - the SO reader - have solved using data stores, and which NoSQL data store you used.

    Questions:

    - What scalability problems have you used NoSQL data stores to solve?
    - What NoSQL data store did you use?
    - What database did you use before switching to a NoSQL data store?

    I'm looking for first-hand experiences, so please do not answer unless you have that.

    Read the article

  • Has anyone used Rational Team Concert (RTC)?

    - by FryGuy
    The company I work for is currently evaluating replacements for SourceSafe, and for various reasons, I think RTC will be chosen. I'm a little scared that we're going to end up with a solution that isn't the best for our situation. I've tried researching it a little, but all I have been able to find is marketing material; nothing about how it actually works (the paradigms it uses, etc.).

    Our team is around 8 developers and 2 QA people on a single project (plus 4-5 more people who would be using it for their independent project). RTC seems to be targeted at larger teams, but ours is relatively small. Does anyone have experience using RTC on a smaller team?

    The project that would be using it is a .NET/WPF application, so we would primarily be working in Visual Studio. Is the Visual Studio integration any good, or are we stuck having to keep Eclipse open on top of Visual Studio?

    Personally, I have been using Bazaar as my personal source control (checking out of/into SourceSafe from a branch), as well as for personal projects. Does RTC incorporate features of "third generation" version control systems, such as first-class branching/merging and changesets rather than file changes, and good visualization of where changes come from?

    Also, what are the general pros and cons?

    Read the article
