Search Results

Search found 9296 results on 372 pages for 'scheduled task'.


  • C#: A "Dumbed-Down" C++?

    - by James Michael Hare
    I was spending a lovely day this last weekend watching my sons play outside in one of the better weekends we've had here in Saint Louis for quite some time, and whilst watching them and making sure no limbs were broken or eyes poked out with sticks and other various potential injuries, I was perusing (in the correct sense of the word) this month's MSDN magazine to get a sense of the latest VS2010 features in both IDE and in languages. When I got to the back pages, I saw a wonderful article by David S. Platt entitled, "In Praise of Dumbing Down"  (msdn.microsoft.com/en-us/magazine/ee336129.aspx).  The title captivated me and I read it and found myself agreeing with it completely, especially as it related to my first post on divorcing C++ as my favorite language. Unfortunately, as Mr. Platt mentions, the term dumbing-down has negative connotations, but it is really and truly a good thing.  You are, in essence, taking something that is extremely complex and reducing it to something that is much easier to use and far less error prone.  Adding safeties to power tools and anti-kick mechanisms to chainsaws are in some sense "dumbing them down" for the common user -- but that also makes them safer and more accessible.  This was exactly my point with C++ and C#.  I did not mean to imply that C++ is not a useful or good language, but that in a very high percentage of cases, it is too complex and error prone for the job at hand. Choosing the correct programming language for a job is a lot like choosing any other tool for a task.  For example: if I want to dig a French drain in my lawn, I can attempt to use a huge tractor-like backhoe and the job would be done far quicker than if I dug it by hand.  I can't deny that the backhoe has the raw power and speed to perform.  But you also cannot deny that my chances of injury, or of severing utility lines or other resources, climb exponentially in inverse proportion to the amount of training I may have on that machinery. Is C++ a powerful tool?  Oh yes, and it's great for those tasks where speed and performance are paramount.  But for most of us, it's the wrong tool.  And keep in mind, I say this even though I have 17 years of experience in using it and feel myself highly adept in utilizing its features both in the standard libraries, the STL, and in supplemental libraries such as BOOST, which, although they greatly help with adding powerful features quickly, do very little to curb the relative dangers of the language. So, you may say, the fault is in the developer: if the developer had higher skills or if we only hired C++ experts this would not be an issue.  Now, I will concede there is some truth to this.  Obviously, the more highly skilled the C++ developers you hire, the better the chance they will produce performant and error-free code.  However, what good is that to the average organization that cannot afford a full stable of C++ experts? That's my point with C#:  It's like a kinder, gentler C++.  It gives you nearly the same speed, and in many ways even more power than C++, and it gives you a much softer cushion for novices to fall against if they code less-than-optimally.  A bug is a bug, of course, in any language, but C# does a good job of hiding and taking on the task of handling almost all of the resource issues that make C++ so tricky.  For my money, C# is much more maintainable, more feature-rich, second only slightly in performance, faster to market, and -- last but not least -- safer and easier to use.  
That's why, where I work, I much prefer to see the developers moving to C#.  The quantity of bugs is much lower, and we don't need to hire "experts" to achieve the same results since the language itself handles those resource pitfalls so prevalent in poorly written C++ code.  C++ will still have its place in the world, and I'm sure I'll still use it now and again where it is truly the correct tool for the job, but for nearly every other project C# is a wonderfully "dumbed-down" version of C++ -- in the very best sense -- and to me, that's the smart choice.
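    To make the resource-handling point concrete (this example is mine, not from the original post, and the file name is just a placeholder), here is a minimal C# sketch of the using/IDisposable pattern that replaces the manual cleanup C++ requires on every early-return and exception path:

        using System;
        using System.IO;

        class ReportReader
        {
            static void Main()
            {
                // The using statement guarantees Dispose() is called on the reader even if
                // ReadToEnd() throws; the compiler expands it into a try/finally block.
                using (var reader = new StreamReader("report.txt"))
                {
                    string contents = reader.ReadToEnd();
                    Console.WriteLine(contents.Length);
                }

                // Memory itself is garbage collected, so there is no matching "delete" to
                // forget; only unmanaged resources (files, sockets, handles) need this pattern.
            }
        }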

    Read the article

  • What is the SharePoint Action Framework and Why do I need it ?

    - by SAF
    For those out there that are a little curious as to whether SAF is any use to your organisation, please read this FAQ.  What is SAF ? SAF is free to use. SAF is the "SharePoint Action Framework"; it was built by myself and Hugo (plus a few others along the way). SAF is written entirely in C#; the code is available from http://saf.codeplex.com.   SAF is a way to automate SharePoint configuration changes. An Action is a command/class/task/script written in C# that performs a unit of execution against SharePoint such as "CreateWeb"  or "AddLookupColumn". A SAF Macro is a collection of one or more Actions. A SAF Macro can be run from MSBuild, a Feature, StsAdm or plain old .NET code. Parameters can be passed to a Macro at run-time from a variety of sources such as "Environment Variable", "*.config", "MSBuild Properties", Feature Properties, command line args, .NET code. SAF emits lots of trace statements at run-time; these can be viewed using "DebugView". One Action can pass parameters to another Action. Parameters can be set using Expression Syntax such as "DateTime.Now".  You should consider SAF if you suffer from one of the following symptoms... "Our developers write lots of code to deploy changes at release time - it's always rushed" "I don't want my developers shelling out to Powershell or Stsadm from a Feature". "We have loads of Console applications now, I have lost track of where they are, or what they do" "We seem to be writing similar scripts against SharePoint in lots of ways, testing is hard". "My scripts often have lots of errors - they are done at the last minute". "When something goes wrong - I have no idea what went wrong or how to solve it". "Our Features get stuck and bomb out half way through - there's no way to roll them back". "We have tons of Features now - I can't keep track". "We deploy Features to run one-off tasks" "We have a library of reusable scripts, but we can only run it in one way; sometimes we want to run it from MSBuild and a Feature". "I want to automate the deployment of changes to our development environment". "I would like to run a housekeeping task on a scheduled basis"   So I like the sound of SAF - what are the problems?  Realistically, there are a few things that need to be considered: Someone on your team will need to spend a day or 2 understanding SAF and deciding exactly how you want to use it. I would suggest a Tech Lead, SysAdm or SP Architect will need to download it, try out the examples, and look through the unit tests. Ask us questions. Although SAF can be downloaded and set up in a few minutes, you will still need to address issues such as "Do you want to execute your Macros in MSBuild or from a Feature?" You will need to decide who is going to do your deployments - is it each developer for themselves, or do you require a dedicated Build Manager? As most environments (Dev, QA, Live etc) require different settings (e.g. Urls, Database names, accounts etc), you will more than likely want to define these and set up a properties file for each environment. (These can then be injected into SAF at run-time.) There may be no Action to solve your particular problem. If this is the case, suggest it to us - we can try and write it, or write it yourself. It's very easy to write a new Action - we have an approach to easily unit test it, document it and author it. For example, I wrote one to deploy  a WSP in 2 hours the other day. Alternatively, SAF can also call Stsadm commands and Powershell scripts.   Anyway, I do hope this helps! 
If you still need help, or a quick start, we can also offer consultancy around SAF. If you want to know more give us a call or drop an email to [email protected]
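    For readers wondering what an Action and a Macro might look like in code, here is a purely illustrative C# sketch. The type and member names below are my own invention for this page, not the actual SAF API (see saf.codeplex.com for the real thing); the point is simply that each Action is a small unit of execution and a Macro runs an ordered list of them with injected parameters.

        using System;
        using System.Collections.Generic;

        // Hypothetical shapes only -- NOT the real SAF types.
        interface IAction
        {
            void Execute(IDictionary<string, string> parameters);
        }

        class CreateWeb : IAction
        {
            public string Url { get; set; }

            public void Execute(IDictionary<string, string> parameters)
            {
                // A real Action would call the SharePoint object model here and emit
                // trace output that a tool such as DebugView could display.
                Console.WriteLine("Creating web " + Url + " on " + parameters["WebApplication"]);
            }
        }

        class Macro
        {
            private readonly List<IAction> _actions = new List<IAction>();
            public void Add(IAction action) { _actions.Add(action); }

            public void Run(IDictionary<string, string> parameters)
            {
                // Actions execute in order; in SAF the parameters could come from
                // *.config files, MSBuild properties or Feature properties per environment.
                foreach (IAction action in _actions)
                    action.Execute(parameters);
            }
        }

        class Program
        {
            static void Main()
            {
                var macro = new Macro();
                macro.Add(new CreateWeb { Url = "/sites/hr" });
                macro.Run(new Dictionary<string, string> { { "WebApplication", "http://devserver" } });
            }
        }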

    Read the article

  • ArchBeat Link-o-Rama for October 14-20, 2012

    - by Bob Rhubart
    The Top 10 items shared on the OTN ArchBeat Facebook page for the week of October 14-21, 2012. Panel: On the Impact of Software | InfoQ Les Hatton (Oakwood Computing Associates), Clive King (Oracle), Paul Good (Shell), Mike Andrews (Microsoft) and Michiel van Genuchten (moderator) discuss the impact of software engineering on our lives in this panel discussion recorded at the Computer Society Software Experts Summit 2012. ResCare Solves Content Lifecycle Challenges with Oracle WebCenter Learn how ResCare solves content lifecycle challenges with Oracle WebCenter. Speakers: Joe Lichtefeld, VP of Application Services & PMO, ResCare Wayne Boerger, Product Manager, TEAM Informatics Doug Thompson, EVP Global Development, TEAM Informatics Date: Tuesday, October 30, 2012 Time: 10:00 a.m. PT / 1:00 p.m. ET WebLogic Server 11gR1 Interactive Quick Reference "The WebLogic Server 11gR1 Administration interactive quick reference," explains Juergen Kress, "is a multimedia tool for various terms and concepts used in WebLogic Server architecture. This tool is available for administrators for online or offline use. This is built as a multimedia web page which provides descriptions of WebLogic Server Architectural components, and references to relevant documentation. This tool offers valuable reference information for any complex concept or product in an intuitive and useful manner." Oracle ACE Directors Nordic Tour 2012 : Venues and BI Presentations | Mark Rittman Oracle ACE Director Mark Rittman shares information on the Oracle ACE Director Tour, as the community leaders make their way through the land of the midnight sun, with events in Copenhagen, Stockholm, Oslo and Helsinki. Mobile Apps for EBS | Capgemini Oracle Blog Capgemini solution architect Satish Iyer briefly describes how Oracle ADF and Oracle SOA Suite can be used to fill the gap in mobile applications for Oracle EBS. Introducing the New Face of Fusion Applications | Misha Vaughan Oracle ACE Directors Debra Lilly and Floyd Teter have already blogged about the new face of Oracle Fusion Applications. Now Applications User Experience Architect Misha Vaughan shares a brief overview of how the Oracle Applications User Experience (UX) team developed the new look. BPM 11g - Dynamic Task Assignment with Multi-level Organization Units | Mark Foster "I've seen several requirements to have a more granular level of task assignment in BPM 11g based on some value in the data passed to the process," says Fusion Middleware A-Team architect Mark Foster. "Parametric Roles is normally the first port of call to try to satisfy this requirement, but in this blog we will show how a lot of use-cases can be satisfied by the easier to implement and flexible Organization Unit." OTN Architect Day Los Angeles - Oct 25 Oracle Technology Network Architect Day in Los Angeles happens in one week. Register now to make sure you don't miss out on a rich schedule of expert technical sessions and peer interaction covering the use of Oracle technologies in cloud computing, SOA, and more. Even better: it's all free. When: October 25, 2012, 8:30am - 5:00pm. Where: Sofitel Los Angeles, 8555 Beverly Boulevard, Los Angeles, CA 90048. Oracle VM VirtualBox 4.2.2 released | Oracle's Virtualization Blog The Fat Bloke weighs in with a short post on where you can find information and the download for the latest VirtualBox release. 
Advanced Oracle SOA Suite #OOW 2012 SOA Presentations The Oracle SOA Product Management team has compiled a complete list of all twelve of their Oracle SOA Suite presentations from Oracle OpenWorld 2012, with links to the slide decks. Thought for the Day "Software: do you write it like a book, grow it like a plant, accrete it like a pearl, or construct it like a building?" — Jeff Atwood Source: softwarequotes.com

    Read the article

  • How to Mentor a Junior Developer

    - by Josh Johnson
    This title is a little broad but I may need to give a little background before I can ask my question properly. I know that similar questions have been asked here already. But in my case I'm not asking if I should be mentoring someone or if the person is a good fit for being a software developer. That is not my place to judge. I have not been asked outright, but it is apparent that myself and other fellow senior developers are to mentor the new developers that start here. I have no problem with this whatsoever and, in many cases, it lends me a fresh perspective on things and I end up learning in the process. Also, I remember how beneficial it was in the beginning of my career when someone would take some time to teach me something. When I say "new developer" they could be anywhere from fresh out of college to having a year or two of experience. Recently and in the past we've had people start here who seem to have an attitude toward development/programming which is different from mine and hard for me to reconcile; they seem to extract just enough information to get the task done but not really learn from it. I find myself going over and over the same issues with them. I understand that part of this could be a personality thing, but I feel it's my job to do my best and sort of push them out of the nest while they're under my wing, so to speak. How can I impart just enough information so that they will learn but not give so much as to solve the problem for them? Or perhaps: What's the proper response to questions that are designed to take the path of least resistance and, in essence, force them to learn instead of take the easy way out? These questions are probably more general teaching questions and don't have that much to do specifically with software development. Note: I do not get a say in what tasks they are working on. Management doles the task out and it could be anything from a very simple bug fix to starting an entire application by themselves. While this is not ideal by any means and obviously presents its own gauntlet of challenges, I feel it's a topic best left for another question. So the best I can do is help them with the problem at hand and try to help them break it down into simpler problems and also check their commit logs and point out mistakes that they made. My main objectives are to: Help them out and give them the tools they need to start becoming more self-reliant. Steer them in the right direction and break bad development habits early on. Lessen the amount of time I spend with them (the personality type described above seems to need much more one-on-one time and does not do well over IM or email. While that's generally fine, I can't always stop what I'm working on, break my stride, and help them debug an error on a moments notice; I have my own projects that need to get done).

    Read the article

  • ArchBeat Link-o-Rama for 2012-06-29

    - by Bob Rhubart
    Backward-compatible vs. forward-compatible: a tale of two clouds | William Vambenepe "There is the Cloud that provides value by requiring as few changes as possible. And there is the Cloud that provides value by raising the abstraction and operation level," says William Vambenepe. "The backward-compatible Cloud versus the forward-compatible Cloud." Vambenepe was a panelist on the recent ArchBeat podcast Public, Private, and Hybrid Clouds. Andrejus Baranovskis's Blog: ADF 11g PS5 Application with Customized BPM Worklist Task Flow (MDS Seeded Customization) Oracle ACE Director Andrejus Baranovskis investigates "how you can customize a standard BPM Task Flow through MDS Seeded customization." Oracle OpenWorld 2012 Music Festival If, after a day spent in sessions at Oracle OpenWorld, you want nothing more than to head back to your hotel for a quiet evening spent responding to email, please ignore the rest of this message. Because every night from Sept 30 to Oct 4 the streets of San Francisco will pulsate with music from a vast array of bands representing more musical styles than a single human brain can comprehend. It's the first ever Oracle Music Festival, baby, 7:00pm to 1:00am every night. Are those emails that important...? Resource Kit: Oracle Exadata - includes demos, videos, product datasheets, and technical white papers. This free resource kit includes several customer case study videos, two 3D product demos, several product datasheets, and three technical architecture white papers. Registration is required for those who don't already have a free Oracle.com membership account. Some execs contemplate making 'Bring Your Own Device' mandatory | ZDNet "Companies and agencies are recognizing that individual employees are doing a better job of handling and managing their devices than their harried and overworked IT departments – who need to focus on bigger priorities, such as analytics and cloud," says ZDNet SOA blogger Joe McKendrick. Podcast Show Notes: Public, Private, and Hybrid Clouds All three parts of this discussion are now available. Featuring a panel of leading Oracle cloud computing experts, including Dr. James Baty, Mark T. Nelson, Ajay Srivastava, and William Vambenepe, the discussion covers an overview of the various flavors of cloud computing, the importance of standards, why cloud computing is a paradigm shift—and why it isn't, and advice on what architects need to know to take advantage of the cloud. And for those who prefer reading to listening, a complete transcript is also available. Amazon AMIs and Oracle VM templates (Cloud Migrations) Cloud migration expert Tom Laszewski shares an objective comparison of these two resources. IOUC : Blogs : Read the latest news on the global user group community - June 2012! The June 2012 edition of "Are You a Member Yet?"—the quarterly newsletter about Oracle user group communities around the world. Webcast: Introducing Identity Management 11g R2 - July 19 Date: Thursday, July 19, 2012 Time: 10am PT / 1pm ET Please join Oracle and customer executives for the launch of Oracle Identity Management 11g R2, the breakthrough technology that dramatically expands the reach of identity management to cloud and mobile environments. Thought for the Day "The most important single aspect of software development is to be clear about what you are trying to build." — Bjarne Stroustrup Source: SoftwareQuotes.com

    Read the article

  • Big Data – Interacting with Hadoop – What is Sqoop? – What is Zookeeper? – Day 17 of 21

    - by Pinal Dave
    In yesterday's blog post we learned the importance of Pig and Pig Latin in the Big Data story. In this article we will understand what Sqoop and Zookeeper are in the Big Data story. There are two important components one should learn about when learning to interact with Hadoop – Sqoop and Zookeeper. What is Sqoop? Most businesses store their data in an RDBMS as well as other data warehouse solutions. They need a way to move data to the Hadoop system for various processing and return it back from Hadoop to the RDBMS. The data movement can happen in real time or in bulk at various intervals. We need a tool which can help us move this data from SQL to Hadoop and from Hadoop to SQL. Sqoop (SQL to Hadoop) is such a tool: it extracts data from non-Hadoop data sources, transforms it into a format which Hadoop can use, and later loads it into HDFS. Essentially it is an ETL tool in that it Extracts, Transforms and Loads from SQL to Hadoop. The best part is that it can also extract data from Hadoop and load it into non-Hadoop (RDBMS) data stores. Essentially, Sqoop is a command line tool which does SQL to Hadoop and Hadoop to SQL. It is a command line interpreter. It creates MapReduce jobs behind the scenes to import data from an external database into HDFS. It is a very effective and easy to learn tool for nonprogrammers. What is Zookeeper? ZooKeeper is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services. In other words, Zookeeper is a replicated synchronization service with eventual consistency. In simpler words – in a Hadoop cluster there are many different nodes and one node is the master. Let us assume that the master node fails for any reason. In this case, the role of the master node has to be transferred to a different node. The main role of the master node is managing the writers, as that task requires persistence in the order of writing. In this kind of scenario Zookeeper will assign a new master node and make sure that the Hadoop cluster performs without any glitch. Zookeeper is Hadoop's method of coordinating all the elements of these distributed systems. Here are a few of the tasks which Zookeeper is responsible for. Zookeeper manages the entire workflow of starting and stopping various nodes in the Hadoop cluster. When any process in a Hadoop cluster needs a certain configuration to complete its task, Zookeeper makes sure that the node gets the necessary configuration consistently. In case the master node fails, Zookeeper can assign a new master node and make sure the cluster works as expected. There are many other tasks Zookeeper performs when it comes to Hadoop cluster communication. Basically, without the help of Zookeeper it is not possible to design a new fault tolerant distributed application. Tomorrow In tomorrow's blog post we will discuss a very important component of the Big Data ecosystem – Big Data Analytics. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Moving from Silverlight 4 Beta to RC - Part 1

    The other day I had finished up my Task-It Webinar, written a few blog posts, and knew the time had come to move from my Silverlight 4 Beta environment up to the latest RC (release candidate) bits that were released last Monday. What disappointed me when I went to the Silverlight 4 Information Page is that it told me what to install, but not what to uninstall first. Uninstalling I'm not entirely sure if I had to uninstall anything, or if installing the new stuff would just work, but in poking around the web I found posts stating that you must uninstall the following items first. Unfortunately I'm going by memory here and have not been able to find my way back to the magic page I found in the myriad of posts that I went through: Microsoft VisualStudio Beta 2 Microsoft .NET Framework 4 Extended (apparently this must be done *before* the next one) Microsoft .NET Framework Client Profile Microsoft Silverlight 4 Tools for Visual Studio 2010 WCF RIA Services Preview for Visual Studio 2010 While I was at it, I removed a bunch of other stuff, like Blend 3, Blend 4, the SDK's associated with them, and a bunch of other stuff that was old. Of course, I didn't really want/need to keep any Silverlight 3 stuff around as I am developing Task-It in Silverlight 4. If I need a Silverlight 3 environment at some point I'll set it up in a virtual environment. NOTE: One thing that I did not uninstall is the Microsoft Silverlight 4 Toolkit November 2009. The reason is that they have not released the March version yet, so if you uninstall this, you'll end up having to reinstall it. Installing OK, now that I had all of that old stuff off my machine, it was time to get the new stuff. For this part I liked Tim Heuer's post, A Guide to What has Changed in Silverlight 4 RC better than the Silverlight 4 Information Page. VisualStudio 2010 RC - I installed Ultimate, but you may not need that version. No harm, it's free for now anyway. I downloaded the .exe and the 3 .rar files, then ran the .exe. I then extracted the contents of the ISO (using WinZip) to a new directory, and now had Setup.exe, a bunch of .cab files and some other assorted stuff. I simply ran Setup.exe and chose custom install (only because I wanted to uncheck Visual C++...I don't really have a need for that). Silverlight 4 Tools for Visual Studio 2010 - As mentioned in Tim's blog post, this installs the Silverlight developer runtime, SDK, tools, and WCF RIA Services. WCF RIA Services Toolkit March 2010 - I'm not sure if/when I'll need any of this stuff, but no harm in installing it anyway. Expression Blend 4 beta - Only if you plan to use Blend, which I do. Windows Phone Developer Tools - Only if you are interested in playing with Windows Phone 7 development. Wrap Up Hopefully I got those steps right. If anyone finds anything I've missed, please just add a comment to this post and I'll update it accordingly.

    Read the article

  • Crime Scene Investigation: SQL Server

    - by Rodney Landrum
    “The packages are running slower in Prod than they are in Dev” My week began with this simple declaration from one of our lead BI developers, quickly followed by an emailed spreadsheet demonstrating that, over 5 executions, an extensive ETL process was running average 630 seconds faster on Dev than on Prod. The situation needed some scientific investigation to determine why the same code, the same data, the same schema would yield consistently slower results on a more powerful server. Prod had yet to be officially christened with a “Go Live” date so I had the time, and having recently been binge watching CSI: New York, I also had the inclination. An inspection of the two systems, Prod and Dev, revealed the first surprise: although Prod was indeed a “bigger” system, with double the amount of RAM of Dev, the latter actually had twice as many processor cores. On neither system did I see much sign of resources being heavily taxed, while the ETL process was running. Without any real supporting evidence, I jumped to a conclusion that my years of performance tuning should have helped me avoid, and that was that the hardware differences explained the better performance on Dev. We spent time setting up a Test system, similarly scoped to Prod except with 4 times the cores, and ported everything across. The results of our careful benchmarks left us truly bemused; the ETL process on the new server was slower than on both other systems. We burned more time tweaking server configurations, monitoring IO and network latency, several times believing we’d uncovered the smoking gun, until the results of subsequent test runs pitched us back into confusion. Finally, I decided, enough was enough. Hadn’t I learned very early in my DBA career that almost all bottlenecks were caused by code and database design, not hardware? It was time to get back to basics. With over 100 SSIS packages and hundreds of queries, each handling specific tasks such as file loads, bulk inserts, transforms, logging, and so on, the task seemed formidable. And yet, after barely an hour spent with Profiler, Extended Events, and wait statistics DMVs, I had a lead in the shape of a query that joined three tables, containing millions of rows, returned 3279 results, but performed 239K logical reads. As soon as I looked at the execution plans for the query in Dev and Test I saw the culprit, an implicit conversion warning on a join predicate field that was numeric in one table and a varchar(50) in another! I turned this information over to the BI developers who quickly resolved the data type mismatches and found and fixed “several” others as well. After the schema changes the same query with the same databases ran in under 1 second on all systems and reduced the logical reads down to fewer than 300. The analysis also revealed that on Dev, the ETL task was pulling data across a LAN, whereas Prod and Test were connected across slower WAN, in large part explaining why the same process ran slower on the latter two systems. Loading the data locally on Prod delivered a further 20% gain in performance. As we progress through our DBA careers we learn valuable lessons. Sometimes, with a project deadline looming and pressure mounting, we choose to forget them. I was close to giving into the temptation to throw more hardware at the problem. I’m pleased at least that I resisted, though I still kick myself for not looking at the code on day one. 
It can seem a daunting prospect to return to the fundamentals of the code so close to roll out, but with the right tools, and surprisingly little time, you can collect the evidence that reveals the true problem. It is a lesson I trust I will remember for my next 20 years as a DBA, if I’m ever again tempted to bypass the evidence.
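    One footnote for the .NET developers on such a team: the same class of implicit-conversion problem can be introduced from client code, because SqlCommand.Parameters.AddWithValue sends a .NET string as nvarchar, and comparing that against a varchar column forces SQL Server to convert the column side and abandon an index seek. The table and column names below are invented for illustration; the parameter-typing technique is the point.

        using System.Data;
        using System.Data.SqlClient;

        class OrderLookup
        {
            static DataTable FindOrder(SqlConnection conn, string orderNumber)
            {
                var cmd = new SqlCommand(
                    "SELECT OrderId, Total FROM dbo.Orders WHERE OrderNumber = @num", conn);

                // Risky: AddWithValue infers nvarchar for a .NET string. Against a
                // varchar(50) column the column is implicitly converted, much like the
                // numeric-vs-varchar join predicate described above.
                // cmd.Parameters.AddWithValue("@num", orderNumber);

                // Safer: declare the parameter type and length to match the column.
                cmd.Parameters.Add("@num", SqlDbType.VarChar, 50).Value = orderNumber;

                var results = new DataTable();
                new SqlDataAdapter(cmd).Fill(results);
                return results;
            }
        }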

    Read the article

  • Session Sharing with another User on *NIX and Windows

    - by Giri Mandalika
    Oracle Solaris Since Solaris is not widely known for its graphical interface, let's just focus on sharing a terminal session in read-only mode with another user on the same system. Here is an example. eg., % finger Login Name TTY Idle When Where root Super-User pts/1 Sat 16:57 dhcp-amer-vpn-rmdc-a sunperf ??? pts/2 4 Sat 16:41 pitcher.sfbay.sun.com In this example, two users root and sunperf are connected to the same system from two different terminals pts/1 and pts/2 respectively. If the root user wants to show something to the sunperf user -- what s/he is doing in her/his terminal, for example -- it can be accomplished with the following command. script -a /dev/null | tee -a <target_terminal> eg., # script -a /dev/null | tee -a /dev/pts/2 Script started, file is /dev/null # # uptime 5:04pm up 1 day(s), 2:56, 2 users, load average: 0.81, 0.81, 0.81 # # isainfo -v 64-bit sparcv9 applications crc32c cbcond pause mont mpmul sha512 sha256 sha1 md5 camellia kasumi des aes ima hpc vis3 fmaf asi_blk_init vis2 vis popc 32-bit sparc applications crc32c cbcond pause mont mpmul sha512 sha256 sha1 md5 camellia kasumi des aes ima hpc vis3 fmaf asi_blk_init vis2 vis popc v8plus div32 mul32 # # exit Script done, file is /dev/null After the script .. | tee .. command, the sunperf user should be able to see the root user's stdin and stdout contents in her/his own terminal until the script session exits in the root user's terminal. Since this kind of sharing is based on capturing and redirecting the contents to the target terminal, the users on the receiving end won't be able to see whatever is being edited on the initiator's terminal [using editors such as vi]. Also it is not possible to share the session with any connected user on the system unless the initiator has the necessary permissions and privileges. The script utility records everything printed in a terminal session, while the tee utility replicates the contents of the screen capture onto the standard output of the target terminal. The tee utility does not buffer the output - so, the screen capture from the initiator's terminal appears almost right away in the target terminal. Though I never tested it, this technique may work on all *NIX and Linux flavors with little or no changes. Also there might be other ways to accomplish this. [Thanks to Sujeet for sharing this tip] Microsoft Windows Most Windows users may rely on VNC services to share a desktop session. Another way to share the desktop session is to use the Remote Desktop Connection (RDC) client. Here are the steps. Connect to the target Windows system using the Remote Desktop Connection client Launch Windows Task Manager Navigate to the "Users" tab Find the user session that you want to connect to and have full control over as the other user who is currently holding that session Select the user name in Windows Task Manager, right click and choose the option "Remote Control" A window pops up on the other user's session with the message "<USER> is requesting to control your session remotely. Do you accept the request?" Once the other user says "Yes", you will be granted access to that session. From then on, both users should be able to see the same screen and even control the session from their respective workstations.

    Read the article

  • Where Have All the Ugly Forms Gone? Users and ADF Took Care Of It

    - by ultan o'broin
    Sometimes I hear that our application demos are a bit too "cutesy" and that we never talk about user roles that have lots of data entry as a requirement. Some (no names) consider those old clunker forms, with the myriad rows of fields, to be super-productive for data clerks. We do have such roles covered in Oracle Fusion Applications for sure. But consider what is really the issue here: productivity. Check out how the Oracle Fusion Financials Applications User Experience team went about designing for productivity when receiving and entering invoice data, for example. See how Fusion Financials caters so well for input and control of data? Central to all this is knowing the users and how they work: what tasks do they need to perform, and when. Read more about Fusion Financials productivity in the white paper, Get It Done Fast, Get It Done Right: The Oracle Fusion Financials User Experience. Now and then, I see forms that weren't designed for end user activity at all. Instead, they were designed by developers or by the IT department around the database schema. Forms with literally dozens of fields on the same page, sometimes. Forms that give the impression there was only one task involved, when there may have been several. At times, completing one of these huge forms accurately became so tedious that, under pressure, it made more sense for the user to complete it as quickly as possible and then let somebody else check it for accuracy and fill in the gaps from data emailed along in spreadsheet form. Data accuracy is critical in our business. Not good. Not efficient. Not productive. So here are a few basics on forms design for data entry-type user roles. A great place for developers to start exploring what is possible with forms layout is the Rich Client User Experience (RCUX) guidance on Form Layout, using ADF components. User-Centered Forms Design Considerations The starting point--something you must always keep in mind with your own design--is design for the end user. Find a representative end user, and keep that user engaged throughout the design, deployment, and test process. Consider these points in user testing those forms: Are there automated or technical solutions to entering the data that avoid manual input in the first place? For example, imports, uploads, OCR, whatever. Some day we will be able to tell Siri to do it, but leave that for now. Design your form to reflect the task involved (i.e., the business process) and not the database schema. On the form, group like fields together, logically. Eliminate duplicate data entry or prepopulate from previous data entry. Allow users to complete fields in the order they wish (i.e., no interdependency). Allow for tabbing between fields (keyboard is faster than mouse), so know how the browser supports this (see that RCUX guideline). Allow for final validation at the page level, not at field-level entry. Way better for heads-down users. For example, ADF messages allow you to see a list of all validation errors on a page on a final submit or navigation action and to easily navigate to the point of error. Better still, be error tolerant. Allow users to enter data in formats they are comfortable with. Bind any relevant user preference setting to the input format allowed (for example, the locale date format). Explore what data entry conversion can do for you automatically too (see the ADF converter demos; convenience patterns can also be written). Only ask for data input when it's needed. Get rid of, or hide, optional fields. 
Cut down on the number of mandatory fields, and mark them clearly (use a *). Clearly label the fields in plain language. I am sure you may have a few more tips on forms design for data entry users. Remember the user before finding the comments.

    Read the article

  • Get Ready for Anytime, Anywhere Engagement

    - by Christie Flanagan
    Are you ready for 2015?  According to IDC, 2015 is the year when more users are projected to access the internet using mobile devices than with PC’s or other wired devices.  It’s no doubt that mobile devices are a critical means of communication today, and are on track to become increasingly more important in the coming years. However, device formats are so varied that delivering a mobile web experience that will engage site visitors and enhance your brand can be a daunting task. Solutions that empower organizations to easily extend their web presence to the mobile channel, while saving significant time and effort in managing mobile sites, are now essential in our ever connected mobile world. So what are some of the things organizations should look for in such a solution? Mobile device form factors, networks, protocols, and browsers vary widely, and reformatting web content for thousands of different device and software combinations is a prohibitive task. An effective mobile solution can make this process seamless by automatically formatting designated web content for mobile delivery.  By automatically detecting a site visitor’s device configuration, the selected web content can be sized and formatted for optimal display on that particular device. This can save tremendous time involved in building, formatting, and maintaining individual websites or mobile applications for different mobile devices. It’s not enough to simply support the thousands of different mobile device types that are out there. It’s also critical to make it easy for marketers and other business users to manage mobile sites and mobile content. Those responsible for maintaining an organization’s web and mobile experiences need the ability to edit content using rich text editor tools and then preview that content directly in the context of the mobile website and the traditional website, ideally from the same business user interface. Powerful capabilities such as these make managing the web experience for mobile devices easy, even with frequently changing content, across a multitude of different devices. When content or business needs change, the business user needs only to change site content once, and it is seamlessly deployed to the web and all mobile channels. Geo-location is another critical input to making the online experience engaging and relevant for web visitors who are increasingly mobile. A mobile solution should enable use of device GPS data to deliver location-based content and services to mobile website visitors. 
Organizations can provide mobile site visitors with location-sensitive search results, location-based offers and recommendations, integration of maps and directions into site content, and much more – all critical for meeting the needs of those on the go.To hear more about how mobile is changing the game, check out our recent webcast with Ted Schadler, Vice President, Principal Analyst, Forrester, where he discussed why mobile is the new face of engagement, or learn more about how to extend your web presence to the mobile channel with Oracle WebCenter Sites and Oracle WebCenter Sites Mobility Server.

    Read the article

  • Move over DFS and Robocopy, here is SyncToy!

    - by andywe
    Ever since Windows 2000, I have always had the need to replicate data to multiple endpoints with the same content. Until DFS was introduced, the method of thinking was to either manually copy the data location by location, or to batch script it with xcopy and schedule a task. Even though this worked (and still does today), it was cumbersome, and intensive on the network, especially when dealing with larger amounts of data. Then along came robocopy, as an internal tool written by an enterprising programmer at Microsoft. We used it quite a bit, especially when we could not use DFS in the early days. It was received so well, it made it into the public realm. At least now we had the ability to determine what files had changed and only replicate those. Well, over time there has been an evolution of this idea. DFS is obviously the Windows enterprise-class service to do this, along with BranchCache... however you don’t always need or want the power of DFS, especially when it comes to small datacenter installations or remote offices. I have specific data sets that are on closed or restricted networks, that either have a security need for this, or are in remote countries where bandwidth is at a premium. For this, I use the latest evolution for one-off replication, named SyncToy. SyncToy is a tool from Microsoft, seemingly released in 2009, that wraps a nice GUI around setting up a paired set of folders (remember the mobile Briefcase from Windows 98?) and allows you a choice of synchronization methods: one way, or two way. Simply create a paired set of folders on the source and destination, choose your options for content, exclude any file types you don’t want to replicate, and click run. Scheduling is even easier. MS has included a wrapper for doing just this, so all you enter in your scheduled task is SyncToyCmd.exe, a –R as an argument, and the time schedule. No more complicated command lines or scripts.   I find this especially useful when I use MS backup to back up a system volume, but only want subsets of backup information of a data share, and ONLY when that dataset has changed, rather than relying on full and incremental backups. An example of this is my application installation master share. I back this up with SyncToy because I do not need multiple backup copies... one copy elsewhere suffices. At home, it is very useful for your pictures, videos, music, etc. The backup is online and ready to access, not waiting for you to restore a backup file, and there is no need to institute a domain simply to have DFS.   Do note there is a risk: if you accidentally delete a file and do not catch this before the next sync, then depending on your SyncToy settings, you can indeed lose that file as the destination updates, so due diligence applies. I make it a rule to sync mainly one way… I use my master share for making changes, and allow the schedule to follow suit. Any really important file I lock down as read only through file permissions so it cannot be deleted unless I intervene.   Check out the tool and have some fun! http://www.microsoft.com/en-us/download/details.aspx?DisplayLang=en&id=15155

    Read the article

  • Some PowerShell goodness

    - by KyleBurns
    Ever work somewhere where processes dump files into folders to maintain an archive?  Me too and Windows Explorer hates it.  Very often I find myself needing to organize these files into subfolders so that I can go after files without locking up Windows Explorer and my answer used to be to write a program in something like C# to do the job.  These programs will typically enumerate the files in a folder and move each file to a subdirectory named based on a datestamp.  The last such program I wrote had to use lower-level Win32 API calls to perform the enumeration because it appears the standard .Net calls make use of the same method of enumerating the directories that Windows Explorer chokes on when dealing with a large number of entries in a particular directory, so a simple task was accomplished with a lot of code. Of course, this little utility was just something I used to make my life easier and "not a production app", so it was in my local source folder and not source control when my hard drive died.  So... I was getting ready to re-create it and thought it might be a good idea to play with PowerShell a bit - something I had been wanting to do but had not yet met a requirement to make me do it.  The resulting script was amazingly succinct and even building the flexibility for parameterization and adding line breaks for readability was only about 25 lines long.  Here's the code with discussion following: param(     [Parameter(         Mandatory = $false,         Position = 0,         HelpMessage = "Root of the folders or share to archive.  Be sure to end with appropriate path separator"     )]     [String] $folderRoot="\\fileServer\pathToFolderWithLotsOfFiles\",       [Parameter(         Mandatory = $false,         Position = 1     )]     [int] $days = 1 ) dir $folderRoot|?{(!($_.PsIsContainer)) -and ((get-date) - $_.lastwritetime).totaldays -gt $days }|%{     [string]$year=$([string]$_.lastwritetime.year)     [string]$month=$_.lastwritetime.month     [string]$day=$_.lastwritetime.day     $dir=$folderRoot+$year+"\"+$month+"\"+$day     if(!(test-path $dir)){         new-item -type container $dir     }     Write-output $_     move-item $_.fullname $dir } The script starts by declaring two parameters.  The first parameter holds the path to the folder that I am going to be sorting into subdirectories.  The path separator is intended to be included in this argument because I didn't want to mess with determining whether this was local or UNC and picking the right separator in code, but this could be easily improved upon using Path.Combine since PowerShell has access to the full framework libraries.  The second parameter holds a minimum age in days for files to be removed from the root folder.  The script then pipes the dir command through a query to include only files (by excluding containers) and of those, only entries that meet the age requirement based on the last modified datestamp.  For each of those, the datestamp is used to construct a folder name in the format YYYY\MM\DD (if you're in an environment where even a day's worth of files need further divided, you could make this more granular) and the folder is created if it does not yet exist.  Finally, the file is moved into the directory. One of the things that was really cool about using PowerShell for this task is that the new-item command is smart enough to create the entire subdirectory structure with a single call.  
In previous code that I have written to do this kind of thing, I would have to test the entire tree leading down to the subfolder I want, leading to a lot of branching code that detracted from being able to quickly look at the code and understand the job it performs. Overall, I have to say I'm really pleased with what has been done making PowerShell powerful and useful.
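    As a side note, the C# version of this job no longer needs those lower-level Win32 calls either: .NET 4 added Directory.EnumerateFiles, which streams entries instead of buffering the whole listing. Here is a minimal sketch of my own mirroring the script above (the share path and one-day threshold are the same placeholders the script uses):

        using System;
        using System.IO;

        class ArchiveOldFiles
        {
            static void Main()
            {
                string root = @"\\fileServer\pathToFolderWithLotsOfFiles";
                TimeSpan minAge = TimeSpan.FromDays(1);

                // EnumerateFiles streams results, avoiding the huge-directory problem
                // that the older GetFiles-style calls (and Windows Explorer) choke on.
                foreach (string path in Directory.EnumerateFiles(root))
                {
                    DateTime modified = File.GetLastWriteTime(path);
                    if (DateTime.Now - modified <= minAge)
                        continue;

                    // Build the YYYY\MM\DD target folder, as the PowerShell script does.
                    string target = Path.Combine(root,
                        modified.Year.ToString(), modified.Month.ToString(), modified.Day.ToString());

                    // CreateDirectory builds the entire subtree in one call, like new-item.
                    Directory.CreateDirectory(target);
                    File.Move(path, Path.Combine(target, Path.GetFileName(path)));
                }
            }
        }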

    Read the article

  • Asynchrony in C# 5 (Part II)

    - by javarg
    This article is a continuation of the series on the asynchronous features included in the new Async CTP preview for the next versions of C# and VB. Check out Part I for more information. So, let’s continue with TPL Dataflow: Asynchronous functions / TPL Dataflow / Task-based asynchronous Pattern. Part II: TPL Dataflow Definition (quoting the Async CTP documentation): “TPL Dataflow (TDF) is a new .NET library for building concurrent applications. It promotes actor/agent-oriented designs through primitives for in-process message passing, dataflow, and pipelining. TDF builds upon the APIs and scheduling infrastructure provided by the Task Parallel Library (TPL) in .NET 4, and integrates with the language support for asynchrony provided by C#, Visual Basic, and F#.” This means: data manipulation processed asynchronously. “TPL Dataflow is focused on providing building blocks for message passing and parallelizing CPU- and I/O-intensive applications”. Data manipulation is another hot area when designing asynchronous and parallel applications: how do you sync data access in a parallel environment? how do you avoid concurrency issues? how do you notify when data is available? how do you control how much data is waiting to be consumed? etc.  Dataflow Blocks TDF provides data and action processing blocks. Imagine having preconfigured data processing pipelines to choose from, depending on the type of behavior you want. The most basic block is the BufferBlock<T>, which provides storage for some kind of data (instances of <T>). So, let’s review the data processing blocks available. Blocks are categorized into three groups: Buffering Blocks Executor Blocks Joining Blocks Think of them as electronic circuitry components :).. 1. BufferBlock<T>: it is a FIFO (First In First Out) queue. You can Post data to it and then Receive it synchronously or asynchronously. It synchronizes data consumption for only one receiver at a time (you can have many receivers but only one will actually process it). 2. BroadcastBlock<T>: the same FIFO queue for messages (instances of <T>) but it links the receiving event to all consumers (it makes the data available for consumption to N number of consumers). The developer can provide a function to make a copy of the data if necessary. 3. WriteOnceBlock<T>: it stores only one value and once it’s been set, it can never be replaced or overwritten again (immutable after being set). As with BroadcastBlock<T>, all consumers can obtain a copy of the value. 4. ActionBlock<TInput>: this executor block allows us to define an operation to be executed when posting data to the queue. Thus, we must pass in a delegate/lambda when creating the block. Posting data will result in an execution of the delegate for each item in the queue. You could also specify how many parallel executions to allow (degree of parallelism). 5. TransformBlock<TInput, TOutput>: this is an executor block designed to transform each input, which is why it defines an output type. It ensures messages are processed and delivered in order. 6. TransformManyBlock<TInput, TOutput>: similar to TransformBlock but produces one or more outputs from each input. 7. BatchBlock<T>: combines N single items into one batch item (it buffers and batches inputs). 8. JoinBlock<T1, T2, …>: it generates tuples from all inputs (it aggregates inputs). Inputs could be of any type you want (T1, T2, etc.). 9. BatchJoinBlock<T1, T2, …>: aggregates tuples of collections. 
It generates collections for each type of input and then creates a tuple to contain each collection (Tuple<IList<T1>, IList<T2>>). Next time I will show some examples of usage for each TDF block. * Images taken from Microsoft’s Async CTP documentation.
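    Since the usage examples are deferred to the next post, here is a small sketch of my own in the meantime (assuming a reference to the System.Threading.Tasks.Dataflow library that ships with the CTP): a TransformBlock feeding an ActionBlock, the simplest two-stage pipeline.

        using System;
        using System.Threading.Tasks.Dataflow;

        class TwoStagePipeline
        {
            static void Main()
            {
                // Executor block that transforms each input (int -> string), in order.
                var format = new TransformBlock<int, string>(n => "Message #" + n);

                // Executor block that consumes each transformed message.
                var print = new ActionBlock<string>(msg => Console.WriteLine(msg));

                // Link the two blocks so output flows from the transform to the action.
                format.LinkTo(print);

                for (int i = 0; i < 5; i++)
                    format.Post(i);                 // feed data into the head of the pipeline

                format.Complete();                  // no more input
                format.Completion.ContinueWith(t => print.Complete());  // cascade completion
                print.Completion.Wait();            // block until everything is processed
            }
        }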

    Read the article

  • Review: Windows 8 - Initial Experience

    - by Tim Murphy
    I originally started this post when I had the Windows 8 preview set up on a VirtualBox image.  I have since put the RTM bits on a Dell E6530 that is my new work laptop.  It isn’t a tablet so I am not getting the touch experience, but as a developer this makes the most sense for the moment. This is the first Windows OS that I have had to spend much time exploring to even get started.  The first thing I ran into was that when I clicked on the desktop icon I was lost.  Where is the Start menu? Where are my programs?  How do I get back to the Metro environment?  I finally tried hitting the Windows button and it popped back out to the Metro screen. Once I got past that I found that the look of the Metro interface is clean and well organized.  It should be familiar to anyone who is already using a Zune or Windows Phone 7.  In the Desktop, aside from the lack of the Start button to bring up programs, the desktop is just like the Windows 7 environment we are all used to.  I do have to say though that I don’t like popping out to the Metro screen to find a program.  I think installers for desktop programs like the ones developers usually work in will need to give an option for creating a desktop icon and pinning to the desktop task bar. One of the things I do really enjoy is having live tiles in the Metro environment.  It is a nice way of feeding my need for constant information.  The one drawback though is that the task bar at the bottom of my screen used to be where I got this information without leaving what I was working on.  It allowed me to see current temperatures and when there were messages waiting.  I have since found that these still work as expected in the Desktop, and Toast messages keep you up to date on what is going on in the Metro apps. Thankfully familiar functionality like Alt-Tab and Windows-Tab still works regardless of whether apps are in the Metro or the Desktop environment.  Add to this the ability to find any application on the Metro screen by simply typing and things get very comfortable. I also started exploring some of the apps.  If you want to see a ton of stats on your team at a glance check out the Sports app.  What games are coming up? Who are the leaders in a number of stats?  The Weather and Finance apps have good features as well and I am sure they will improve as users supply feedback. I have had to install Visual Studio 2010 side-by-side with VS2012 because the Windows Phone 7 SDK would only install on VS2010.  This isn’t a Windows 8 issue per se, but something that you need to be aware of if you are a developer moving to the new ecosystem. The overall experience is a joy despite a few hiccups.  For anyone moving to Windows 8 on a non-touch laptop or desktop I do suggest this list of keyboard shortcuts.  Enjoy. del.icio.us Tags: Windows 8,Win8,Metro,Review

    Read the article

  • Confused about modifying the sprint backlog during a sprint

    - by Maltiriel
    I've been reading a lot about scrum lately, and I've found what seem to me to be conflicting information about whether or not it's ok to change the sprint backlog during a sprint. The Wikipedia article on scrum says it's not ok, and various other articles say this as well. Also my Software Development professor taught the same thing during an overview of scrum. However, I read Scrum and XP from the Trenches and that describes a section for unplanned items on the taskboard. So then I looked up the Scrum Guide and it says that during the sprint "No changes are made that would affect the Sprint Goal" and in the discussion of the Sprint Goal "If the work turns out to be different than the Development Team expected, then they collaborate with the Product Owner to negotiate the scope of Sprint Backlog within the Sprint." It goes on to say in the discussion of the Sprint Backlog: The Sprint Backlog is a plan with enough detail that changes in progress can be understood in the Daily Scrum. The Development Team modifies Sprint Backlog throughout the Sprint, and the Sprint Backlog emerges during the Sprint. This emergence occurs as the Development Team works through the plan and learns more about the work needed to achieve the Sprint Goal. As new work is required, the Development Team adds it to the Sprint Backlog. As work is performed or completed, the estimated remaining work is updated. When elements of the plan are deemed unnecessary, they are removed. Only the Development Team can change its Sprint Backlog during a Sprint. The Sprint Backlog is a highly visible, real-time picture of the work that the Development Team plans to accomplish during the Sprint, and it belongs solely to the Development Team. So at this point I'm altogether confused. Thinking about it, it makes more sense to me to take the second approach. The individual, specific items in the backlog don't seem to me to be the most important thing, but rather the sprint goal, so not changing the sprint goal but being able to change the backlog makes sense. For instance if both the product owner and the team thought they were on the same page about a story, but as the sprint progressed they figured out there was a misunderstanding, it seems like it makes sense to change the tasks that make up that story accordingly. Or if there was some story or task that was forgotten about, but is required to reach the sprint goal, I would think it would be best to add the story or task to the backlog during the sprint. However, there are a lot of people who seem quite adamant that any change to the sprint backlog is not ok. Am I misunderstanding that position somehow? Are those folks defining the sprint backlog differently somehow? My understanding of the sprint backlog is that it consists of both the stories and the tasks they're broken down into. Anyway I would really appreciate input on this issue. I'm trying to figure out both what the idealistic scrum approach is to changing the sprint backlog during a sprint, and whether people who use scrum successfully for development allow changing the sprint backlog during a sprint.

    Read the article

  • MSDeploy doesn't deploy to remote server using MSBuild and Visual Studio 2010

    - by user317762
    I'm currently running Visual Studio Team System 2010 RC and I'm trying to get the Build Service set up to build my solution and deploy the 3 web applications in it. I've created a custom build configuration called Integration and I've set up the "IIS Web site/application name to use on the destination server" on the Package/Publish tab of the Properties for each of the web applications. In my Build Definition I've set the following arguments: /p:DeployOnBuild=True /p:DeployTarget=MSDeployPublish /p:MSDeployPublishMethod=InProc /p:MsDeployServiceUrl=http://my-server-name:8172/msdeploy.axd /p:EnablePackageProcessLoggingAndAssert=True However, when I run the build I get the following error, for all three web applications: Updating setAcl (RightContent). C:\Program Files\MSBuild\Microsoft\VisualStudio\v10.0\Web\Microsoft.Web.Publishing.targets(3481,5): error : Web deployment task failed. (Attempted to perform an unauthorized operation.) I don't think this is my actual problem though. This error is occurring after the following entry in the log: Updating setAcl. This is what's causing the error message, but it appears that MSDeploy is trying to deploy to the local IIS on the Build server, not the server I specified with the MsDeployServiceUrl parameter. After looking at the targets file at C:\Program Files\MSBuild\Microsoft\VisualStudio\v10.0\Web\Microsoft.Web.Publishing.targets, I added the EnablePackageProcessLoggingAndAssert, which adds extra logging. The log shows an empty string for the value of MsDeployServiceUrl. I also noticed in the target that MsDeployServiceUrl has a lowercase s, which is somewhat confusing because the task name MSDeployPublish has an uppercase S. I tried it using uppercase, then again using lowercase, but neither worked. A couple of other things to note: My build service is running as NETWORK SERVICE. The server I'm trying to deploy to is on another domain. I also tried adding /p:username=mydomain\myusername /p:password=mypassword to the MSBuild parameter list, but that didn't help. Does anyone know if I'm supplying the correct parameters? Or can someone provide me with the correct ones? Thanks

    Read the article

  • Localize WiX installer which uses the Firewall extension

    - by tronda
    I've got a WiX installer project which uses MSBuild to generate the MSI file. The WXS file includes the WiX firewall extension: xmlns:fire="http://schemas.microsoft.com/wix/FirewallExtension" I've defined two cultures in the MSBuild file with the following definition: <PropertyGroup> ... <Cultures>en-us;no-no</Cultures> </PropertyGroup> I've also added the translated resources: <ItemGroup> <EmbeddedResource Include="lang\Firewall_no-no.wxl" /> <EmbeddedResource Include="lang\WixUI_no-no.wxl" /> </ItemGroup> These represent translations to Norwegian for the Firewall extension and the WixUI extension. When I run the build it succeeds with the en-us part, but the no-no part fails with the following error messages: C:\delivery\Dev\wix30_public\src\ext\FirewallExtension\wixlib\FirewallExtension.wxs(19): error LGHT0102: The localization variable !(loc.WixSchedFirewallExceptionsInstall) is unknown. Please ensure the variable is defined. .... A couple of issues: I don't know where the C:\delivery directory comes from. I don't have such a directory. The localization variables referenced in the error message have been translated in the Firewall_no-no.wxl file. When I run MSBuild with more detailed information I see the following output right before the error message: Task "Light" Command: C:\Program Files (x86)\Windows Installer XML v3\bin\Light.exe -cultures:no-no -ext "C:\Program Files (x86)\Windows Installer XML v3\bin\WixUIExtension.dll" -ext "C:\Program Files (x86)\Windows Installer XML v3\bin\WixUtilExtension.dll" -ext "C:\Program Files (x86)\Windows Installer XML v3\bin\WixFirewallExtension.dll" -loc lang\Firewall_no-no.wxl -loc lang\WixUI_no-no.wxl -out F:\Projects\MyProd\MyProj\Installer\bin\Debug\no-no\MyInstaller.msi -pdbout F:\Projects\MyProd\MyProj\Installer\bin\Debug\no-no\MyInstaller.wixpdb obj\Debug\MyProj.wixobj As the details show, the MSBuild task ends up passing two -loc parameters to the Light executable. I'm not sure whether that is the reason for this problem. Any ideas on how to solve this?

    Read the article

  • Rewind request body stream

    - by Despertar
    I am re-implementing a request logger as OWIN middleware which logs the request URL and body of all incoming requests. I am able to read the body, but if I do, the body parameter in my controller is null. I'm guessing it's null because the stream position is at the end, so there is nothing left to read when it tries to deserialize the body. I had a similar issue in a previous version of Web API but was able to set the stream position back to 0. This particular stream throws a "This stream does not support seek operations" exception. In the most recent version of Web API 2.0 I could call Request.HttpContent.ReadAsStringAsync() inside my request logger, and the body would still arrive at the controller intact. How can I rewind the stream after reading it? Or how can I read the request body without consuming it? public class RequestLoggerMiddleware : OwinMiddleware { public RequestLoggerMiddleware(OwinMiddleware next) : base(next) { } public override Task Invoke(IOwinContext context) { return Task.Run(() => { string body = new StreamReader(context.Request.Body).ReadToEnd(); // log body context.Request.Body.Position = 0; // cannot set stream position back to 0 Console.WriteLine(context.Request.Body.CanSeek); // prints false this.Next.Invoke(context); }); } } public class SampleController : ApiController { public void Post(ModelClass body) { // body is now null if the middleware reads it } }
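    A minimal sketch of one common workaround, assuming the Microsoft.Owin (Katana) OwinMiddleware base class used above: buffer the incoming body into a seekable MemoryStream, log it, then hand the buffered copy to the rest of the pipeline so model binding can still read it. This is an illustration, not the asker's final code.

    using System.IO;
    using System.Threading.Tasks;
    using Microsoft.Owin;

    public class BufferingRequestLoggerMiddleware : OwinMiddleware
    {
        public BufferingRequestLoggerMiddleware(OwinMiddleware next) : base(next) { }

        public override async Task Invoke(IOwinContext context)
        {
            // Drain the original, non-seekable request stream into a buffer we control.
            var buffer = new MemoryStream();
            await context.Request.Body.CopyToAsync(buffer);
            buffer.Position = 0;

            // Log the URL and body (swap Console for a real logger).
            string body = new StreamReader(buffer).ReadToEnd();
            System.Console.WriteLine("{0} {1}", context.Request.Uri, body);

            // Rewind and substitute the buffered copy so downstream components
            // (including Web API model binding) can read the body again.
            buffer.Position = 0;
            context.Request.Body = buffer;

            await Next.Invoke(context);
        }
    }

    The key point is that the stream handed to the next component is seekable; the same idea works with a synchronous copy if the middleware cannot be async.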

    Read the article

  • WCF Ria Services Error : Load operation failed for query 'GetTranslationProgress'

    - by Manoj
    Hello, I am using the WCF RIA Services beta in my Silverlight application. I have a long-running task on the server which keeps writing its status to a database table. I have a "GetTranslationProgress" RIA Services query method which queries that state to get the status. This query method is called continuously at a regular interval of 5 seconds until a status of Finished or Error is retrieved. In some cases, if the task on the server runs for a long duration (2-3 minutes), I receive this error: Load operation failed for query 'GetTranslationProgress'. The server did not provide a meaningful reply; this might be caused by a contract mismatch, a premature session shutdown or an internal server error. at System.Windows.Ria.OperationBase.Complete(Exception error) at System.Windows.Ria.LoadOperation.Complete(Exception error) at System.Windows.Ria.DomainContext.CompleteLoad(IAsyncResult asyncResult) at System.Windows.Ria.DomainContext.<>c__DisplayClass17.<>b__13(Object ) The error occurs intermittently and I have no idea what could be causing it. I have checked most of the forums and sites but I have not been able to find any solution to this issue. Please help.
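    The intermittent failure itself generally has to be chased down on the server side (WCF tracing, binding timeouts), but a defensive polling loop on the client at least keeps a single failed poll from surfacing as an unhandled exception. The sketch below is an assumption-heavy illustration: TranslationDomainContext, GetTranslationProgressQuery and the TranslationProgress entity with its Status property are placeholder names for the generated client code, and the API shown follows the released RIA Services bits, which may differ slightly from the beta.

    using System;
    using System.Linq;
    using System.Windows.Threading;
    // LoadOperation<T> and DomainContext live in System.Windows.Ria in the beta bits.

    public class TranslationProgressMonitor
    {
        private readonly TranslationDomainContext _context = new TranslationDomainContext();
        private readonly DispatcherTimer _timer =
            new DispatcherTimer { Interval = TimeSpan.FromSeconds(5) };

        public void StartPolling()
        {
            _timer.Tick += (s, e) =>
                _context.Load(_context.GetTranslationProgressQuery(), OnProgressLoaded, null);
            _timer.Start();
        }

        private void OnProgressLoaded(LoadOperation<TranslationProgress> op)
        {
            if (op.HasError)
            {
                // Treat a failed poll as transient: mark it handled and let the next tick retry.
                op.MarkErrorAsHandled();
                return;
            }

            var progress = op.Entities.FirstOrDefault();
            if (progress != null && (progress.Status == "Finished" || progress.Status == "Error"))
            {
                _timer.Stop();
            }
        }
    }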

    Read the article

  • How to change ErrorMessage property of the DataAnnotation validation in MVC2.0

    - by Raj Aththanayake
    My task is to change the ErrorMessage property of the DataAnnotation validation attribute in MVC 2.0. For example, I should be able to pass an ID instead of the actual error message for the model property and use that ID to retrieve some content (the error message) from another service, e.g. a database, and display that error message in the view instead of the ID. In order to do this I need to set the DataAnnotation validation attribute's ErrorMessage property. [StringLength(2, ErrorMessage = "EmailContentID.")] [DataType(DataType.EmailAddress)] public string Email { get; set; } It seems like an easy task, just overriding DataAnnotationsModelValidatorProvider's protected override IEnumerable<ModelValidator> GetValidators(ModelMetadata metadata, ControllerContext context, IEnumerable<Attribute> attributes). However, it turns out to be complicated enough: a. MVC's DataAnnotationsModelValidator's ErrorMessage property is read-only, so I cannot set anything there. b. The System.ComponentModel.DataAnnotations ErrorMessage property (get and set) is already set in MVC's DataAnnotationsModelValidator, so we cannot set it again. If you try to set it, a "The property cannot be set more than once…" error message appears. public class CustomDataAnnotationProvider : DataAnnotationsModelValidatorProvider { protected override IEnumerable<ModelValidator> GetValidators(ModelMetadata metadata, ControllerContext context, IEnumerable<Attribute> attributes) { IEnumerable<ModelValidator> validators = base.GetValidators(metadata, context, attributes); foreach (ValidationAttribute validator in validators.OfType<ValidationAttribute>()) { validator.ErrorMessage = "Error string from DB"; } //...... } Can anyone please help me with this?
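    One workaround that is often suggested, sketched below, is to subclass the attribute itself instead of fighting the read-only ErrorMessage on the MVC validator: FormatErrorMessage is virtual on ValidationAttribute, so the stored ErrorMessage value can be treated as a lookup key and resolved at format time. MessageStore here is a hypothetical stand-in for whatever database or service supplies the real text.

    using System.ComponentModel.DataAnnotations;

    // Hypothetical lookup service; replace with the real database/service call.
    public static class MessageStore
    {
        public static string Lookup(string id)
        {
            return id == "EmailContentID" ? "{0} must be at most {1} characters long." : null;
        }
    }

    public class LocalizedStringLengthAttribute : StringLengthAttribute
    {
        public LocalizedStringLengthAttribute(int maximumLength) : base(maximumLength) { }

        public override string FormatErrorMessage(string name)
        {
            // ErrorMessage carries the ID (e.g. "EmailContentID"); resolve it lazily and
            // fall back to the attribute's normal behaviour if the ID is unknown.
            string template = MessageStore.Lookup(ErrorMessage);
            return template == null
                ? base.FormatErrorMessage(name)
                : string.Format(template, name, MaximumLength);
        }
    }

    // Usage on the model:
    // [LocalizedStringLength(2, ErrorMessage = "EmailContentID")]
    // public string Email { get; set; }

    Because MVC's DataAnnotationsModelValidator builds its error text by calling FormatErrorMessage on the attribute, the resolved string is what reaches the view, and no provider override is needed.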

    Read the article

  • Exchange Web Services, try to use ExchangeImpersonationType ...

    - by howmanytimes
    I am trying to use EWS; this is my first time using the ExchangeServiceBinding. The code I am using is below: _service = new ExchangeServiceBinding(); //_service.Credentials = new NetworkCredential(userName, userPassword, this.Domain); _service.Credentials = System.Net.CredentialCache.DefaultNetworkCredentials; _service.Url = this.ServiceURL; ExchangeImpersonationType ei = new ExchangeImpersonationType(); ConnectingSIDType sid = new ConnectingSIDType(); sid.PrimarySmtpAddress = this.ExchangeAccount; ei.ConnectingSID = sid; _service.ExchangeImpersonation = ei; The application is an ASP.NET 3.5 application trying to create a task using EWS. I have tried to use impersonation because I will not know the logon user's domain password, so I thought impersonation would be the best fit. Any thoughts on how I can utilize impersonation? Am I setting this up correctly? I get an error while trying to run my application. I also tried without impersonation just to see if I could create a task, but no luck either. Any help would be appreciated. Thanks.
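    For comparison, here is a minimal sketch, not the poster's code, of creating a task through the generated EWS proxy once impersonation is configured; the URL, mailbox address and subject are placeholders, and the namespaces depend on how the proxy was generated. Note that on Exchange 2007 the service account also needs the ms-Exch-EPI-Impersonation right on the Client Access server and ms-Exch-EPI-May-Impersonate on the target mailbox, otherwise the call fails with an access/unauthorized error.

    public static class EwsTaskSample
    {
        public static void CreateTask()
        {
            ExchangeServiceBinding service = new ExchangeServiceBinding();
            service.Credentials = System.Net.CredentialCache.DefaultNetworkCredentials;
            service.Url = "https://mail.example.com/EWS/Exchange.asmx";   // placeholder

            // Impersonate the mailbox the task should be created in.
            ConnectingSIDType sid = new ConnectingSIDType();
            sid.PrimarySmtpAddress = "user@example.com";                  // placeholder
            ExchangeImpersonationType impersonation = new ExchangeImpersonationType();
            impersonation.ConnectingSID = sid;
            service.ExchangeImpersonation = impersonation;

            // Build the task and save it to the impersonated user's Tasks folder.
            TaskType task = new TaskType();
            task.Subject = "Follow up";                                   // placeholder

            DistinguishedFolderIdType tasksFolder = new DistinguishedFolderIdType();
            tasksFolder.Id = DistinguishedFolderIdNameType.tasks;

            CreateItemType request = new CreateItemType();
            request.SavedItemFolderId = new TargetFolderIdType();
            request.SavedItemFolderId.Item = tasksFolder;
            request.Items = new NonEmptyArrayOfAllItemsType();
            request.Items.Items = new ItemType[] { task };

            CreateItemResponseType response = service.CreateItem(request);
            ResponseMessageType message = response.ResponseMessages.Items[0];
            if (message.ResponseClass != ResponseClassType.Success)
            {
                // Inspect message.MessageText to see whether this is a rights/impersonation issue.
            }
        }
    }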

    Read the article

  • Integrating PartCover.NET with NAnt

    - by davandries
    Hello, I'm trying to integrate PartCover.NET with NAnt and CruiseControl.NET. I can run the PartCover.NET browser without problems, but it does not work once I try to run it in an NAnt task (in my CCNET build). There must be an issue with my NAnt target, but I can't find it. Maybe someone has experienced the same issue in the past? <target name="CoverageUnitTest" description="Code coverage of unit tests"> <exec program="${PartCover.exe}"> <arg value="--target=${NUnit.console}" /> <arg value="--target-work-dir=${project.dir}\bin\${configuration}"/> <arg value="--target-args=${project}.dll" /> <arg value="--output=C:\partcover.xml" /> <arg value="--include=[*]*" /> </exec> </target> In CruiseControl, I got the following error message: [exec] Invalid option '--target C:\NUnit\bin\nunit-console.exe' Build Error: NAnt.Core.BuildException External Program Failed: C:\PartCover\PartCover.exe (return code was -1) in C:\default.build line: 20 col: 4 at NAnt.Core.Tasks.ExternalProgramBase.ExecuteTask() at NAnt.Core.Tasks.ExecTask.ExecuteTask() at NAnt.Core.Task.Execute() at NAnt.Core.Target.Execute() at NAnt.Core.Project.Execute(String targetName, Boolean forceDependencies) at NAnt.Core.Project.Execute() at NAnt.Core.Project.Run() Thanks! David

    Read the article

  • Trouble using ‘this.files’ in GruntJS Tasks

    - by whomba
    Background: So I'm new to writing grunt tasks, so hopefully this isn't a stupid question. Purpose of my grunt task: Given a file (HTML / JSP / PHP / etc.), search all the link / script tags, compare the href / src to a key, and replace it with the contents of the file. Proposed task configuration: myTask:{ index:{ files:{src:"test/testcode/index.html", dest:"test/testcode/index.html"}, css:[ { file: "/css/some/css/file.css", fileToReplace:"/target/css/file.css" }, { file: "/css/some/css/file2.css", fileToReplace:"/target/css/file2.css" } ], js:[ { file: "/css/some/css/file.css", fileToReplace:"/target/css/file.css" }, { file: "/css/some/css/file2.css", fileToReplace:"/target/css/file2.css" } ] } } Problem: Now, per my understanding, when dealing with files, you should reference the 'this.files' object because it handles a bunch of nice things for you, correct? Using IntelliJ and the debugger, I see the following when I watch the 'this.files' object: http://i.imgur.com/Gj6iANo.png What I would expect to see is src and dest being the same, not dest === 'src' and src === undefined. So, what is it that I am missing? Is there some documentation on 'this.files' I can read? I've tried setting the files attribute in the target to a number of other formats per Grunt's spec; none seem to work (for this script, ideally the src / dest would be the same, so the user would just have to enter it once, but I'm not even getting into that right now.) Thanks for the help

    Read the article

  • What are the reasons to use dos batch programs in Windows?

    - by DVK
    Question What would be a good (ideally, technical) reason to ever program some non-trivial task in the DOS batch language on a modern Windows system, as opposed to downloading either PowerShell or ActiveState Perl? To be more specific, I make the following two assumptions for the duration of this question: anyone technical enough to be able to write a medium-complexity batch script is technical enough to install either of the scripting interpreters. Neither of those two presents enough of a learning curve for basic batch replacement tasks that said curve would outweigh the pain of doing any remotely non-trivial task in batch. Notes "You need a batch program for autoexec.bat" is not a valid reason. Your autoexec.bat may consist of simply calling a non-batch script. If you disagree with either of my 2 assumptions above, that's fine, and I may be wrong. But my question is specifically "assuming those 2 assumptions are correct, what would be the reason to still stick with batch?" If it makes it easier to suspend disbelief (in case you disagree with me), add in a 3rd assumption that the question is limited to people who already possess at least some modicum of PowerShell or Perl experience. To reiterate: this is not meant to be a subjective question about how easy it is to learn PSh or ASPerl compared to doing advanced batch coding. That is a separate question that is too subjective to be bothered with in this post. Background: I used to do some fairly complicated batch programming back in the elder days, and I remember batch as one of the worst possible programming languages I have encountered. The idea for this question came after seeing a bunch of batch questions on SO, and trying to grok the answer to one of them out of sheer curiosity and giving up in pain after a minute, exclaiming mentally "why would anyone go through this pain instead of doing that in 1 line of Perl?" :) My own plausible answer I assume there may be an unlikely DOS-compatible system which has a DOS interpreter but no compatible PowerShell or Perl... I'm not aware of one, but it's not completely impossible.

    Read the article

< Previous Page | 125 126 127 128 129 130 131 132 133 134 135 136  | Next Page >