Search Results

Search found 25534 results on 1022 pages for 'write powershell'.


  • Spring to Java EE, Part Three - new tech article on otn/java

    - by Janice J. Heiss
    In a new article up on otn/java, Java EE expert David Heffelfinger continues his series exploring the relative strengths and weaknesses of Java EE and Spring. Here, he demonstrates how easy it is to develop the data layer of an application using Java EE, JPA, and the NetBeans IDE instead of the Spring Framework.

    In the first two parts of the series, he generated a complete Java EE application by using JavaServer Faces (JSF) 2.0, Enterprise JavaBeans (EJB) 3.1, and Java Persistence API (JPA) 2.0 from Spring’s Pet Clinic MySQL schema, thus showing how easy it is to develop an application whose functionality equaled that of the Spring sample application.

    In his new article, Heffelfinger tweaks the application to make it more user-friendly. From the article:

    “The generated application displays primary keys on some of the pages, and these keys are surrogate primary keys—meaning that they have no business value and are used strictly as a unique identifier—so there is no reason why they should be visible to the user. In addition, we will modify some of the generated labels to make them more user-friendly.”

    He concludes the article with a summary:

    “The Java EE version of the application is not a straight port of the Spring version. For example, the Java EE version enables us to create, update, and delete veterinarians as well as veterinary specialties, whereas the Spring version of the application enables us only to view veterinarians and specialties. Additionally, the Spring version has a single page for managing/viewing owners, pets, and visits, whereas the Java EE version of the application has separate pages for each of these entities. The other thing we should keep in mind is that we didn’t actually write a lot of the code and markup for the Java EE version of the application, because the bulk of it was generated by the NetBeans wizard.”

    Have a look at the complete article here.

    Read the article

  • Collision detection in 3D space

    - by dreta
    I've got to write what can be summed up as a complete 3D game from scratch this semester. Until now I have only programmed 2D games in my spare time; the transition doesn't seem tough, and the game is simple. The only issue I have is collision detection. The only thing I could find was AABBs, bounding spheres, or recommendations of various physics engines. I have to program a submarine that's going to be moving freely inside a cave system, and AFAIK I can't use physics libraries, so none of the above solves my problem. Until now I was using SAT for my collision detection. Are there any similar, great algorithms, but crafted for 3D collision? I'm not talking about octrees or other optimizations; I'm talking about direct collision detection of one set of 3D polygons with another set of 3D polygons. I thought about using SAT twice, projecting the mesh from the top and the side, but then it seems so hard to even divide 3D space into convex shapes. Also, that seems like far too much computation even with octrees. How do professionals do it? Could somebody shed some light?
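
    For what it's worth, SAT does carry over to 3D directly: for two convex polyhedra the candidate axes are the face normals of both shapes plus the cross products of each pair of edge directions, and the shapes are disjoint as soon as one axis separates the projected intervals. A hedged C# sketch of the interval test (Vector3, the vertex arrays and the helper names are placeholders, not taken from any particular engine):

        using System.Collections.Generic;
        using System.Numerics;

        static class Sat3D
        {
            // True if no separating axis exists among the candidate axes,
            // i.e. the two convex vertex sets overlap.
            public static bool Overlaps(Vector3[] vertsA, Vector3[] vertsB, IEnumerable<Vector3> candidateAxes)
            {
                foreach (Vector3 axis in candidateAxes)
                {
                    if (axis.LengthSquared() < 1e-6f) continue;   // skip near-zero cross products
                    float minA, maxA, minB, maxB;
                    Project(vertsA, axis, out minA, out maxA);
                    Project(vertsB, axis, out minB, out maxB);
                    if (maxA < minB || maxB < minA) return false; // found a separating axis
                }
                return true;
            }

            // Projects every vertex onto the axis and keeps the min/max of the interval.
            static void Project(Vector3[] verts, Vector3 axis, out float min, out float max)
            {
                min = max = Vector3.Dot(verts[0], axis);
                foreach (Vector3 v in verts)
                {
                    float d = Vector3.Dot(v, axis);
                    if (d < min) min = d;
                    if (d > max) max = d;
                }
            }
        }

    A concave cave can't be fed to this as one shape; the usual routes are decomposing it into convex pieces or running the same test between the submarine's convex hull and each nearby cave triangle (a triangle is itself convex, so SAT applies per triangle).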

    Read the article

  • Input signal out of range; Change settings to 1600 x 900

    - by Clayton
    I recently installed Ubuntu 12.04 onto my HP Pavilion, in an attempt to make the desktop dual-boot Windows 7 and Ubuntu. I managed to get down to the last step and finished the installation process. After it prompted me to remove what I used to install Ubuntu, I did so, removing my SanDisk 8GB flash drive, and allowed the system to reboot. As usual, the desktop booted with the HP image, with the options at the bottom (Boot Menu, System Recovery, etc). However, when it should have started up with Ubuntu (as I'm certain it should have done), I received the following error:

        Input signal out of range
        Change settings to 1600 x 900

    From the time I installed the operating system, back in late August, till now, I've been trying to figure out how I would go about fixing this issue. My mom is also starting to get frustrated with my not having resolved it, as it's the only desktop that has a printer installed. Is there any possible way to resolve this? To summarize the problem:
    - Successful boot
    - Screen brings up the error
    - Screen goes to standby
    - Nothing else is possible until the desktop is rebooted, which repeats the above three steps
    A few notes:
    - I did not back up my computer before I installed Ubuntu. I didn't have anything to write to, and basically just forgot to.
    - I don't have a Recovery Disk.
    - I don't have the Windows 7 disk that is supposed to come with the computer.
    - It has been narrowed down by a friend on Skype that the problem lies with the display, and that the vga= boot command does have something to do with fixing the problem.
    Thank you in advance for resolving this problem. I greatly appreciate it. ^^

    Read the article

  • Do you know the minimum builds to create on any branch?

    - by Martin Hinshelwood
    You should always have three builds on your team project. These should be set up and tested using an empty solution before you write any code at all.

    Figure: Three builds named in the format [TeamProject].[AreaPath]_[Branch].[Gate|CI|Nightly] for every branch.

    These builds should use the same XAML build workflow; however, you may set them up to run a different set of tests depending on the time it takes to run a full build.

    Gate – Only needs to run the smallest set of tests, but should run most if not all of the Unit Tests. This is run before developers are allowed to check in.
    CI – This should run all Unit Tests and all of the automated UI tests. It is run after a developer check-in.
    Nightly – The Nightly build should run all of the Unit Tests, all of the automated UI tests, and all of the Load and Performance tests. The nightly build is time-consuming and will run only once a night. Packaging of your product for testing the next day may be done at this stage as well.

    Figure: You can control what tests are run and what data is collected while they are running.

    Note: We do not run all the tests every time because of the time-consuming nature of running some tests, but ALL tests should be run overnight.
    Note: If you had a really large project with thousands of tests, including long-running Load tests, you may need to add a Weekly build to the mix.

    Figure: Bad example, you can’t tell what these builds do if they are in a larger list.
    Figure: Good example, you know exactly what project, branch and type of build these are for.

    Technorati Tags: SSW, SSW Rules, VS2010, VS ALM, Team Build 2010, Team Build

    Read the article

  • Creating an expandable, cross-platform compatible program "core".

    - by Thomas Clayson
    Hi there. Basically, the brief is relatively simple. We need to create a program core: an engine that will power all sorts of programs, with a large number of distinct potential applications and deployments. The core will be an analytics and algorithmic processor which will essentially take user-specific input and output scenarios based on the information it gets, whilst recording this information for reporting. It needs to be cross-platform compatible: something that can have platform-specific layers put on top which can interface with the core. It also needs to be expandable, for instance modular, with developers being able to write "add-ons" or "extensions" which can alter the function of the end program and can use the core to its full extent. (A good example of what I'm looking to create is a browser. It has its main core, the WebKit engine for instance, and then on top of this it has a platform-specific GUI and can also have add-ons and extensions which can change the behavior of the program.) Our problem is that the extensions need to interface directly with the main core and expand/alter that functionality, rather than the platform-specific "layer". So, given that I have no experience in this whatsoever (I have a PHP background and recently Objective-C), where should I start, and is there any knowledge/wisdom you can impart on me, please? Thanks for all the help and advice you can give me. :) If you need any more explanation just ask. At the moment it's in the very early stages of development, so we're just researching all possible routes of development. Thanks a lot
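
    Purely as an illustration of the usual shape of this (the interface, class and folder names below are invented, and the same idea maps onto whatever language the core ends up in): define the extension contract in the core itself and have the core discover and load implementations at runtime, so add-ons plug into the core rather than into a platform layer. A hedged C# sketch:

        using System;
        using System.IO;
        using System.Linq;
        using System.Reflection;

        // The extension contract lives in the core assembly, so add-ons depend
        // only on the core, never on a platform-specific layer.
        public interface ICoreExtension
        {
            string Name { get; }
            void Register(CoreEngine core);   // the extension hooks itself into core services here
        }

        public class CoreEngine
        {
            public void LoadExtensions(string pluginDirectory)
            {
                foreach (string dll in Directory.GetFiles(pluginDirectory, "*.dll"))
                {
                    Assembly assembly = Assembly.LoadFrom(dll);
                    var extensionTypes = assembly.GetTypes()
                        .Where(t => typeof(ICoreExtension).IsAssignableFrom(t) && !t.IsAbstract);
                    foreach (Type type in extensionTypes)
                    {
                        var extension = (ICoreExtension)Activator.CreateInstance(type);
                        extension.Register(this);   // extension alters or extends core behaviour
                    }
                }
            }
        }

    The platform-specific layers then talk to the core through its public API, while add-ons extend behaviour through the registration hook, which keeps both kinds of consumers pointed at the core rather than at each other.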

    Read the article

  • Mouse wheel speed, a permanent solution?

    - by Logan
    I would like to address an issue that has been around for a while now. The wireless mouse's wheel speed is abnormally fast on Ubuntu (as well as Mac OS X, as I have read), and the way to fix this temporarily is to unplug and re-plug the wireless adapter. A solution for this has been asked for in various forum topics on the Ubuntu forums and also Ask Ubuntu. However, the best solution so far is to re-plug the wireless adapter for the mouse, which fixes the mouse wheel speed until the next reboot. My question is, what can be done to make this permanent? Can a shell script be written, or firstly, what could be causing this? If you could give me some ideas on why this problem is occurring, I would happily write a shell script for it (I am thinking that if this is fixed by a simple re-plug of the adapter, maybe a shell script to disable the device and re-enable it, or something like that, could do the trick). I appreciate any discussion and ideas on the subject. Here are some already-discussed topics on the same subject that I've researched: Mouse wheel jumpy on scrolling; Mouse wheel scrolling too fast. There's a lot more than that on the net; all of them end with the re-plug solution.

    Read the article

  • How to find date/time used by Cassandra

    - by JDI Lloyd
    Earlier this morning I noticed that one of the nodes in our Cassandra cluster is writing logs an hour in the future, despite the date/time being correct on the OS. A couple of other nodes I checked via logs appear to be writing logs at the correct time. I now need to go through and check each node in our 80-node cluster and ensure Cassandra is running on the correct time. The problem is that some of the nodes don't write to the logs very often, as they aren't doing much. The question is: is there some form of tool/utility (e.g. nodetool) that can tell me the time that Cassandra is running on? All the systems' dates/times are correct, and an ntpdate cron job has been in place for a while. Servers are set to the Belize timezone to avoid DST changes, so it's nothing to do with that.

    Read the article

  • How to securely delete files stored on an SSD?

    - by Chris Neuroth
    From a (very long, but definitely worth reading) article on SSDs: When you delete a file in your OS, there is no reaction from either a hard drive or SSD. It isn’t until you overwrite the sector (on a hard drive) or page (on a SSD) that you actually lose the data. File recovery programs use this property to their advantage and that’s how they help you recover deleted files. The key distinction between HDDs and SSDs however is what happens when you overwrite a file. While a HDD can simply write the new data to the same sector, a SSD will allocate a new (or previously used) page for the overwritten data. The page that contains the now invalid data will simply be marked as invalid and at some point it’ll get erased. So, what would be the best way to securely erase files stored on an SSD? Overwriting with random data as we are used to from hard disks (e.g. using the "shred" utility) won't work unless you overwrite the WHOLE drive...
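
    Not from the article, just a hedged pointer: for individual files, overwriting in place generally can't be trusted on an SSD for exactly the reason quoted above, so the usual answers are whole-disk encryption from day one or erasing the whole device. For the latter, the drive's own ATA Secure Erase command tells the controller to reset every cell, including pages the OS can't address. Roughly, with hdparm on Linux (assuming /dev/sdX is the SSD, the drive is not in a "frozen" security state, and you accept that this wipes the entire device):

        # set a temporary ATA user password (required before a secure erase can be issued)
        hdparm --user-master u --security-set-pass Eins /dev/sdX
        # issue the erase; the controller resets all NAND cells
        hdparm --user-master u --security-erase Eins /dev/sdX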

    Read the article

  • Set up SSL/HTTPS in zend application via .htaccess

    - by davykiash
    I have been battling with .htaccess rules to get my SSL setup working right for the past few days. I get a "requested URL not found" error whenever I try to access any request that does not go through the index controller. For example, this URL works fine if I enter it manually: https://www.example.com/index.php/auth/register However, my application has been built in such a way that the URL should be https://www.example.com/auth/register and that gives the "requested URL not found" error. My other URLs, such as https://www.example.com/index/faq https://www.example.com/index/blog https://www.example.com/index/terms work just fine. What rule do I need to write in my .htaccess to get the URL https://www.example.com/auth/register working? My .htaccess file looks like this:

        RewriteEngine On
        RewriteCond %{HTTPS} off
        RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [L]
        RewriteCond %{REQUEST_FILENAME} -s [OR]
        RewriteCond %{REQUEST_FILENAME} -l [OR]
        RewriteCond %{REQUEST_FILENAME} -d
        RewriteRule ^.*$ - [NC,L]
        RewriteRule ^.*$ index.php [NC,L]

    I posted an almost similar question on Stack Overflow.

    Read the article

  • Send encrypted mail using GPG by command-line?

    - by Mohammad AL-Rawabdeh
    A few days ago I asked about how I can secure email, and many people advised me to use a PGP tool; I read about it and now use it. Now I want to write a batch file to send encrypted email with attachments. I know how to generate a key, exchange keys with the other side, and encrypt email with PGP mail, but I still don't know how to integrate the PGP tool with my mail and how to send the encrypted email. In other words, how can I send email encrypted with the PGP tool to the other side from the command line (batch file)?
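
    A hedged sketch of the encryption half (the recipient address and file names below are placeholders): with GnuPG the encryption step is a single command that drops straight into a batch file. The resulting .gpg file can then be handed to whatever command-line mailer you already have (blat on Windows or mailx on Linux are common choices; their send options depend on your setup, so check their documentation for that step).

        gpg --batch --yes --encrypt --recipient recipient@example.com --output report.txt.gpg report.txt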

    Read the article

  • MacOS X 10.6 Portable Home Directory sync fails due to FileSync agent crashing

    - by tegbains
    On one of our cleanly installed Mac Pro machines running Mac OS X 10.6.6, connected to our Mac OS X 10.6.6 Server, syncing data using Portable Home Directories fails. It seems to be due to the FileSync agent crashing during the home sync. We get -41 and -8062 errors, which we suspect indicate that there is too much data or that the FileSync agent can't read the files. The user is the owner of the files and can read/write to all of them.

        < Logout 0:: [11/02/04 13:10:42.751] Error -41 copying /Volumes/RCAUsers/earlpeng/Library/Mail/Mailboxes/email from old imac./Attachments/12081/2.2. (source = NO)
        < Logout 0:: [11/02/04 13:10:42.758] Error -8062 copying /Volumes/RCAUsers/earlpeng/Library/Mail/Mailboxes/email from old imac./Attachments/12081/2.2/[email protected]. (source = NO)
        < Logout 1:: [11/02/04 13:10:42.758] -[DeepCopyContext deepCopyError:sourceError:sourceRef:]: error = -8062, wasSource = NO: return shouldContinue = NO

    Read the article

  • Trying to script rsync using pam_exec

    - by Ricky-Rose
    I'm trying to write a bash script that will execute rsync when called by pam_exec. I've tried a couple of different ways, and I'm not sure what I'm doing wrong. When I try to run the script at login by adding

        session optional pam_exec.so /usr/bin/local/sync.sh

    to my sshd file, it gives me an exit code of 12. If I log in and then manually run my script, it allows me to connect to the remote server, and it lists my files, but it doesn't actually sync anything. I have tried the code below using both $USER and $PAM_USER. $PAM_USER doesn't work at all.

        #!/bin/sh
        rsync -azv -e ssh $USER@remote_server:/home/html/$USER/ /home/html/$USER
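
    A hedged note based on the pam_exec(8) documentation rather than on the post itself: pam_exec exports PAM items such as PAM_USER and PAM_TYPE into the environment of the script it launches, but those variables are empty when you run the script by hand from a shell, which could explain why $PAM_USER appears not to work at all. A sketch that logs what the script actually sees when PAM invokes it (log path and host are illustrative):

        #!/bin/sh
        # PAM_USER/PAM_TYPE are only set when pam_exec runs this; they are empty in a manual test.
        exec >> /tmp/pam_sync.log 2>&1
        echo "running as $(id -un), PAM_USER='$PAM_USER', PAM_TYPE='$PAM_TYPE'"
        # the script runs as whatever account the PAM stack uses (often root),
        # so that account needs working ssh keys/known_hosts for the remote server
        rsync -az -e ssh "$PAM_USER@remote_server:/home/html/$PAM_USER/" "/home/html/$PAM_USER"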

    Read the article

  • Virtual Desktop Provisioning - Vmware View 5.2 Maintenance Questions

    - by Lee J. DeAngelis
    Currently running an environment of about 400 VMware View 5.2 virtual desktops. The environment runs pretty efficiently, but we sometimes run into problems with certain pools. Just recently we had a pool that was causing high write latency when users logged in. It just happened all of a sudden and had been working fine for weeks. On a hunch we completely broke down the pool and re-provisioned it from a new image. This corrected the problem. In fact, every real issue we've had so far was fixed by a recompose or a complete breakdown and re-provisioning of one pool or another. Our environment consists of Cisco UCS and NetApp 3240s using flashcache, running VMware View 5.2. My questions are: What are some maintenance best practices other VDI admins are using? How often are you recomposing? Rebalancing? Re-provisioning? How long should you keep base image snapshots around?

    Read the article

  • Best practices for using namespaces in C++.

    - by Dima
    I read Uncle Bob's Clean Code a few months ago, and it has had a profound impact on the way I write code. Even if it seemed like he was repeating things that every programmer should know, putting them all together and putting them into practice does result in much cleaner code. In particular, I found breaking up large functions into many tiny functions, and breaking up large classes into many tiny classes, to be incredibly useful. Now for the question. The book's examples are all in Java, while I have been working in C++ for the past several years. How would the ideas in Clean Code extend to the use of namespaces, which do not exist in Java? (Yes, I know about Java packages, but it is not really the same.) Does it make sense to apply the idea of creating many tiny entities, each with a clearly defined responsibility, to namespaces? Should a small group of related classes always be wrapped in a namespace? Is this the way to manage the complexity of having lots of tiny classes, or would the cost of managing lots of namespaces be prohibitive?

    Read the article

  • Goal Tracking data seems to be inaccurate?

    - by Khuram Malik
    I set up some Goal Tracking about one week ago. I had multiple goals in one set. The goal itself was the "send" button being pressed on the callback form (I did that by pushing a pageview to Google Analytics every time the send button is pressed). For each goal, I listed the first step as a required step. So, for example, the ILR page was step 1 and set as required, and the goal was "/CallbackFormFilled". Looking at the stats a week later, I'm getting some very inflated numbers, especially when comparing them to my manually filled Excel spreadsheet, and I'm struggling to understand the cause of this behaviour. I'm unable to attach screenshots unfortunately, since my StackExchange account for this site is brand new. My own thoughts: maybe it's because I have set up multiple goals with the same end goal URL, but I thought that was a valid setup since I want to track multiple routes, so to speak(?). I've disabled all other goals for now to confirm this, but I'm waiting for stats to come in as I write this. I also wonder if the contact form I'm using in WordPress is causing a problem, but I've simply added one JavaScript line on the send button that pushes a pageview, so I'm not sure if that should cause an issue. Here is a link to setting up analytics on this contact form plugin in WordPress for reference (see the JavaScript action hook section): http://ideasilo.wordpress.com/2009/05/31/contact-form-7-1-10/

    Read the article

  • vhost.conf with plesk makes infinite loop

    - by user134598
    So I'm trying to make rewrite rules for my just-migrated site, and now we're using Plesk (unfortunately, in my opinion). In order to make those rewrites I'm using the vhost.conf file in the mydomain/conf folder, and I execute:

        /usr/local/psa/admin/sbin/websrvmng -u --vhost-name=mydomain.org

    so that my file is included in the httpd configuration. However, no matter what I write in my vhost.conf file, it makes my site go into an infinite loop whenever I try to load a URL that's not just the domain. Example: mydomain.org works just fine. mydomain.org/event/nameofevent will try endlessly to load, and eventually my browser will detect the infinite loop. I thought I was writing something incorrectly in my vhost.conf file, but I even tried it with the file empty (not a single line). It still tries to load endlessly. Can anybody hint at whether I'm skipping a step (like some activation that should be done beforehand or something)? Thanks in advance.

    Read the article

  • An Oracle Intern's Story by Samarth Varshney

    - by user769227
    I have written a short write-up about my experience at Oracle and am attaching some pics along with it: I joined Oracle on 5th January 2011 as part of my internship program in BITS Pilani Goa Campus. In the short period of six months, I had the most beautiful and interesting time of my life. It was fun to work in Oracle, thanks to the whole team. I had an excellent manager, simple and sophisticated, who gave me the utmost enthusiasm to work. I gained a lot of knowledge during my internship, thanks to my colleagues. They were very helpful and motivated us (interns) in every possible way. In the initial stages of work, in which you know almost nothing, they helped me gain knowledge at a rapid pace. Thanks to the vast database of study material at the Oracle site, I could start on my project in a very short time. For me, the time flew like anything and made the 6 months look like a few days. It was probably due to the team that the work was so much fun. We had our deadlines but had full freedom as to how to work and when to work. I don't remember a single instance in which I was working and not listening to songs. I mean, it will always be a time to remember. I hope to join this company and make this time last forever.  Samarth

    Read the article

  • Org-mode lags in highlighting source

    - by quanticle
    I'm using org-mode to maintain my programming notes. This means I have lots of source code blocks, as follows:

        #+begin_src <language name>
        <code>
        #+end_src

    One thing I've noticed is that when I write the #+end_src, emacs doesn't color the source code as such. Yet, if I quit emacs and reopen the notes file (or force a refresh with the Org-Refresh/Reload-Refresh setup current buffer menu entry), the source is colored grey if I'm using the GUI, or green if I'm using emacs in the terminal. Is this an inherent limitation of emacs, or am I doing something wrong in setting up my code blocks that's preventing emacs from going back and recoloring the source code that I've entered?

    Read the article

  • Debugging Windows Service Timeout Error 1053

    - by Joe Mayo
    If you ever receive an Error 1053 for a timeout when starting a Windows Service you've written, the underlying problem might not have anything to do with a timeout. Here's the error so you can compare it to what you're seeing:

        ---------------------------
        Services
        ---------------------------
        Windows could not start the Service1 service on Local Computer.
        Error 1053: The service did not respond to the start or control request in a timely fashion.
        ---------------------------
        OK
        ---------------------------

    Searching the Web for a solution to this error in your Windows Service will result in a lot of wasted time and advice that won't help. Sometimes you might get lucky if your problem is exactly the same as someone else's, but this isn't always the case. One of the solutions you'll see has to do with a known error on Windows Server 2003 that's fixed by a patch to the .NET Framework 1.1, which won't work. As I write this blog post, I'm using the .NET Framework 4.0, which is a tad bit past that timeframe. Most likely, the basic problem is that your service is throwing an exception that you aren't handling. To debug this, wrap your service initialization code in a try/catch block and log the exception, as shown below:

        using System;
        using System.Diagnostics;
        using System.ServiceProcess;

        namespace WindowsService
        {
            static class Program
            {
                static void Main()
                {
                    try
                    {
                        ServiceBase[] ServicesToRun;
                        ServicesToRun = new ServiceBase[] { new Service1() };
                        ServiceBase.Run(ServicesToRun);
                    }
                    catch (Exception ex)
                    {
                        EventLog.WriteEntry("Application", ex.ToString(), EventLogEntryType.Error);
                    }
                }
            }
        }

    After you uninstall the old service, redeploy the service with modifications, and receive the same error message as above, look in the Event Viewer Application log. You'll see what the real problem is, which is the underlying reason for the timeout. Joe

    Read the article

  • Java Applet or Unity3D for Cross-Platform 3D Surveying App

    - by Jake M
    Do you think a Java Applet or a Unity3D application is the best option to make a cross-browser 3D web app? I intend to make a web application that displays 3D environments that can be navigated by dragging (with a finger or mouse, depending on the platform). The web app will render 3D environments of development sites, including contours, water pipeline locations, buildings, etc. The application must work on Windows Desktop, Android, iOS and Windows Phone. So this is why I am tending towards a web app as opposed to a cross-platform smartphone library (like MoSync or Marmalade). The 3D environments will be navigable (by dragging around) and contain simple (not detailed) 3D objects like buildings, mountains, pipelines, etc. One thing I know is that WebGL is out because it doesn't work on IE and has limited support on smartphones (am I correct to completely disregard WebGL?). Will future smartphone browsers continue to support Java Applets? Also, is it really true that I can write ONE application/game in Unity3D and simply compile it to run on Windows, Mac, Xbox 360, PlayStation 3, Wii, iPad, iPhone and Android? Would you suggest the Unity3D application path or the Unity3D Web Player path? Concerning Unity3D, there's one thing I am unsure about: do all Unity3D features work on iOS and Android?

    Read the article

  • Real world pitfalls of introducing F# into a large codebase and engineering team

    - by nganju
    I'm CTO of a software firm with a large existing codebase (all C#) and a sizable engineering team. I can see how certain parts of the code would be far easier to write in F#, resulting in faster development time, fewer bugs, easier parallel implementations, etc., basically overall productivity gains for my team. However, I can also see several productivity pitfalls of introducing F#, namely:
    1) Everyone has to learn F#, and it's not as trivial as switching from, say, Java to C#. Team members that have not learned F# will be unable to work on F# parts of the codebase.
    2) The pool of hireable F# programmers, as of now (Dec 2010), is non-existent. Search various software engineer resume databases for "F#": way less than 1% of resumes contain the keyword.
    3) Community support as of now (Dec 2010) is less available. You can google almost any problem in C# and find someone that has already dealt with it; not so with F#. Third-party tool support (NUnit, ReSharper, etc.) is also sketchy.
    I realize that this is a bit of a Catch-22, i.e. if people like me don't use F# then the community and tools will never materialize, etc. But I've got a company to run, and I can be cutting edge but not bleeding edge. Any other pitfalls I'm not considering? Or does anyone care to rebut the pitfalls I've mentioned? I think this is an important discussion and would love to hear your counter-arguments in this public forum, which may do a lot to increase F# adoption by industry. Thanks.

    Read the article

  • simple script to backup PostgreSQL database

    - by Mick
    Hello, I wrote a simple batch script to back up PostgreSQL databases, but I have one strange problem: can the pg_dump command specify a password? Here is the batch script:

        REM script to backup PostgreSQL databases
        @ECHO off
        FOR /f "tokens=1-4 delims=/ " %%i IN ("%date%") DO (
            SET dow=%%i
            SET month=%%j
            SET day=%%k
            SET year=%%l
        )
        SET datestr=%month%_%day%_%year%
        SET db1=opennms
        SET db2=postgres
        SET db3=sr_preproduction
        REM SET db4=sr_production
        ECHO datestr is %datestr%
        SET BACKUP_FILE1=D:\%db1%_%datestr%.sql
        SET FILENAME1=%db1%_%datestr%.sql
        SET BACKUP_FILE2=D:\%db2%_%datestr%.sql
        SET FILENAME2=%db2%_%datestr%.sql
        SET BACKUP_FILE3=D:\%db3%_%datestr%.sql
        SET FILENAME3=%db3%_%datestr%.sql
        SET BACKUP_FILE4=D:\%db4%_%datestr%.sql
        SET FILENAME4=%db4%_%datestr%.sql
        ECHO Backup file name is %FILENAME1% , %FILENAME2% , %FILENAME3% , %FILENAME4%
        ECHO off
        pg_dump -U postgres -h localhost -p 5432 %db1% > %BACKUP_FILE1%
        pg_dump -U postgres -h localhost -p 5432 %db2% > %BACKUP_FILE2%
        pg_dump -U postgres -h localhost -p 5432 %db3% > %BACKUP_FILE3%
        REM pg_dump -U postgres -h localhost -p 5432 %db4% > %BACKUP_FILE4%
        ECHO DONE !

    Please give me advice. Regards, Mick
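
    A hedged note, not part of the original post: pg_dump has no option to pass the password itself on the command line (-W only forces a prompt), but it does honour the PGPASSWORD environment variable and a pgpass password file (on Windows usually %APPDATA%\postgresql\pgpass.conf). So one option, with the obvious caveat that the password then sits in plain text in the script, is something like:

        REM illustrative only - use your real password, or prefer pgpass.conf to keep it out of the script
        SET PGPASSWORD=your_password_here
        pg_dump -U postgres -h localhost -p 5432 %db1% > %BACKUP_FILE1%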

    Read the article

  • Mount /tmp in /mnt on EC2

    - by Claudio Poli
    I was wondering what is the best way to mount the /tmp endpoint in the ephemeral storage /mnt on an EC2 instance and give the ubuntu user default write permissions. Some suggest editing /etc/rc.local this way:

        mkdir -p /mnt/tmp && mount --bind -o nobootwait /mnt/tmp /tmp

    However, that doesn't work for me. I tried editing the fstab:

        /dev/xvdb /tmp auto defaults,nobootwait,comment=cloudconfig 0 2

    and giving it a umask=0777, however it doesn't work because of cloudconfig. I'm using Ubuntu 12.04.
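
    A hedged sketch, assuming the goal is simply a world-writable /tmp that lives on the ephemeral disk: create the target directory, give it the sticky, world-writable mode a normal /tmp has (that is what lets the ubuntu user write to it), then bind-mount it. As far as I know, nobootwait is an fstab/mountall option rather than something mount --bind understands, so it is dropped here. In /etc/rc.local, before the exit 0 line:

        mkdir -p /mnt/tmp
        chmod 1777 /mnt/tmp          # sticky + world-writable, like a regular /tmp
        mount --bind /mnt/tmp /tmp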

    Read the article

  • When someone deletes a shared data source in SSRS

    - by Rob Farley
    SQL Server Reporting Services plays nicely. You can have things in the catalogue that get shared. You can have Reports that have Links, Datasets that can be used across different reports, and Data Sources that can be used in a variety of ways too. So if you find that someone has deleted a shared data source, you potentially have a bit of a horror story going on. And this works for this month’s T-SQL Tuesday theme, hosted by Nick Haslam, who wants to hear about horror stories. I don’t write about LobsterPot client horror stories, so I’m writing about a situation that a fellow MVP friend asked me about recently instead. The best thing to do is to grab a recent backup of the ReportServer database, restore it somewhere, and figure out what’s changed. But of course, this isn’t always possible. And it’s much nicer to help someone with this kind of thing, rather than to be trying to fix it yourself when you’ve just deleted the wrong data source. Unfortunately, it lets you delete data sources, without trying to scream that the data source is shared across over 400 reports in over 100 folders, as was the case for my friend’s colleague. So, suddenly there’s a big problem – lots of reports are failing, and the time to turn it around is small. You probably know which data source has been deleted, but getting the shared data source back isn’t the hard part (that’s just a connection string really). The nasty bit is all the re-mapping, to get those 400 reports working again. I know from exploring this kind of stuff in the past that the ReportServer database (using its default name) has a table called dbo.Catalog to represent the catalogue, and that Reports are stored here. However, the information about what data sources these deployed reports are configured to use is stored in a different table, dbo.DataSource. You could be forgiven for thinking that shared data sources would live in this table, but they don’t – they’re catalogue items just like the reports. Let’s have a look at the structure of these two tables (although if you’re reading this because you have a disaster, feel free to skim past). Frustratingly, there doesn’t seem to be a Books Online page for this information, sorry about that. I’m also not going to look at all the columns, just ones that I find interesting enough to mention, and that are related to the problem at hand. These fields are consistent all the way through to SQL Server 2012 – there doesn’t seem to have been any changes here for quite a while. dbo.Catalog The Primary Key is ItemID. It’s a uniqueidentifier. I’m not going to comment any more on that. A minor nice point about using GUIDs in unfamiliar databases is that you can more easily figure out what’s what. But foreign keys are for that too… Path, Name and ParentID tell you where in the folder structure the item lives. Path isn’t actually required – you could’ve done recursive queries to get there. But as that would be quite painful, I’m more than happy for the Path column to be there. Path contains the Name as well, incidentally. Type tells you what kind of item it is. Some examples are 1 for a folder and 2 a report. 4 is linked reports, 5 is a data source, 6 is a report model. I forget the others for now (but feel free to put a comment giving the full list if you know it). Content is an image field, remembering that image doesn’t necessarily store images – these days we’d rather use varbinary(max), but even in SQL Server 2012, this field is still image. 
    It stores the actual item definition in binary form, whether it’s actually an image, a report, whatever. LinkSourceID is used for Linked Reports, and has a self-referencing foreign key (allowing NULL, of course) back to ItemID. Parameter is an ntext field containing XML for the parameters of the report. Not sure why this couldn’t be a separate table, but I guess that’s just the way it goes. This field gets changed when the default parameters get changed in Report Manager. There is nothing in dbo.Catalog that describes the actual data sources that the report uses. The default data sources would be part of the Content field, as they are defined in the RDL, but when you deploy reports, you typically choose to NOT replace the data sources. Anyway, they’re not in this table. Maybe it was already considered a bit wide to throw in another ntext field, I’m not sure. They’re in dbo.DataSource instead. dbo.DataSource The Primary key is DSID. Yes it’s a uniqueidentifier... ItemID is a foreign key reference back to dbo.Catalog Fields such as ConnectionString, Prompt, UserName and Password do what they say on the tin, storing information about how to connect to the particular source in question. Link is a uniqueidentifier, which refers back to dbo.Catalog. This is used when a data source within a report refers back to a shared data source, rather than embedding the connection information itself. You’d think this should be enforced by foreign key, but it’s not. It does allow NULLs though. Flags this is an int, and I’ll come back to this. When a Data Source gets deleted out of dbo.Catalog, you might assume that it would be disallowed if there are references to it from dbo.DataSource. Well, you’d be wrong. And not because of the lack of a foreign key either. Deleting anything from the catalogue is done by calling a stored procedure called dbo.DeleteObject. You can look at the definition in there – it feels very much like the kind of Delete stored procedures that many people write, the kind of thing that means they don’t need to worry about allowing cascading deletes with foreign keys – because the stored procedure does the lot. Except that it doesn’t quite do that. If it deleted everything on a cascading delete, we’d’ve lost all the data sources as configured in dbo.DataSource, and that would be bad. This is fine if the ItemID from dbo.DataSource hooks in – if the report is being deleted. But if a shared data source is being deleted, you don’t want to lose the existence of the data source from the report. So it sets it to NULL, and it marks it as invalid. We see this code in that stored procedure.

        UPDATE [DataSource]
           SET [Flags] = [Flags] & 0x7FFFFFFD, -- broken link
               [Link] = NULL
        FROM [Catalog] AS C
             INNER JOIN [DataSource] AS DS ON C.[ItemID] = DS.[Link]
        WHERE (C.Path = @Path OR C.Path LIKE @Prefix ESCAPE '*')

    Unfortunately there’s no semi-colon on the end (but I’d rather they fix the ntext and image types first), and don’t get me started about using the table name in the UPDATE clause (it should use the alias DS). But there is a nice comment about what’s going on with the Flags field. What I’d LIKE it to do would be to set the connection information to a report-embedded copy of the connection information that’s in the shared data source, the one that’s about to be deleted. I understand that this would cause someone to lose the benefit of having the data sources configured in a central point, but I’d say that’s probably still slightly better than LOSING THE INFORMATION COMPLETELY.
    Sorry, rant over. I should log a Connect item – I’ll put that on my todo list. So it sets the Link field to NULL, and marks the Flags to tell you they’re broken. So this is your clue to fixing it. A bitwise AND with 0x7FFFFFFD is basically stripping out the ‘2’ bit from a number. So numbers like 2, 3, 6, 7, 10, 11, etc, whose binary representation ends in either 11 or 10, get turned into 0, 1, 4, 5, 8, 9, etc. We can test for it using a WHERE clause that matches the SET clause we’ve just used. I’d also recommend checking for Link being NULL and also having no ConnectionString. And join back to dbo.Catalog to get the path (including the name) of the broken reports – in case you get a surprise from a different data source being broken in the past.

        SELECT c.Path, ds.Name
        FROM dbo.[DataSource] AS ds
        JOIN dbo.[Catalog] AS c ON c.ItemID = ds.ItemID
        WHERE ds.[Flags] = ds.[Flags] & 0x7FFFFFFD
        AND ds.[Link] IS NULL
        AND ds.[ConnectionString] IS NULL;

    When I just ran this on my own machine, having deleted a data source to check my code, I noticed a Report Model in the list as well – so if you had thought it was just going to be reports that were broken, you’d be forgetting something. So to fix those reports, get your new data source created in the catalogue, and then find its ItemID by querying Catalog, using Path and Name to find it. And then use this value to fix them up. To fix the Flags field, just add 2. I prefer to use bitwise OR, which should do the same. Use the OUTPUT clause to get a copy of the DSIDs of the ones you’re changing, just in case you need to revert something later after testing (doing it all in a transaction won’t help, because you’ll just lock out the table, stopping you from testing anything).

        UPDATE ds
        SET [Flags] = [Flags] | 2,
            [Link] = '3AE31CBA-BDB4-4FD1-94F4-580B7FAB939D' /*Insert your own GUID*/
        OUTPUT deleted.Name, deleted.DSID, deleted.ItemID, deleted.Flags
        FROM dbo.[DataSource] AS ds
        JOIN dbo.[Catalog] AS c ON c.ItemID = ds.ItemID
        WHERE ds.[Flags] = ds.[Flags] & 0x7FFFFFFD
        AND ds.[Link] IS NULL
        AND ds.[ConnectionString] IS NULL;

    But please be careful. Your mileage may vary. And there’s no reason why 400-odd broken reports need to be quite the nightmare they could be. Really, it should be less than five minutes. @rob_farley

    Read the article

  • Design guideline for saving big byte stream in c# [migrated]

    - by Praveen
    I have an application where I am receiving a big byte array very quickly, around every 50 milliseconds. The byte array contains some information like the file name etc. The data (byte array) may come from several sources. Each time I receive the data, I have to find the file name and save the data to that file name. I need some guidelines on how I should design it so that it works efficiently. Following is my code...

        public class DataSaver
        {
            private static Dictionary<string, FileStream> _dictFileStream = new Dictionary<string, FileStream>();

            public static void SaveData(byte[] byteArray)
            {
                string fileName = GetFileNameFromArray(byteArray);
                FileStream fs = GetFileStream(fileName);
                fs.Write(byteArray, 0, byteArray.Length);
            }

            private static FileStream GetFileStream(string fileName)
            {
                FileStream fs;
                bool hasStream = _dictFileStream.TryGetValue(fileName, out fs);
                if (!hasStream)
                {
                    fs = new FileStream(fileName, FileMode.Append);
                    _dictFileStream.Add(fileName, fs);
                }
                return fs;
            }

            public static void CloseSaver()
            {
                foreach (var key in _dictFileStream.Keys)
                {
                    _dictFileStream[key].Close();
                }
            }
        }

    How can I improve this code? I need to create a thread maybe to do the saving.
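
    One possible direction, offered as a hedged sketch rather than a finished design (class and helper names are made up): queue each incoming array into a BlockingCollection and let a single long-running writer task drain it, so slow disk writes never stall the 50 ms receive loop and only the writer thread ever touches the FileStream dictionary.

        using System;
        using System.Collections.Concurrent;
        using System.Collections.Generic;
        using System.IO;
        using System.Threading.Tasks;

        public class QueuedDataSaver : IDisposable
        {
            private readonly BlockingCollection<byte[]> _queue = new BlockingCollection<byte[]>();
            private readonly Dictionary<string, FileStream> _streams = new Dictionary<string, FileStream>();
            private readonly Task _writer;

            public QueuedDataSaver()
            {
                // single consumer: no locking needed around the FileStream dictionary
                _writer = Task.Factory.StartNew(WriteLoop, TaskCreationOptions.LongRunning);
            }

            public void SaveData(byte[] data)
            {
                _queue.Add(data);           // returns immediately; writing happens on the writer task
            }

            private void WriteLoop()
            {
                foreach (var data in _queue.GetConsumingEnumerable())
                {
                    string fileName = GetFileNameFromArray(data);   // same parsing as the original code
                    FileStream fs;
                    if (!_streams.TryGetValue(fileName, out fs))
                    {
                        fs = new FileStream(fileName, FileMode.Append);
                        _streams.Add(fileName, fs);
                    }
                    fs.Write(data, 0, data.Length);
                }
            }

            public void Dispose()
            {
                _queue.CompleteAdding();    // let the writer drain whatever is left
                _writer.Wait();
                foreach (var fs in _streams.Values) fs.Close();
            }

            private static string GetFileNameFromArray(byte[] data)
            {
                // placeholder: reuse whatever parsing the original GetFileNameFromArray does
                throw new NotImplementedException();
            }
        }

    Whether a plain lock around the original dictionary would already be enough depends on how expensive the disk writes are relative to the 50 ms arrival rate; the queue simply decouples the two.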

    Read the article
