Search Results

Search found 2128 results on 86 pages for 'ole automation'.

  • MVVM and avoiding Monolithic God object

    - by bufferz
    I am in the completion stage of a large project that has several large components: image acquisition, image processing, data storage, factory I/O (automation project) and several others. Each of these components is reasonably independent, but for the project to run as a whole I need at least one instance of each component. Each component also has a ViewModel and View (WPF) for monitoring status and changing things. My question is: what is the safest, most efficient, and most maintainable method of instantiating all of these objects, subscribing one class to an Event in another, and having a common ViewModel and View for all of this? Would it be best if I have a class called God that has a private instance of all of these objects? I've done this in the past and regretted it. Or would it be better if God relied on Singleton instances of these objects to get the ball rolling? Alternatively, should Program.cs (or wherever Main(...) is) instantiate all of these components, and pass them to God as parameters and then let Him (snicker) and His ViewModel deal with the particulars of running this project? Any other suggestions I would love to hear. Thank you!
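    The third option the asker describes is usually called a composition root. A minimal sketch, assuming hypothetical component names (ImageAcquisition, ImageProcessing, DataStorage and FactoryIo stand in for the real subsystems):

        using System;

        // Hypothetical stand-ins for the real subsystems.
        public class ImageAcquisition
        {
            public event EventHandler FrameCaptured;
            public void Capture()
            {
                EventHandler handler = FrameCaptured;
                if (handler != null) handler(this, EventArgs.Empty);
            }
        }

        public class ImageProcessing
        {
            public void OnFrameCaptured(object sender, EventArgs e) { /* process frame */ }
        }

        public class DataStorage { }
        public class FactoryIo { }

        public class MainViewModel
        {
            // The root ViewModel receives its dependencies; it does not create them.
            public MainViewModel(ImageAcquisition acquisition, ImageProcessing processing,
                                 DataStorage storage, FactoryIo factoryIo) { }
        }

        public static class CompositionRoot
        {
            public static void Main()
            {
                // Construct each independent component exactly once.
                var acquisition = new ImageAcquisition();
                var processing = new ImageProcessing();
                var storage = new DataStorage();
                var factoryIo = new FactoryIo();

                // Wire cross-component events here, in one visible place.
                acquisition.FrameCaptured += processing.OnFrameCaptured;

                // The "God" shrinks to a coordinator that is handed its collaborators.
                var mainViewModel = new MainViewModel(acquisition, processing, storage, factoryIo);

                // In a WPF app, Main would now show the main window with
                // DataContext = mainViewModel and run the Application loop.
            }
        }

    The design point: because the coordinator receives fully constructed dependencies instead of creating them, it can be unit-tested with fakes, and no component is forced into being a Singleton.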

    Read the article

  • Continuous build infrastructure recommendations for primarily C++; Green Hills Integrity

    - by andersoj
    I need your recommendations for continuous build products for a large (1-2 MLOC) software development project. Characteristics:

    - ClearCase revision control
    - Approx 80% C++; 15% Java; 5% script or low-level
    - Compiles for Green Hills Integrity OS, but also some Windows and JVM chunks
    - Mostly an embedded system; also includes some UI pieces and some development support (simulation tools, config tools, etc...)
    - Each notional "version" of the deliverable includes deployment images for a number of boards, UI machines, etc... (~10 separate images; 5 distinct operating systems)
    - Need to maintain/track many simultaneous versions which, notably, are built for a variety of different board support packages
    - Build cycle time is a major issue on the project; need support for whatever features help address this (mostly need to manage a large farm of build machines, I guess...)
    - Operates in a secure environment (this is a gov't program) (Edited to add: This is a classified program; outsourcing the build infrastructure is a non-starter.)

    Interested in any best practices or peripheral guidance you might offer. The build automation issue is one of several overlapping best practices that appear to be missing on the program, but try to keep your answers focused on the build infrastructure piece and directly related observations. Cost is not an object. Scalability and ease of retrofitting onto an existing infrastructure are key. JA

    Read the article

  • Python vs all the major professional languages [closed]

    - by Matt
    I've been reading up a lot lately on comparisons between Python and a bunch of the more traditional professional languages - C, C++, Java, etc. - mainly trying to find out if it's as good as those would be for my own purposes. I can't get this thought out of my head that it isn't good for 'real' programming tasks beyond automation and macros. Anyway, the general idea I got from about two hundred forum threads and blog posts is that for general, non-professional-level programs, scripts, and apps, and as long as it's a single programmer (you) writing it, a given program can be written quicker and more efficiently with Python than it could be with pretty much any other language. But once it's big enough to require multiple programmers, or more complex than a regular person (read: non-professional) would have any business making, it pretty much becomes instantly inferior to a million other languages. Is this idea more or less accurate? (I'm learning Python as my first language and want to be able to make any small app that I want, but I plan on learning C eventually too, because I want to get into driver writing eventually. So I've been trying to research each one's strengths and weaknesses as much as I can.) Anyway, thanks for any input.

    Read the article

  • How to check whether a file save is complete using Python?

    - by indrajithk
    I am trying to automate a downloading process, and I want to know whether a particular file's save has completed or not. The scenario is like this:

    1. Open a site address using either Chrome or Firefox (any browser).
    2. Save the page to disk using 'Ctrl + S' (I work on Windows).

    Now if the page is very big, it takes a few seconds to save. I want to parse the HTML once the save is complete. Since I don't have control over the browser's save functionality, I don't know whether the save has completed or not. One idea I had is to compute the md5sum of the file in a while loop, check it against the previously calculated one, and continue the loop until the two sums match. This doesn't work, I guess, as it seems the browser first saves to a tmp file and then copies the content to the specified file (or just renames the file). Any ideas? I use Python for the automation, hence any idea which can be implemented using Python is welcome. Thanks, Indrajith

    Read the article

  • Why is CoRegisterClassObject creating two extra threads?

    - by Stijn Sanders
    I'm trying to fix a problem that only recently started happening on a number of machines on a VPN. They each run a client application I wrote that exposes a COM automation object. For some strange reason I haven't been able to discover yet, one thread in the application takes up all of the available CPU time, slowing other operations on the machine. In observing the application's strange behaviour, I've noticed it's the third thread started, and if I debug on my machine I notice the first call to CoRegisterClassObject creates two extra threads. If the second of these two threads is the one that gets into an infinite loop, I'm not at all sure how to fix this. Where could I check next for what's wrong? Could it have been triggered by one of the recent patches Microsoft rolled out this last 'patch Tuesday'? I had a go with Process Explorer to extract a stack trace of the thread:

    ntoskrnl.exe!ExReleaseResourceLite+0x1a3
    ntoskrnl.exe!PsGetContextThread+0x329
    WLDAP32.dll!Ordinal325+0x1231
    WLDAP32.dll!Ordinal325+0x129e
    WLDAP32.dll!Ordinal325+0x1178
    ntdll.dll!LdrInitializeThunk+0x24
    ntdll.dll!LdrShutdownThread+0xe9
    kernel32.dll!ExitThread+0x3e
    kernel32.dll!FreeLibraryAndExitThread+0x1e
    ole32.dll!StringFromGUID2+0x65d
    kernel32.dll!GetModuleFileNameA+0x1ba

    Read the article

  • Chicken and egg problem (restore database) when trying to write unit tests against SQL Server 2008

    - by Hamish Grubijan
    Ok, they are not unit tests but end-to-end tests. The setup is somewhat involved. The tests will use C# over an ODBC connection. Every test will try to clean up after itself, but every 20 tests or so (once per C# class) we would need to do a full database restore. I do not think I can do it over an ODBC connection, according to this document: http://www.sql-server-performance.com/articles/dba/Obtain_Exclusive_Access_to_Restore_SQL_Server_p1.aspx

    Msg 6104, Level 16, State 1, Line 1
    Cannot use KILL to kill your own process.

    However, I would like to, so that 199 tests do not go amok because of a bad clean-up. Is there another way? Perhaps I can open a different "connection" such as COM automation or something of that sort, and then kill all database connections from there? If so, how can I do that? Also, will the clients be able to re-connect automatically after a restore, or would I have to dismantle everything once every 20 tests or so? If you find this question confusing, please let me know what your questions are. Thanks!
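    One common way around the self-KILL problem is to skip KILL entirely: from a separate connection to master, forcing the target database into SINGLE_USER disconnects every other session before the restore. A hedged sketch (server, database, and file names are placeholders):

        using System.Data.SqlClient;

        class RestoreHelper
        {
            static void RestoreDatabase(string server, string database, string backupFile)
            {
                // Connect to master, NOT to the database being restored.
                string connectionString = string.Format(
                    "Data Source={0};Initial Catalog=master;Integrated Security=True;", server);

                using (SqlConnection connection = new SqlConnection(connectionString))
                {
                    connection.Open();

                    // SINGLE_USER WITH ROLLBACK IMMEDIATE kicks every other
                    // session out, so the RESTORE gets its exclusive access.
                    string sql = string.Format(
                        "ALTER DATABASE [{0}] SET SINGLE_USER WITH ROLLBACK IMMEDIATE; " +
                        "RESTORE DATABASE [{0}] FROM DISK = N'{1}' WITH REPLACE; " +
                        "ALTER DATABASE [{0}] SET MULTI_USER;",
                        database, backupFile);

                    using (SqlCommand command = new SqlCommand(sql, connection))
                    {
                        command.CommandTimeout = 0; // restores can take a while
                        command.ExecuteNonQuery();
                    }
                }
            }
        }

    As for reconnecting afterwards: pooled client connections will be dead after the restore, so the test harness would need to call SqlConnection.ClearAllPools() (or the ODBC equivalent) before the next batch of tests opens connections.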

    Read the article

  • Copy Word format into Outlook message

    - by Jaster
    Hi, I have an Outlook automation task. I would like to use a Word document as a template for the message content. Let's say I have some formatted text containing tables, colors, sizes, etc. Now I'd like to copy/paste this content into an Outlook message object. I'm used to the interop stuff; I just have no idea how to copy/paste this correctly. Here is some sample code (no cleanup):

        String path = @"file.docx";
        String savePath = @"file.msg";

        Word.Application wordApp = new Word.Application();
        Word.Document currentDoc = wordApp.Documents.Open(path);
        Word.Range range = currentDoc.Range(0, currentDoc.Characters.Count);
        String wordText = range.Text;

        Outlook.Application oApp = new Outlook.Application();
        Outlook.NameSpace ns = oApp.GetNamespace("MAPI");
        ns.Logon("MailBox");
        Outlook._MailItem oMsg = (Outlook._MailItem)oApp.CreateItem(Outlook.OlItemType.olMailItem);
        oMsg.To = "[email protected]";
        oMsg.Body = wordText;
        oMsg.SaveAs(savePath);

    I'm using Outlook/Word 2007; however, the Word files may still be in 2000/2003 format (.doc). Visual Studio 2010 with .NET 4.0 (should be obvious from the sample code). Any suggestions?
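    Assigning range.Text throws away all formatting. A hedged alternative sketch (file paths are placeholders): save the document as filtered HTML and assign it to the mail item's HTMLBody, which preserves tables, colors and sizes; inline images would still need to be attached separately and referenced by cid:.

        using System.IO;
        using Word = Microsoft.Office.Interop.Word;
        using Outlook = Microsoft.Office.Interop.Outlook;

        class WordToOutlook
        {
            static void Main()
            {
                string docPath = @"C:\Temp\file.docx";
                string htmlPath = @"C:\Temp\file.htm";

                var wordApp = new Word.Application();
                Word.Document doc = wordApp.Documents.Open(docPath);

                // Filtered HTML keeps the visible formatting but drops most
                // Office-only markup.
                doc.SaveAs(htmlPath, Word.WdSaveFormat.wdFormatFilteredHTML);
                doc.Close(false);
                wordApp.Quit();

                var outlookApp = new Outlook.Application();
                var msg = (Outlook.MailItem)outlookApp.CreateItem(Outlook.OlItemType.olMailItem);
                msg.To = "someone@example.com";
                msg.HTMLBody = File.ReadAllText(htmlPath); // formatted body, not plain text
                msg.SaveAs(@"C:\Temp\file.msg");
            }
        }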

    Read the article

  • Mono Text Based Web Browser

    - by powerbox
    Hi guys, is there any public text-based web browser implementation for C#, or a Mono-based API, that I can use to fill in web forms automatically? I'll be using it to automate some web tasks that do not require any image authentication. I'm currently using the WebBrowser control available in the .NET Framework: I wait for the WebBrowserDocumentCompletedEventHandler event to fire after a page is successfully loaded, then invoke actions like Submit or simulate a mouse click on some links. It actually does the job, but I can't process bulk transactions since I need to wait for the whole page to be loaded together with the images and other stuff. It is easy to use HttpWebRequest to fill in some forms, provide some data and then submit. But on some occasions I only need to simulate a mouse click on a certain link, which I don't know how to do with HttpWebRequest. By the way, using HttpWebRequest will still download all the images of a web page, which seems pointless since I only need to provide correct data back to the server. I hope someone can point me to the correct way of doing this kind of automation. Thanks in advance!
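    For a plain link, "clicking" headlessly is just resolving the anchor's href and issuing a GET for it - only the HTML is fetched, not the page's images. A small sketch, assuming the third-party HtmlAgilityPack library for parsing (the URL and the XPath are placeholders):

        using System;
        using System.Net;
        using HtmlAgilityPack;

        class HeadlessClick
        {
            static void Main()
            {
                var baseUri = new Uri("http://example.com/page");

                using (var client = new WebClient())
                {
                    string html = client.DownloadString(baseUri);

                    var document = new HtmlDocument();
                    document.LoadHtml(html);

                    // The anchor we would have "clicked" in the browser control.
                    HtmlNode link = document.DocumentNode.SelectSingleNode("//a[@id='next']");
                    if (link != null)
                    {
                        string href = link.GetAttributeValue("href", string.Empty);
                        Uri target = new Uri(baseUri, href); // resolve relative links
                        string nextPage = client.DownloadString(target);
                        Console.WriteLine("Fetched {0} characters", nextPage.Length);
                    }
                }
            }
        }

    Links whose "click" runs JavaScript rather than navigating to an href would still need a real browser engine.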

    Read the article

  • Right Language for the Job

    - by Manoj
    Using the right language for the job is key - this is a comment I read on SO, and I also believe it's the right thing to do. Because of this we ended up using different languages for different parts of the project - Perl, VBA (Excel macros), C#, etc. We have three to four languages currently in use inside the project. Using the right language for the job has made it immensely easier to automate a job, but of late people are complaining that any new person who has to take over the project will have to learn so many different languages to get started. It is also difficult to find such a person. Please note that at most one or two people work on the project at any given point in time. I would like to know if the method we are following is right, or whether we should converge on a single language and try to use it across all the jobs, even though another language might be better suited for some of them. Your experience related to this would also help.

    Languages used and their purpose:

    - Perl: processing large text files (log files)
    - C# with Silverlight: web-based reporting
    - LabVIEW: automation
    - Excel macros: processing data in Excel sheets, generating graphs and exporting to PowerPoint

    Read the article

  • C# and Excel best practices

    - by rlp
    I am doing a lot of MS Excel interop in C# (Visual Studio 2012) using Microsoft.Office.Interop.Excel. It requires a lot of tiresome manual code to include Excel formulas, format text and numbers, and make graphs. I would like it very much if any of you have some input on how to do the task better. I have been looking at Visual Studio Tools for Office, but I am uncertain about its functionality. I get that it is required to make Excel add-ins, but does it help with Excel automation? I have desperately been trying to find information on working with Excel in Visual Studio 2012 using C#. I did find some good but short tutorials. However, I would really like a book on the subject, to learn the field more in depth regarding functionality and best practices. Searching Amazon with my limited knowledge only gives me books on VSTO using older versions of Visual Studio. I would not like to use VBA. My applications use Excel mainly for visualizing data compiled from different sources. I also do data processing where Excel is not required. Furthermore, I can write C# but not VB.
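    One best practice that removes much of the tiresome cell-by-cell interop code: batch reads and writes through a single Range assignment, since every property access crosses the COM boundary. A minimal sketch (the file path is a placeholder):

        using Excel = Microsoft.Office.Interop.Excel;

        class BulkWrite
        {
            static void Main()
            {
                var excel = new Excel.Application();
                excel.DisplayAlerts = false; // no overwrite prompt on SaveAs
                Excel.Workbook workbook = excel.Workbooks.Add();
                var sheet = (Excel.Worksheet)workbook.Worksheets[1];

                const int rows = 1000, cols = 5;
                var data = new object[rows, cols];
                for (int r = 0; r < rows; r++)
                    for (int c = 0; c < cols; c++)
                        data[r, c] = r * c;

                // One COM call writes the whole block; looping over Cells
                // would make rows * cols separate COM calls.
                Excel.Range range = sheet.Range["A1"].Resize[rows, cols];
                range.Value2 = data;

                workbook.SaveAs(@"C:\Temp\bulk.xlsx");
                excel.Quit();
            }
        }

    For generating files without Excel installed at all, open-source libraries such as EPPlus or ClosedXML are often less painful than interop, though they manipulate .xlsx files rather than driving a running Excel instance.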

    Read the article

  • Which package should I choose if I want to install virtualenv for Python?

    - by hugemeow
    pip search returns so many matches that I am confused about which package I should choose to install. Should I only install virtualenv, or should I also install virtualenv-commands, etc.? But I really don't know exactly what virtualenv-commands is...

    mirror0@lab:~$ pip search virtualenv
    virtualenvwrapper - Enhancements to virtualenv
    virtualenv - Virtual Python Environment builder
    veh - virtualenv for hg
    pyutilib.virtualenv - PyUtilib utility for building custom virtualenv bootstrap scripts.
    envbuilder - A package for automatic generation of virtualenvs
    virtstrap-core - A bootstrapping mechanism for virtualenv+pip and shell scripts
    tox - virtualenv-based automation of test activities
    virtualenvwrapper-win - Port of Doug Hellmann's virtualenvwrapper to Windows batch scripts
    everyapp.bootstrap - Enhanced virtualenv bootstrap script creation.
    orb - pip/virtualenv shell script wrapper
    monupco-virtualenv-python - monupco.com registration agent for stand-alone Python virtualenv applications
    virtualenvwrapper-powershell - Enhancements to virtualenv (for Windows). A clone of Doug Hellmann's virtualenvwrapper
    RVirtualEnv - relocatable python virtual environment
    virtualenv-clone - script to clone virtualenvs.
    virtualenvcontext - switch virtualenvs with a python context manager
    lessrb - Wrapper for ruby less so that it's in a virtualenv.
    carton - make self-extracting virtualenvs
    virtualenv5 - Virtual Python 3 Environment builder
    clever-alexis - Clever redhead girl that builds and packs Python project with Virtualenv into rpm, deb, etc.
    kforgeinstall - Virtualenv bootstrap script for KForge
    pypyenv - Install PyPy in virtualenv
    virtualenv-distribute - Virtual Python Environment builder
    virtualenvwrapper.project - virtualenvwrapper plugin to manage a project work directory
    virtualenv-commands - Additional commands for virtualenv.
    rjm.recipe.venv - zc.buildout recipe to turn the entire buildout tree into a virtualenv
    virtualenvwrapper.bitbucket - virtualenvwrapper plugin to manage a project work directory based on a BitBucket repository
    tg_bootstrap - Bootstrap a TurboGears app in a VirtualEnv
    django-env - Automaticly manages virtualenv for django project
    virtual-node - Install node.js into your virtualenv
    django-environment - A plugin for virtualenvwrapper that makes setting up and creating new Django environments easier.
    vip - vip is a simple library that makes your python aware of existing virtualenv underneath.
    virtualenvwrapper.django - virtualenvwrapper plugin to create a Django project work directory
    terrarium - Package and ship relocatable python virtualenvs
    venv_dependencies - Easy to install any dependencies in a virtualenviroment(without making symlinks by hand and etc...)
    virtualenv-sh - Convenient shell interface to virtualenv
    virtualenvwrapper.github - Plugin for virtualenvwrapper to automatically create projects based on github repositories.
    virtualenvwrapper.configvar - Plugin for virtualenvwrapper to automatically export config vars found in your project level .env file.
    virtualenvwrapper-emacs-desktop - virtualenvwrapper plugin to control emacs desktop mode
    bootstrapper - Bootstrap Python projects with virtualenv and pip.
    virtualenv3 - Obsolete fork of virtualenv
    isotoma.depends.zope2_13_8 - Running zope in a virtualenv
    virtual-less - Install lessc into your virtualenv
    virtualenvwrapper.tmpenv - Temporary virtualenvs are automatically deleted when deactivated
    isotoma.plone.heroku - Tooling for running Plone on heroku in a virtualenv
    gae-virtualenv - Using virtualenv with zipimport on Google App Engine
    pinvenv - VirtualEnv plugins for pin
    isotoma.depends.plone4_1 - Running plone in a virtualenv
    virtualenv-tools - A set of tools for virtualenv
    virtualenvwrapper.npm - Plugin for virtualenvwrapper to automatically encapsulate inside the virtual environment any npm installed globaly when the venv is activated
    d51.django.virtualenv.test_runner - Simple package for running isolated Django tests from within virtualenv
    difio-virtualenv-python - Difio registration agent for stand-alone Python virtualenv applications
    VirtualEnvManager - A package to manage various virtual environments.
    virtualenvwrapper.gem - Plugin for virtualenvwrapper to automatically encapsulate inside the virtual environment any gems installed when the venv is activated

    Read the article

  • SQL Server High Availability - Mirroring with MSCS?

    - by David
    I'm looking at options for high availability for my SQL Server-powered application. The requirements are:

    - HA protection from storage failure.
    - Data accessibility when one of the DB servers is undergoing software updates (e.g. planned outage for Windows Update / SQL Server service packs).
    - Must not involve much in the way of hardware procurement.

    The application is an ASP.NET web application. The web application's users have their own database instances. I've seen two main options: SQL Server failover clustering, and SQL Server mirroring. I understand that SQL Server Failover Clustering requires the purchase of a shared disk array and doesn't offer any protection if the shared storage goes down (so the documentation recommends setting up mirroring between two clusters). Database Mirroring seems the cheaper option (as it only requires two database servers and a simple witness box) - but I've heard it doesn't work well when you have a large number of databases. The application I'm developing involves giving each client their own database - there could be hundreds of databases. Setting up the mirroring is no problem thanks to the automation systems we have in place. My final point concerns how failover works with respect to client connections: SQL Server Failover Clustering uses MSCS, which means that the cluster is invisible to clients - a connection attempt might fail during the failover, but a simple reconnect will have it working again. However mirroring, as far as I know, requires that the client be aware of the mirrored partners: if the client cannot connect to the primary server then it tries the secondary server. I'm wondering how this works with respect to connection pooling in ASP.NET applications - does the client-side failover mean that there's a potential 2-second (assuming a 2000 ms TCP timeout policy) pause when the connection pool tries the primary server on every connection attempt? I read somewhere that mirroring can be used on top of MSCS, which means that the client does not need to be aware of mirroring (so there wouldn't be any potential delays during connection, and no changes would need to be made to the client, not even the connection string) - however I'm finding it hard to get documentation or white papers on this approach. But if true, then the best method would be mirroring (for HA) with MSCS (for client ignorance and connection performance). ...but how does this scale to a server instance that might contain hundreds of mirrored databases?
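    On the client-awareness point: ADO.NET's SqlClient already understands mirrored pairs via the Failover Partner connection-string keyword, so "awareness" is one extra keyword rather than new client code. A minimal sketch (server and database names are placeholders):

        using System.Data.SqlClient;

        class MirroredConnectionDemo
        {
            static void Main()
            {
                string connectionString =
                    "Data Source=SQLPRIMARY;" +     // principal (placeholder name)
                    "Failover Partner=SQLMIRROR;" + // mirror (placeholder name)
                    "Initial Catalog=ClientDb001;" +
                    "Integrated Security=True;";

                using (var connection = new SqlConnection(connectionString))
                {
                    // After a failover the provider retries against the
                    // partner; the application code does not change.
                    connection.Open();
                }
            }
        }

    The provider also caches the partner name it learns from the server, so a pool created before a role swap can find the new principal on reconnect.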

    Read the article

  • Providing DNS redirection to honeypot server for known bad domains

    - by syn-
    Currently running BIND on RHEL 5.4 and am looking for a more efficient manner of providing DNS redirection to a honeypot server for a large (30,000+) list of forbidden domains. Our current solution for this requirement is to include a file containing a zone master declaration for each blocked domain in named.conf. Subsequently, each of these zone declarations points to the same zone file, which resolves all hosts in that domain to our honeypot servers. ...basically this allows us to capture any "phone home" attempts by malware that may infiltrate the internal systems. The problem with this configuration is the large amount of time taken to load all 30,000+ domains, as well as management of the domain list configuration file itself... if any errors creep into this file, the BIND server will fail to start, thereby making automation of the process a little frightening. So I'm looking for something more efficient and potentially less error prone.

    named.conf entry:

    include "blackholes.conf";

    blackholes.conf entry example:

    zone "bad-domain.com" IN {
        type master;
        file "/var/named/blackhole.zone";
        allow-query { any; };
        notify no;
    };

    blackhole.zone entries:

    $INCLUDE std.soa
    @    NS    ns1.ourdomain.com.
    @    NS    ns2.ourdomain.com.
    @    NS    ns3.ourdomain.com.
         IN    A     192.168.0.99
    *    IN    A     192.168.0.99

    Read the article

  • Could I centralize batch files more efficiently?

    - by PeanutsMonkey
    I am new to the world of batch scripting, so please forgive what may appear to be basic questions. I am learning as I get assigned different jobs, and I am a huge proponent of automation where possible. I have several batch files that perform several tasks. Each of these files had its paths hard-coded, e.g. c:\temp, d:\data, etc., in the batch file. Initially I moved these to a text file I could call from a batch file, e.g.

    for /f "tokens=1,2 delims==" %%R in (config.txt) do (
        if %%R==bdata set bdata=%%S
        if %%R==cdata set cdata=%%S
    )

    The config.txt file contains these values:

    bdata=c:\temp
    cdata=d:\data

    I realized that each time I would need to create a new variable, I would need to update the config.txt file as well as the config.bat file. I decided I would move all the values to just the config.bat file as follows:

    set bdata=c:\temp
    set cdata=d:\data

    I then updated each of the existing batch files to call the variables rather than the hard-coded paths. I also added the following lines of code to each batch file except config.bat. The only additional line added to the config.bat file is @echo off.

    @echo off
    setlocal enableextensions enabledelayedexpansion
    call config.bat

    I then have another batch file that centralizes calling all the batch files in sequence. The name of this batch file is start.bat. The reason I am using start /wait is because there have been instances where delete.bat runs before compress.bat has had an opportunity to finish.

    start /wait compress.bat
    start /wait validate.bat
    start /wait delete.bat

    Questions:

    1. Is this the best way to centralize values and, if not, what is a better way?
    2. Do I need to specify setlocal enableextensions enabledelayedexpansion in all the existing batch files?
    3. Do all the batch files have to have @echo off, or is it sufficient for just the config.bat file?
    4. Is start /wait the best way to call multiple files? Can I pass values from one batch file to another using the said command?
    5. All the batch files have different functions, e.g. move, delete, etc., however they all use %%a or %%b. Is this okay? For example, validate.bat has the code

       for %%a in (%bdata%\*.*) do if "%%~xa" == "" move /Y "%bdata%\%%~xa" "%bdata%\%done%"

       and delete.bat has the code

       for %%a in (%bdata%\*.*) do if "%%~xa" == ".txt" del "%%a"

    Read the article

  • Automatic layout of manual network mapping

    - by Paul
    So I have a small business network mainly consisting of two routed layer-2 domains with a total of ca. 100 devices spread over ca. 2000 m² of production and office spaces. Typical problems to solve using the graph would be:

    - Over what (cable) path is a PC connected to the server?
    - Where to expect devices connected to a switch port?

    I want to generate a graph of the physical network topology: nodes are endpoint devices, switch ports, wall outlets, patch panel ports, etc.; edges are cable connections. Ideally, edges (or segments) that pass through the same bundle could be grouped. Also I would like to augment the graph data with automatically gathered data (monitoring state, MAC address, switch port <- MAC entries to build up parts of the map). At the moment I use graphviz for this inside a Confluence wiki, like this:

    layout = "neato"
    overlap = scale
    subgraph {
        rankdir = "TB"
        subgraph cluster_r1pf1 {
            r1pf1 [label="{ Rack 1 PF 1 | { <p1>P1 | <p2>P2 | <p3>P3} }", shape=record]
        }
        subgraph cluster_switch1 {
            switch1 [label="{ Rack 1 Switch 1 | { <p1>P1 | <p2>P2 | <p3>P3} }", shape=record]
        }
        r1pf1:p1 -> switch1:p1

    (obviously there are dozens of entries omitted here)

    Problem is: I have a hard time influencing graphviz to generate a bearable layout. Edges overlap so badly that you can't read the diagram anymore. The question is: what other tools (be it interactive like Visio or OmniGraffle, or I/O-oriented like graphviz) exist that would allow an easily versionable (as in: operates on a text file) documentation that is both machine and human readable and editable? Why not OmniGraffle or Visio? Well, we don't have Macs, and Visio is not available at the moment. To buy it I would need good arguments; automation would be one of them. But last time I looked, versioning Visio files or even thinking about automatic handling was a nightmare. Related: "Network Mapping Tools" basically asks the same with a focus on generating the complete graph automatically (but without the need to document cabling connections); "Recommendations for automatic computer inventory" brings up links to "all-in-one" solutions.

    Read the article

  • Start a software company offshore

    - by Mascarpone
    Hello everybody, I own a small, very young, EU-based (Italy) company, and among other things, we sell IT solutions. I have a degree in applied mathematics, and I mainly deal with user interfaces, embedded systems, automation and web applications. You could say that I'm an enlightened entrepreneur because I work only with open source software (OS, IDE, I release under BSD, ... everything is free as in freedom), I give high importance to post-sales services and customer satisfaction, plus I think I'm the best boss someone could desire (LOL), as I have Google in mind when I think about IT workers' rights. But the most beautiful thing is that, although everybody advised us not to use open source, we are quite profitable!!! (for the sixth trimester in a row). Now I offshore most of the work to an Indian company. I divide the work into modules and I outsource the longer or more trivial ones. I spend a lot of time defining the specifications and I leave the hard work to them. Using productivity bonuses, a lot of prototypes and third-party audits, I think that my software has reached a very good quality level. I would like to start my own software development company, in order to improve control over the process and cut costs. Obviously I can't afford the cost of labor in the EU, so I thought about opening a company in Asia. What I need is:

    1) Cheap labor - I can afford to give productivity bonuses and higher-than-average wages and stay profitable just because labor is cheap.
    2) Many talents - I need a good level of tertiary education, and a good number of graduates, so I can hire junior developers and train and teach them according to my needs and philosophies (e.g. an open source mindset).
    3) Good infrastructure - buildings, transport, internet, ... everything that a company might need.

    I thought about three possible candidates:

    1) India - I already work with Indian people; I know that they are reliable and speak good English. Big cities are too expensive, but maybe a small city like Lucknow (http://en.wikipedia.org/wiki/Lucknow) could suit my needs.
    2) China - They say it's cheaper than India, but every time I worked with a Chinese company the language was a big barrier. They work hard and are somewhat skilled and cheap, but maybe it's a risky path. Plus I feel a little uncomfortable with their lack of human rights.
    3) Philippines - Same as China: cheaper than India, but maybe less educated.

    Where do you think is the best place to start a software company? Any reading or books to recommend? Thank you very much.

    Read the article

  • How to set up a file server in a restricted corporate environment

    - by Emilio M Bumachar
    I work in a big corporation, and the disk space my team gets on the corporate file server is so low that I am considering turning my work PC into a file server. I ask this community for links to tutorials, software suggestions, and advice in general about how to set it up. My machine is an Intel Core2Duo E7500 @ 3GHz, 3 GB of RAM, running Windows XP Service Pack 3. Upgrading, formatting or installing another OS is out of the question. But I do have Administrator privileges on the PC, and I can install programs (at least for now). A lot of security software I don't even know about is and must remain installed. But I only need communication within the corporate network, which is not restricted. People have usernames (logins) on the corporate network, and I need to use them to restrict access. Simply put, I have a list of logins of team members, and only people on the list should access the files. I have about 150 GB of free disk space, and I'm thinking of allocating 100 GB to the team's shared files. I plan monthly backups onto co-workers' machines of the same configuration. But automation of backups is a nice but unnecessary feature: it's totally acceptable for me to manually copy the contents to a different machine once a month. Uptime is important, as everyone would use these files in their daily work. I have experience as a Python and C programmer, but no experience whatsoever as a sysadmin, and almost nothing of my programming experience is network programming. I'm a complete beginner at this. Thanks in advance for any help.

    EDIT: I honestly appreciate all the warnings, I really do, but what I plan to make available is mostly stuff that now sits solely on DVDs, just for space reasons. It's 'daily work' to read them, but 'daily work write' files will remain on the corporate server. As for the importance of uptime, I think I overstated it: a few outages are OK; it's already an improvement over getting the DVDs. As for policy, my manager is kind of on my side, and I will confirm that before making my move. As for getting more space through the proper channels, well, that was Plan A, and it's still on the table... but I don't have much hope. I'm not as "core business" as I'd like.

    Read the article

  • Creating packages in code - Package Configurations

    Continuing my theme of building various types of packages in code, this example shows how to build a package with package configurations. Incidentally it shows you how to add a variable, and a connection too. It covers the five most common configurations:

    - Configuration File
    - Indirect Configuration File
    - SQL Server
    - Indirect SQL Server
    - Environment Variable

    For a general overview try the SQL Server Books Online Package Configurations topic. The sample uses a simple helper function ApplyConfig to create or update a configuration, although in the example we will only ever create. The most useful knowledge is the configuration string (Configuration.ConfigurationString) that you need to set:

    Configuration File - The full path and file name of an XML configuration file. The file can contain one or more configurations and includes the target path and new value to set.

    Indirect Configuration File - An environment variable, the value of which contains the full path and file name of an XML configuration file as per the Configuration File type described above.

    SQL Server - A three-part configuration string, with each part quote-delimited and separated by a semi-colon. The first part is the connection manager name; the connection tells you which server and database to look for the configuration table. The second part is the name of the configuration table; the table is of a standard format (use the Package Configuration Wizard to help create an example, or see the sample script files below) and contains one or more rows or configuration items, each with a target path and new value. The third and final part is the optional filter name; a configuration table can contain multiple configurations, and the filter is a literal value that can be used to group items together and act as a filter clause when configurations are being read. If you do not need a filter, just leave the value empty.

    Indirect SQL Server - An environment variable, the value of which is the three-part configuration string as per the SQL Server type described above.

    Environment Variable - An environment variable, the value of which is the value to set in the package. This is slightly different to the other examples, as the configuration definition in the package also includes the target information. In our ApplyConfig function this is the only example that actually supplies a target value for the Configuration.PackagePath property. The path is an XPath-style path for the target property, \Package.Variables[User::Variable].Properties[Value], with the object being our variable called Variable, and the property to set being the Value property of that variable object.

    The configurations as seen when opening the generated package in BIDS:

    The sample code creates the package, adds a variable and connection manager, enables configurations, and then adds our example configurations. The package is then saved to disk, useful for checking the package and testing, before finally executing, just to prove it is valid. There are some external resources used here, namely some environment variables and a table; see below for more details.
    namespace Konesans.Dts.Samples
    {
        using System;
        using Microsoft.SqlServer.Dts.Runtime;

        public class PackageConfigurations
        {
            public void CreatePackage()
            {
                // Create a new package
                Package package = new Package();
                package.Name = "ConfigurationSample";

                // Add a variable, the target for our configurations
                package.Variables.Add("Variable", false, "User", 0);

                // Add a connection, for SQL configurations
                // Add the SQL OLE-DB connection
                ConnectionManager connectionManagerOleDb = package.Connections.Add("OLEDB");
                connectionManagerOleDb.Name = "SQLConnection";
                connectionManagerOleDb.ConnectionString =
                    "Provider=SQLOLEDB.1;Data Source=(local);Initial Catalog=master;Integrated Security=SSPI;";

                // Add our example configurations, first must enable package setting
                package.EnableConfigurations = true;

                // Direct configuration file, see sample file
                this.ApplyConfig(package, "Configuration File", DTSConfigurationType.ConfigFile,
                    "C:\\Temp\\XmlConfig.dtsConfig", string.Empty);

                // Indirect configuration file, the environment variable XmlConfigFileEnvironmentVariable
                // contains the path to the configuration file, e.g. C:\Temp\XmlConfig.dtsConfig
                this.ApplyConfig(package, "Indirect Configuration File", DTSConfigurationType.IConfigFile,
                    "XmlConfigFileEnvironmentVariable", string.Empty);

                // Direct SQL Server configuration, uses the SQLConnection package connection to read
                // configurations from the [dbo].[SSIS Configurations] table, with a filter of "SampleFilter"
                this.ApplyConfig(package, "SQL Server", DTSConfigurationType.SqlServer,
                    "\"SQLConnection\";\"[dbo].[SSIS Configurations]\";\"SampleFilter\";", string.Empty);

                // Indirect SQL Server configuration, the environment variable "SQLServerEnvironmentVariable"
                // contains the configuration string e.g. "SQLConnection";"[dbo].[SSIS Configurations]";"SampleFilter";
                this.ApplyConfig(package, "Indirect SQL Server", DTSConfigurationType.ISqlServer,
                    "SQLServerEnvironmentVariable", string.Empty);

                // Direct environment variable, the value of the EnvironmentVariable environment variable is
                // applied to the target property, the value of the "User::Variable" package variable
                this.ApplyConfig(package, "EnvironmentVariable", DTSConfigurationType.EnvVariable,
                    "EnvironmentVariable", "\\Package.Variables[User::Variable].Properties[Value]");

    #if DEBUG
                // Save package to disk, DEBUG only
                new Application().SaveToXml(String.Format(@"C:\Temp\{0}.dtsx", package.Name), package, null);
                Console.WriteLine(@"C:\Temp\{0}.dtsx", package.Name);
    #endif

                // Execute package
                package.Execute();

                // Basic check for warnings
                foreach (DtsWarning warning in package.Warnings)
                {
                    Console.WriteLine("WarningCode : {0}", warning.WarningCode);
                    Console.WriteLine("  SubComponent : {0}", warning.SubComponent);
                    Console.WriteLine("  Description : {0}", warning.Description);
                    Console.WriteLine();
                }

                // Basic check for errors
                foreach (DtsError error in package.Errors)
                {
                    Console.WriteLine("ErrorCode : {0}", error.ErrorCode);
                    Console.WriteLine("  SubComponent : {0}", error.SubComponent);
                    Console.WriteLine("  Description : {0}", error.Description);
                    Console.WriteLine();
                }

                package.Dispose();
            }

            /// <summary>
            /// Add or update a package configuration.
            /// </summary>
            /// <param name="package">The package.</param>
            /// <param name="name">The configuration name.</param>
            /// <param name="type">The type of configuration</param>
            /// <param name="setting">The configuration setting.</param>
            /// <param name="target">The target of the configuration, leave blank if not required.</param>
            internal void ApplyConfig(Package package, string name, DTSConfigurationType type, string setting, string target)
            {
                Configurations configurations = package.Configurations;
                Configuration configuration;

                if (configurations.Contains(name))
                {
                    configuration = configurations[name];
                }
                else
                {
                    configuration = configurations.Add();
                }

                configuration.Name = name;
                configuration.ConfigurationType = type;
                configuration.ConfigurationString = setting;
                configuration.PackagePath = target;
            }
        }
    }

    The following table lists the environment variables required for the full example to work, along with sample values:

    EnvironmentVariable - 1
    SQLServerEnvironmentVariable - "SQLConnection";"[dbo].[SSIS Configurations]";"SampleFilter";
    XmlConfigFileEnvironmentVariable - C:\Temp\XmlConfig.dtsConfig

    Sample code, package and configuration file: ConfigurationApplication.cs, ConfigurationSample.dtsx, XmlConfig.dtsConfig
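    To run the sample end to end, a tiny driver like the following (hypothetical, not part of the original post) is all that is needed, once the environment variables and configuration table listed above exist:

        namespace Konesans.Dts.Samples
        {
            public static class Program
            {
                public static void Main()
                {
                    new PackageConfigurations().CreatePackage();
                }
            }
        }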

    Read the article

  • Top 10 Reasons SQL Developer is Perfect for Oracle Beginners

    - by thatjeffsmith
    Learning new technologies can be daunting. If you've never used a Mac before, you'll probably be a bit baffled at first. But you're probably at least coming from a desktop computing background (Windows), so you have a common frame of reference. But what if you're just now learning to use a relational database? Yes, you've played with Access a bit, but now your employer or college instructor has charged you with becoming proficient with Oracle database. Here are 10 reasons why I think Oracle SQL Developer is the perfect vehicle to help get you started.

    1. It's free

    No need to break into one of these… No start-up costs, no need to wrangle budget dollars from your company. Students don't have any money after books and lab fees anyway. And most employees don't like having to ask for 'special' software anyway. So avoid all of that and make sure the free stuff doesn't suit your needs first. Upgrades are available on a regular basis, also at no cost, and support is freely available via our public forums.

    2. It will run pretty much anywhere

    Windows – check. OSX (Apple) – check. Unix – check. Linux – check. No need to start up a Windows VM to run your Windows-only software on your lab machine.

    3. Anyone can install it

    There's no installer, no registry to be updated, no admin privs to be obtained. If you can download and extract files to your machine or USB storage device, you can run it. You can be up and running with SQL Developer in under 5 minutes. Here's a video tutorial to see how to get started.

    4. It's ubiquitous

    I admit it, I learned a new word yesterday and I wanted an excuse to use it. SQL Developer's everywhere. It's had over 2,500,000 downloads in the past year, and is one of the most downloaded items from OTN. This means if you need help, there's someone sitting nearby who can assist, and since they're in the same tool as you, they'll be speaking the same language.

    5. Simple User Interface

    Up-up-down-down-left-right-left-right-A-B-A-B-START will get you 30 lives, but you already knew that, right? You connect, you see your objects, you click on your objects. Or you can use the worksheet to write your queries and programs in. There's only one toolbar, and just a few buttons. If you're like me, video games became less fun when each button had 6 action items mapped to it. I just want the good ole 'A', 'B', 'SELECT', and 'START' controls. If you're new to Oracle, you shouldn't have the double workload of learning a new complicated tool as well.

    6. It's not a 'black box'

    Click through your objects, but also get the SQL that drives the GUI. As you use the wizards to accomplish tasks for you, you can view the SQL statement being generated on your behalf. Just because you have a GUI doesn't mean you're ceding your responsibility to learn the underlying code that makes the database work.

    7. It's four tools in one

    It's not just a query tool. Maybe you need to design a data model first? Or maybe you need to migrate your Sybase ASE database to Oracle for a new project? Or maybe you need to create some reports? SQL Developer does all of that. So once you get comfortable with one part of the tool, the others will be much easier to pick up as your needs change.

    8. Great learning resources available

    Videos, blogs, hands-on learning labs – you name it, we got it. Why wait for someone to train you, when you can train yourself at your own pace?

    9. You can use it to teach yourself SQL

    Instead of being faced with the white-screen-of-panic, you can visually build your queries by dragging and dropping tables and views into the Query Builder. Yes, 'just like Access' – only better. And as you build your query, toggle to the Worksheet panel and see the SQL statement. Again, SQL Developer is not a black box. If you prefer to learn by trial and error, the worksheet will attempt to suggest the next bit of your SQL statement with its completion insight feature. And if you have syntax errors, those will be highlighted – just like your misspelled words in your favorite word processor.

    10. It scales to match your experience level

    You won't be a n00b forever. In 6-8 months, when you're ready to tackle something a bit more complicated, like XML DB or Oracle Spatial, the tool is already there waiting on you. No need to go out and find the 'advanced' tool.

    11. Wait, you said this was a 'Top 10' list?

    Yes. Yes, I did. I'm using this 'trick' to get you to continue reading because I'm going to say something you might not want to hear. Are you ready? Tools won't replace experience, failure, hard work, and training. Just because you have the keys to the car doesn't mean you're ready to head out on the race track. While SQL Developer reduces the barriers to entry, it does not completely remove them. Many experienced folks simply do not like tools. Rather, they don't like the people that pick up tools without the know-how to properly use them. If you don't understand what 'TRUNCATE' means, don't try it out. Try picking up a book first. Of course, it's very nice to have your own sandbox to play in, so you don't upset the other children. That's why I really like our Dev Days Database Virtual Box image. It's your own database to learn and experiment with.

    Read the article

  • SSIS Lookup component tuning tips

    - by jamiet
    Yesterday evening I attended a London meeting of the UK SQL Server User Group at Microsoft's offices in London Victoria. As usual it was both a fun and informative evening, and in particular there seemed to be a few questions arising about tuning the SSIS Lookup component; I rattled off some comments and figured it would be prudent to drop some of them into a dedicated blog post, hence the one you are reading right now.

    Scene setting

    A popular pattern in SSIS is to use a Lookup component to determine whether a record in the pipeline already exists in the intended destination table or not, and I cover this pattern in my 2006 blog post Checking if a row exists and if it does, has it changed? (note to self: must rewrite that blog post for SSIS2008). Fundamentally the SSIS Lookup component (when using the FullCache option) sucks some data out of a database and holds it in memory so that it can be compared to data in the pipeline. One of the big benefits of using SSIS dataflows is that they process data one buffer at a time; that means that not all of the data from your source exists in the dataflow at the same time, and is why a SSIS dataflow can process data volumes that far exceed the available memory. However, that only applies to data in the pipeline; for reasons that are hopefully obvious, ALL of the data in the lookup set must exist in the memory cache for the duration of the dataflow's execution, which means that any memory used by the lookup cache will not be available to be used as a pipeline buffer. Moreover, there's an obvious correlation between the amount of data in the lookup cache and the time it takes to charge that cache; the more data you have, the longer it will take to charge and the longer you have to wait until the dataflow actually starts to do anything. For these reasons your goal is simple: ensure that the lookup cache contains as little data as possible.

    General tips

    Here is a simple tick list you can follow in order to tune your lookups:

    - Use a SQL statement to charge your cache, don't just pick a table from the dropdown list made available to you. (Read why in SELECT *... or select from a dropdown in an OLE DB Source component?)
    - Only pick the columns that you need, ignore everything else.
    - Make the database columns that your cache is populated from as narrow as possible. If a column is defined as VARCHAR(20) then SSIS will allocate 20 bytes for every value in that column – that is a big waste if the actual values are significantly less than 20 characters in length.
    - Do you need DT_WSTR typed columns or will DT_STR suffice? DT_WSTR uses twice the amount of space to hold values that can be stored using a DT_STR, so if you can use DT_STR, consider doing so. The same principle goes for the numerical datatypes DT_I2/DT_I4/DT_I8.
    - Only populate the cache with data that you KNOW you will need. In other words, think about your WHERE clause!

    Thinking outside the box

    It is tempting to build a large monolithic dataflow that does many things, one of which is a Lookup. Often though you can make better use of your available resources by, well, mixing things up a little, and here are a few ideas to get your creative juices flowing:

    - There is no rule that says everything has to happen in a single dataflow. If you have some particularly resource-intensive lookups then consider putting that lookup into a dataflow all of its own, and using raw files to pass the pipeline data in and out of that dataflow.
    - Know your data. If you think, for example, that the majority of your incoming rows will match with only a small subset of your lookup data, then consider chaining multiple lookup components together; the first would use a FullCache containing that data subset, and the remaining data that doesn't find a match could be passed to a second lookup that perhaps uses a NoCache lookup, thus negating the need to pull all of that least-used lookup data into memory.
    - Do you need to process all of your incoming data all at once? If you can process different partitions of your data separately, then you can partition your lookup cache as well. For example, if you are using a lookup to convert a location into a [LocationId], then why not process your data one region at a time? This will mean your lookup cache only has to contain data for the location that you are currently processing, and with the ability of the Lookup in SSIS2008 and beyond to charge the cache using a dynamically built SQL statement, you'll be able to achieve it using the same dataflow and simply loop over it using a ForEach loop.
    - Taking the previous data partitioning idea further... a dataflow can contain more than one data path, so why not split your data using a Conditional Split component and, again, charge your lookup caches with only the data that they need for that partition.
    - Lookups have two uses: to (1) find a matching row from the lookup set and (2) put attributes from that matching row into the pipeline. Ask yourself, do you need to do these two things at the same time? After all, once you have the key column(s) from your lookup set, you can use that key to get the rest of the attributes further downstream, perhaps even in another dataflow.
    - Are you using the same lookup data set multiple times? If so, consider the file caching option in SSIS 2008 and beyond.
    - Above all, experiment and be creative with different combinations. You may be surprised at what works.

    Final thoughts

    If you want to know more about how the Lookup component differs in SSIS2008 from SSIS2005, then I have a dedicated blog post about that at Lookup component gets a makeover.

    I am on a mini-crusade at the moment to get a BULK MERGE feature into the database engine, the thinking being that if the database engine can quickly merge massive amounts of data in a similar manner to how it can insert massive amounts using BULK INSERT, then that's a lot of work that wouldn't have to be done in the SSIS pipeline. If you think that is a good idea, then go and vote for BULK MERGE on Connect.

    If you have any other tips to share then please stick them in the comments. Hope this helps!

    @Jamiet

    Read the article

  • The SSIS tuning tip that everyone misses

    - by Rob Farley
    I know that everyone misses this, because I’m yet to find someone who doesn’t have a bit of an epiphany when I describe this. When tuning Data Flows in SQL Server Integration Services, people see the Data Flow as moving from the Source to the Destination, passing through a number of transformations. What people don’t consider is the Source, getting the data out of a database. Remember, the source of data for your Data Flow is not your Source Component. It’s wherever the data is, within your database, probably on a disk somewhere. You need to tune your query to optimise it for SSIS, and this is what most people fail to do. I’m not suggesting that people don’t tune their queries – there’s plenty of information out there about making sure that your queries run as fast as possible. But for SSIS, it’s not about how fast your query runs. Let me say that again, but in bolder text: The speed of an SSIS Source is not about how fast your query runs. If your query is used in a Source component for SSIS, the thing that matters is how fast it starts returning data. In particular, those first 10,000 rows to populate that first buffer, ready to pass down the rest of the transformations on its way to the Destination. Let’s look at a very simple query as an example, using the AdventureWorks database: We’re picking the different Weight values out of the Product table, and it’s doing this by scanning the table and doing a Sort. It’s a Distinct Sort, which means that the duplicates are discarded. It'll be no surprise to see that the data produced is sorted. Obvious, I know, but I'm making a comparison to what I'll do later. Before I explain the problem here, let me jump back into the SSIS world... If you’ve investigated how to tune an SSIS flow, then you’ll know that some SSIS Data Flow Transformations are known to be Blocking, some are Partially Blocking, and some are simply Row transformations. Take the SSIS Sort transformation, for example. I’m using a larger data set for this, because my small list of Weights won’t demonstrate it well enough. Seven buffers of data came out of the source, but none of them could be pushed past the Sort operator, just in case the last buffer contained the data that would be sorted into the first buffer. This is a blocking operation. Back in the land of T-SQL, we consider our Distinct Sort operator. It’s also blocking. It won’t let data through until it’s seen all of it. If you weren’t okay with blocking operations in SSIS, why would you be happy with them in an execution plan? The source of your data is not your OLE DB Source. Remember this. The source of your data is the NCIX/CIX/Heap from which it’s being pulled. Picture it like this... the data flowing from the Clustered Index, through the Distinct Sort operator, into the SELECT operator, where a series of SSIS Buffers are populated, flowing (as they get full) down through the SSIS transformations. Alright, I know that I’m taking some liberties here, because the two queries aren’t the same, but consider the visual. The data is flowing from your disk and through your execution plan before it reaches SSIS, so you could easily find that a blocking operation in your plan is just as painful as a blocking operation in your SSIS Data Flow. Luckily, T-SQL gives us a brilliant query hint to help avoid this. OPTION (FAST 10000) This hint means that it will choose a query which will optimise for the first 10,000 rows – the default SSIS buffer size. And the effect can be quite significant. 
First let’s consider a simple example, then we’ll look at a larger one. Consider our weights. We don’t have 10,000, so I’m going to use OPTION (FAST 1) instead. You’ll notice that the query is more expensive, using a Flow Distinct operator instead of the Distinct Sort. This operator is consuming 84% of the query, instead of the 59% we saw from the Distinct Sort. But the first row could be returned quicker – a Flow Distinct operator is non-blocking. The data here isn’t sorted, of course. It’s in the same order that it came out of the index, just with duplicates removed. As soon as a Flow Distinct sees a value that it hasn’t come across before, it pushes it out to the operator on its left. It still has to maintain the list of what it’s seen so far, but by handling it one row at a time, it can push rows through quicker. Overall, it’s a lot more work than the Distinct Sort, but if the priority is the first few rows, then perhaps that’s exactly what we want. The Query Optimizer seems to do this by optimising the query as if there were only one row coming through: This 1 row estimation is caused by the Query Optimizer imagining the SELECT operation saying “Give me one row” first, and this message being passed all the way along. The request might not make it all the way back to the source, but in my simple example, it does. I hope this simple example has helped you understand the significance of the blocking operator. Now I’m going to show you an example on a much larger data set. This data was fetching about 780,000 rows, and these are the Estimated Plans. The data needed to be Sorted, to support further SSIS operations that needed that. First, without the hint. ...and now with OPTION (FAST 10000): A very different plan, I’m sure you’ll agree. In case you’re curious, those arrows in the top one are 780,000 rows in size. In the second, they’re estimated to be 10,000, although the Actual figures end up being 780,000. The top one definitely runs faster. It finished several times faster than the second one. With the amount of data being considered, these numbers were in minutes. Look at the second one – it’s doing Nested Loops, across 780,000 rows! That’s not generally recommended at all. That’s “Go and make yourself a coffee” time. In this case, it was about six or seven minutes. The faster one finished in about a minute. But in SSIS-land, things are different. The particular data flow that was consuming this data was significant. It was being pumped into a Script Component to process each row based on previous rows, creating about a dozen different flows. The data flow would take roughly ten minutes to run – ten minutes from when the data first appeared. The query that completes faster – chosen by the Query Optimizer with no hints, based on accurate statistics (rather than pretending the numbers are smaller) – would take a minute to start getting the data into SSIS, at which point the ten-minute flow would start, taking eleven minutes to complete. The query that took longer – chosen by the Query Optimizer pretending it only wanted the first 10,000 rows – would take only ten seconds to fill the first buffer. Despite the fact that it might have taken the database another six or seven minutes to get the data out, SSIS didn’t care. Every time it wanted the next buffer of data, it was already available, and the whole process finished in about ten minutes and ten seconds. When debugging SSIS, you run the package, and sit there waiting to see the Debug information start appearing. 
You watch for the row counts on the data flow, and for operators turning yellow and green. Without the hint, I'd sit there for a minute; with the hint, just ten seconds. You can imagine which one I preferred. Adding this hint felt like a magic wand had been waved across the query, making it run several times faster. That wasn't the case at all – but to SSIS, it felt like it.
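In practice, the hint simply lives in the query you give the Source component. Here's a minimal sketch of how it might look in the SQL command text of an OLE DB Source – the table and column names below are hypothetical:

-- Hypothetical source query for an OLE DB Source component.
-- The ORDER BY supports downstream operations that need sorted input;
-- the hint asks for a plan that fills the first SSIS buffer quickly.
SELECT TransactionID, ProductID, Quantity
FROM dbo.BigTransactionTable
ORDER BY TransactionID
OPTION (FAST 10000);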

    Read the article

  • Maintenance plans love story

    - by Maria Zakourdaev
There are about 200 QA and DEV SQL Servers out there. On many of them there is a maintenance plan that performs a backup of all databases and removes the backup history files. First of all, I must admit that I'm no big fan of maintenance plans in particular, or of SSIS packages in general. In this specific case, if I ever need to change anything about the way the backup is performed – enabling compression, say – I have to open each plan one by one, which is quite a pain. Therefore, I decided to replace the maintenance plans with a stored procedure that performs exactly the same work. Having such a procedure allows me to open multiple server connections and just execute an ALTER PROCEDURE whenever I need to change anything in it. There is nothing like good ole T-SQL.

The first challenge was to remove the unneeded maintenance plans, and of course I didn't want to do it server by server. I found the procedure msdb.dbo.sp_maintplan_delete_plan, but its only parameter is the maintenance plan id; a plan name parameter would have been much more useful. So I needed to find the table that holds all maintenance plans on the server. You would think it would be msdb.dbo.sysdbmaintplans but, unfortunately, regardless of the number of maintenance plans on the instance, that table contains just one row. After a while I found another table: msdb.dbo.sysmaintplan_subplans. It contains the plan id I was looking for, in the plan_id column, as well as the agent job id that executes the plan's package.

That was all I needed, and the rest turned out to be quite easy. Here is a script that can be executed against hundreds of servers from a multi-server query window to drop the specific maintenance plans:

DECLARE @PlanID uniqueidentifier

SELECT @PlanID = plan_id
FROM msdb.dbo.sysmaintplan_subplans
WHERE name LIKE 'BackupPlan%'

EXECUTE msdb.dbo.sp_maintplan_delete_plan @plan_id = @PlanID

The second step was to create a procedure that performs all of the old maintenance plan tasks: create a folder for each database, back up all databases on the server, and clean up the old files. The script is below. Enjoy.

ALTER PROCEDURE BackupAllDatabases
      @PrintMode BIT = 1
AS
BEGIN
      DECLARE @BackupLocation VARCHAR(500)
      DECLARE @PurgeAfterDays INT
      DECLARE @PurgingDate VARCHAR(30)
      DECLARE @SQLCmd VARCHAR(MAX)
      DECLARE @FileName VARCHAR(100)

      SET @PurgeAfterDays = -14
      SET @BackupLocation = '\\central_storage_servername\BACKUPS\' + @@SERVERNAME

      SET @PurgingDate = CONVERT(VARCHAR(19), DATEADD(dd, @PurgeAfterDays, GETDATE()), 126)

      SET @FileName = '?_full_'
                    + REPLACE(CONVERT(VARCHAR(19), GETDATE(), 126), ':', '-')
                    + '.bak';

      SET @SQLCmd = '
            IF ''?'' <> ''tempdb'' BEGIN
                  EXECUTE master.dbo.xp_create_subdir N''' + @BackupLocation + '\?\'' ;

                  BACKUP DATABASE ? TO DISK = N''' + @BackupLocation + '\?\' + @FileName + '''
                  WITH NOFORMAT, NOINIT, SKIP, REWIND, NOUNLOAD, COMPRESSION, STATS = 10 ;

                  EXECUTE master.dbo.xp_delete_file 0, N''' + @BackupLocation + '\?\'', N''bak'', N''' + @PurgingDate + ''', 1;
            END'

      IF @PrintMode = 1 BEGIN
            PRINT @SQLCmd
      END

      EXEC sp_MSforeachdb @SQLCmd
END
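A quick usage sketch, assuming the procedure has been deployed to each server. Note that @PrintMode = 1 prints the generated command as well as running the backups, which makes for a handy sanity check from a multi-server query window:

-- Run the backups, also printing the command generated for each database
EXEC dbo.BackupAllDatabases @PrintMode = 1;

-- Run the backups silently
EXEC dbo.BackupAllDatabases @PrintMode = 0;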

    Read the article

  • New Product: Oracle Java ME Embedded 3.2 – Small, Smart, Connected

    - by terrencebarr
The Internet of Things (IoT) is coming. And, with today's launch of the Oracle Java ME Embedded 3.2 product, Java is going to play an even greater role in it.

Java in the Internet of Things

By all accounts, intelligent embedded devices are penetrating the world around us – driving industrial processes, monitoring environmental conditions, providing better health care, analyzing and processing data, and much more. And these devices are becoming increasingly connected, adding another dimension of utility. Welcome to the Internet of Things. As I blogged yesterday, this is a huge opportunity for Java technology and its ecosystem. To enable and utilize these billions of devices effectively, you need a programming model, tools, and protocols which provide a feature-rich, consistent, scalable, manageable, and interoperable platform. Java technology is ideally suited to address these technical and business problems, enabling you to eliminate many of the typical challenges in designing embedded solutions. By using Java you can focus on building smarter, more valuable embedded solutions faster. To wit, Java technology already powers around 10 billion devices worldwide.

Delivering on this vision and accelerating the growth of embedded Java solutions, Oracle is today announcing a brand-new product: Oracle Java Micro Edition (ME) Embedded 3.2, accompanied by an update of the Java ME Software Development Kit (SDK) to version 3.2.

What is Oracle Java ME Embedded 3.2?

Oracle Java ME Embedded 3.2 is a complete Java runtime client, optimized for ARM architecture connected microcontrollers and other resource-constrained systems. The product provides dedicated embedded functionality and is targeted at low-power, limited-memory devices requiring support for a range of network services and I/O interfaces.

What features and APIs are provided by Oracle Java ME Embedded 3.2?

Oracle Java ME Embedded 3.2 is a Java ME runtime based on CLDC 1.1 (JSR-139) and IMP-NG (JSR-228). The runtime and virtual machine (VM) are highly optimized for embedded use. Also included in the product are the following optional JSRs and Oracle APIs:

- File I/O APIs (JSR-75)
- Wireless Messaging APIs (JSR-120)
- Web Services (JSR-172)
- Security and Trust Services subset (JSR-177)
- Location APIs (JSR-179)
- XML APIs (JSR-280)
- Device Access API
- Application Management System (AMS) API
- AccessPoint API
- Logging API

Additional embedded features are:

- Remote application management system
- Support for continuous 24×7 operation
- Application monitoring, auto-start, and system recovery
- Application access to peripheral interfaces such as GPIO, I2C, SPI, and memory-mapped I/O
- Application-level logging framework, including an option for remote logging
- Headless on-device debugging – source-level Java application debugging over an IP connection
- Remote configuration of the Java VM

What type of platforms are targeted by Oracle Java ME Embedded 3.2?

The product is designed for embedded, always-on, resource-constrained, headless (no graphics, no UI), connected (wired or wireless) devices with a variety of peripheral I/O.
The high-level system requirements are as follows:

- System based on ARM architecture SOCs
- Memory footprint (approximate) from 130 KB RAM / 350 KB ROM (for a minimal, customized configuration) to 700 KB RAM / 1500 KB ROM (for the full, standard configuration)
- Very simple embedded kernel, or a more capable embedded OS/RTOS
- At least one type of network connection (wired or wireless)

The initial release of the product is delivered as a device emulation environment for x86/Windows desktop computers, integrated with the Java ME SDK 3.2. A standard binary of Oracle Java ME Embedded 3.2 for ARM KEIL development boards based on ARM Cortex M-3/4 (KEIL MCBSTM32F200 using ST Micro SOC STM32F207IG) will soon be available for download from the Oracle Technology Network (OTN).

What types of applications can I develop with Oracle Java ME Embedded 3.2?

The Oracle Java ME Embedded 3.2 product is a full-featured embedded Java runtime supporting applications based on the IMP-NG application model, which is derived from the well-known MIDP 2 application model. The runtime supports execution of multiple concurrent applications, remote application management, versatile connectivity, and a rich set of APIs and features relevant to embedded use cases, including the ability to interact with peripheral I/O directly from Java applications. This rich feature set, coupled with familiar, best-in-class software development tools, allows developers to quickly build and deploy sophisticated embedded solutions for a wide range of use cases. Target markets well supported by Oracle Java ME Embedded 3.2 include wireless modules for M2M, industrial and building control, smart grid infrastructure, home automation, and environmental sensors and tracking.

What tools are available for embedded application development for Oracle Java ME Embedded 3.2?

Along with the release of Oracle Java ME Embedded 3.2, Oracle is also making available an updated version of the Java ME Software Development Kit (SDK), together with plug-ins for the NetBeans and Eclipse IDEs, to deliver a complete development environment for embedded application development.

OK – sounds great! Where can I find out more? And how do I get started?

There is a complete set of information available – data sheet, API documentation, "Getting Started Guide", FAQ, and download links:

- For an overview of Oracle Embeddable Java, see here.
- For the Oracle Java ME Embedded 3.2 press release, see here.
- For the Oracle Java ME Embedded 3.2 data sheet, see here.
- For the Oracle Java ME Embedded 3.2 landing page, see here.
- For the Oracle Java ME Embedded 3.2 documentation page, including a "Getting Started Guide" and FAQ, see here.
- For the Oracle Java ME SDK 3.2 landing and download page, see here.
- Finally, to ask more questions, please see the OTN "Java ME Embedded" forum.

To get started, grab the "Getting Started Guide" and download the Java ME SDK 3.2, which includes the Oracle Java ME Embedded 3.2 device emulation.

Can I learn more about Oracle Java ME Embedded 3.2 at JavaOne and/or Java Embedded @ JavaOne?

Glad you asked! Both conferences, JavaOne and Java Embedded @ JavaOne, will feature a host of content and information around the new Oracle Java ME Embedded 3.2 product, from technical and business sessions to hands-on tutorials and demos. Stay tuned, I will post details shortly.

Cheers, – Terrence

Filed under: Mobile & Embedded. Tagged: "Oracle Java ME Embedded", Connected, embedded, Embedded Java, Java Embedded @ JavaOne, JavaOne, Smart

    Read the article

  • CodePlex Daily Summary for Sunday, March 07, 2010

CodePlex Daily Summary for Sunday, March 07, 2010

New Projects

- Algorithminator: Universal .NET algorithm visualizer, which helps you to illustrate any algorithm, written in any .NET language. Still in development.
- ALToolkit: Contains a set of handy .NET components/classes. Currently it contains: * A Numeric Text Box (an Extended NumericUpDown) * A Splash Screen base fo...
- Automaton Home: Automaton is a home automation software built with an n-Tier, MVVM pattern utilizing WCF, EF, WPF, Silverlight and XBAP.
- Developer Controls: Developer Controls contains various controls to help build applications that can script/write code.
- Dynamic Reference Manager: Dynamic Reference Manager is a set (more like a small group) of classes and attributes written in C# that allows any .NET program to reference othe...
- indiologic: Utilities of an Indio
- Neural Cryptography in F#: This project is my magistracy resulting work. It is intended to be an example of using neural networks in cryptography. Hashing functions are chose...
- Particle Filter Visualization: Particle Filter Visualization Program for the Intel Science and Engineering Fair
- Pólya: Efficient, immutable, polymorphic collections. .Net lacks them, we provide them*. * By we, we mean I; and by efficient, I mean hopefully so.
- project euler solutions from mhinze: mhinze project euler solutions
- Silverlight 4 and WCF multi layer: Silverlight 4 and WCF multi layers
- sqwarea: Project for a browser-based, minimalistic, massively multiplayer strategy game. Part of the "Génie logiciel et Cloud Computing" course of the ENS (...
- SuperSocket: SuperSocket, a socket application framework, can build FTP/SMTP/POP servers easily
- Toast (for ASP.NET MVC): Dynamic, developer & designer friendly content injection, compression and optimization for ASP.NET MVC

New Releases

- ALToolkit: ALToolkit 1.0: Binary release of the libraries containing: NumericTextBox, SplashScreen. Based on the VB.NET code, but that doesn't really matter.
- Blacklist of Providers: 1.0-Milestone 1: In this development release implemented: Main interface (Work Item #5453); Database (Work Item #5523)
- C# Linear Hash Table: Linear Hash Table b2: Now includes a default constructor, and will throw an exception if capacity is not set to a power of 2 or loadToMaintain is below 1.
- Composure: CassiniDev-Trunk-40745-VS2010.rc1.NET4: A simple port of the CassiniDev portable web server project for Visual Studio 2010 RC1, built against .NET 4.0. The WCF tests currently fail unless...
- Developer Controls: DevControls: These are the version 1.0 releases of these controls. Download them individually or all together (in a .zip file). More releases coming soon!
- Dynamic Reference Manager: DRM Alpha1: This is the first release. I'm calling it Alpha because I intend implementing other functions, but I do not intend changing the way current functio...
- ESB Toolkit Extensions: Tellago SOA ESB Extenstions v0.3: Windows Installer file that installs the library on a BizTalk ESB 2.0 system. This install automatically configures the esb.config to use the new compo...
- GKO Libraries: GKO Libraries 0.1 Alpha: 0.1 Alpha
- Home Access Plus+: v3.0.3.0: Version 3.0.3.0 release change log: Added Announcement Box; removed script files that aren't needed; fixed an issue in directory path; stylesheet...
- Icarus Scene Engine: Icarus Scene Engine 1.10.306.840: Icarus Professional, Icarus Player, the supporting software for Icarus Scene Engine, with some included samples, and the start of a tutorial (with ...
- mavjuz WndLpt: wndlpt-0.2.5: New: response to 5 LPT inputs "test i 1"; New: reaction to 12 LPT outputs "test q 8"; New: reaction to all LPT pins "test pin 15"; New: syntax: ...
- Neural Cryptography in F#: Neural Cryptography 0.0.1: The most simple version of this project. It has a neural network that works just like logical AND, and a possibility to recreate a neural network from...
- Password Provider: 1.0.3: This release fixes a bug which caused the program to crash when double-clicking on a generic item.
- RoTwee: RoTwee 6.2.0.0: New feature is as next: 16649 Add hashtag for tweet of tune. Now you can tweet your playing tune with a hashtag.
- Visual Studio DSite: Picture Viewer (Visual C++ 2008): This example source code allows you to view any picture you want at the click of a button. All you have to do is click the button and browse via th...
- WatchersNET CKEditor™ Provider for DotNetNuke: CKEditor Provider 1.8.00: What's new: File Browser: Folders & Files View reworked; File Browser: folders are displayed as TreeVi...
- WSDLGenerator: WSDLGenerator 0.0.0.4: replaced CommonLibrary.dll by CommandLineParser.dll; added better support for custom complex types

Most Popular Projects

- MetaSharp
- Silverlight Toolkit
- ASP.NET Ajax Library
- All-In-One Code Framework
- Windows 7 USB/DVD Download Tool
- ニコ生アラート
- Windows Double Explorer
- Virtual Router - Wifi Hot Spot for Windows 7 / 2008 R2
- Caliburn: An Application Framework for WPF and Silverlight
- ArkSwitch

Most Active Projects

- Umbraco CMS
- Rawr
- SDS: Scientific DataSet library and tools
- BlogEngine.NET
- jQuery Library for SharePoint Web Services
- patterns & practices – Enterprise Library
- Ionics Isapi Rewrite Filter
- Farseer Physics Engine
- Fasterflect - A Fast and Simple Reflection API
- Fluent Assertions

    Read the article
