Search Results

Search found 14251 results on 571 pages for 'identity management'.


  • Task management algorithm in C#

    - by silverwizz
    Hi guys, I am looking for an efficient task management algorithm for C#. By task management I mean executing tasks at pre-defined intervals. Example: task A needs to run every 1 minute, task B needs to run every 3 minutes, task C needs to run every 5 minutes. These tasks can be added and removed at arbitrary times, and there can be 100,000 or more of them. Each task is executed forever until it is removed. Are you familiar with this kind of algorithm? I am thinking of implementing it in either C# or PHP. Thanks
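    A common way to handle very large numbers of recurring tasks is to keep a single priority queue (min-heap) ordered by each task's next due time, so only the earliest deadline is ever inspected. Below is a minimal C# sketch of that idea; the class and method names are illustrative, not from any particular library, and it assumes .NET 6+ for PriorityQueue:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

// Illustrative sketch: one min-heap keyed by next due time (assumes .NET 6+).
class ScheduledTask
{
    public string Name;
    public TimeSpan Interval;
    public Action Work;
}

class IntervalScheduler
{
    private readonly PriorityQueue<ScheduledTask, DateTime> _queue = new();
    private readonly HashSet<ScheduledTask> _removed = new();
    private readonly object _lock = new();

    public void Add(ScheduledTask task)
    {
        lock (_lock) _queue.Enqueue(task, DateTime.UtcNow + task.Interval);
    }

    public void Remove(ScheduledTask task)
    {
        lock (_lock) _removed.Add(task); // lazy removal; skipped when it surfaces
    }

    public void RunLoop(CancellationToken token)
    {
        while (!token.IsCancellationRequested)
        {
            ScheduledTask next = null;
            lock (_lock)
            {
                // Pop entries until we find one that is due and not removed.
                while (_queue.TryPeek(out var task, out var due) && due <= DateTime.UtcNow)
                {
                    _queue.Dequeue();
                    if (_removed.Remove(task)) continue;              // dropped task, skip it
                    next = task;
                    _queue.Enqueue(task, DateTime.UtcNow + task.Interval); // schedule next run
                    break;
                }
            }
            if (next != null) next.Work();   // in practice: hand off to a thread pool
            else Thread.Sleep(50);           // nothing due yet; sleep briefly
        }
    }
}
```

    With 100,000+ tasks each enqueue/dequeue is O(log n) and the loop only ever inspects the earliest deadline; the actual work should be dispatched to worker threads rather than run on the scheduler thread.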

    Read the article

  • Document Management System - Where to Store Files?

    - by Diego AC
    Hey, stack! I'm in charge of building an ASP.NET MVC document management system. It has to be able to do basic document management tasks like adding, editing and searching entries, and also perform versioning. Anyway, I'm targeting PDF, Office and many image formats as the file attached to each document entry in the database. My question is: what design guidelines do the pros follow when building the storage mechanism? Do they store the document files in the file system? In the database? How is file uploading handled? I used to upload the file to a temporary location while the user was editing the data and move it to permanent storage when the user confirmed the entry creation. Is this good? Any suggestions for improvement?
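    For what it's worth, the temp-then-promote pattern described above is a common one: save the upload under a generated name in a staging folder, and only move it to permanent, versioned storage (keeping the path or key in the database row) once the entry is confirmed. A minimal C# sketch of that flow, with made-up folder paths and method names for illustration:

```csharp
using System;
using System.IO;

// Illustrative sketch only: staging/permanent paths and method names are invented.
static class DocumentStore
{
    static readonly string Staging   = @"C:\DocStore\staging";
    static readonly string Permanent = @"C:\DocStore\files";

    // Called while the user is still editing the entry.
    public static string SaveToStaging(Stream upload, string originalName)
    {
        Directory.CreateDirectory(Staging);
        string tempPath = Path.Combine(Staging,
            Guid.NewGuid().ToString("N") + Path.GetExtension(originalName));
        using (var fs = File.Create(tempPath))
            upload.CopyTo(fs);
        return tempPath;                  // keep this in session / the draft record
    }

    // Called when the user confirms the entry; returns the key stored in the DB row.
    public static string Promote(string tempPath, int documentId, int version)
    {
        string folder = Path.Combine(Permanent, documentId.ToString());
        Directory.CreateDirectory(folder);
        string finalPath = Path.Combine(folder, $"v{version}{Path.GetExtension(tempPath)}");
        File.Move(tempPath, finalPath);
        return finalPath;                 // or a relative key, if the root may move
    }
}
```

    Whether the bytes end up on the file system (with only a path or key in the database) or inside the database itself is a separate trade-off; the staging step works the same either way.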

    Read the article

  • Project management estimating time, budget and product pricing

    - by dr_hoppa
    I have been a software developer for a while but was not interested in the above topics; now I find myself wanting to learn more about them but don't have a clue where to begin. I have done task estimations and I can do decent ones, but I have little to no experience in the field of budgeting and product pricing and want to learn more. Do you have any suggestions for good resources I could look up in order to learn more (e.g. books, blogs, ...)? This need arose while talking to a friend about management, when he brought up management terms that I wasn't aware of (e.g. KPI, ...).

    Read the article

  • C++ memory management of reference types

    - by Russel
    Hello, I'm still a fairly novice programmer and I have a question about C++ memory management with reference types. First of all, my understanding of reference types: a pointer is put on the stack and the actual data that the pointer points to is created and placed on the heap. Standard arrays and user-defined classes are reference types. Is this correct? Second, my main question is: do C and C++'s memory management mechanisms (malloc/free and new/delete) always handle this properly and free the memory that a class or array is pointing to? Does everything still work if those pointers get reassigned somehow to other objects of the same size/type on the heap? What if a class has a pointer member that points to another object? I am assuming that deleting/freeing the class object doesn't free what its member pointer points to - is that correct? Thanks all! -R

    Read the article

  • Delphi Memory Management

    - by nomad311
    I haven't been able to find the answers to a couple of my Delphi memory management questions. I could test different scenarios (which I did to find out what breaks the FreeAndNil method), but it takes too long and it's hard! But seriously, I would also like to know how you all (Delphi developers) handle these memory management issues. My questions (feel free to pose your own; I'm sure the answers to them will help me too): Does FreeAndNil work for COM objects? My thought is that I don't need it, but if all I need to do is set the reference to nil, then why not stay consistent in my finally block and use FreeAndNil for everything? What's the proper way to clean up static arrays (myArr : Array[0..5] of TObject)? I can't FreeAndNil the array itself, so is it good enough to just set it to nil (and do I need to do that after I've FreeAndNil'd each object?)? Thanks guys!

    Read the article

  • Which tool do you use for tracking daily/weekly progress on your long-term goals?

    - by NoCatharsis
    I read through this post and this post to get an idea of short-term task management applications out there. However, now I'm having a problem tracking my longer-term goals pertaining to things like career, education, and exercise. I just discovered Disciplanner which might meet my needs but haven't had time to set it up just yet. Not to mention, it took me about 2 weeks of playing with every online to-do list app I could find before I came to a decision on which to use. I'm very particular about my tasks and goal-setting, so I would prefer to get some advice from other superusers before diving into the web again to find another life-management website.

    Read the article

  • Oracle @ AIIM Conference

    - by [email protected]
    Oracle will be at the AIIM Conference and Exposition next week in Philadelphia. On the opening morning, Robert Shimp, Group Vice President, Global Technology Business Unit, Oracle Corporation, will moderate an executive keynote panel. Mr. Shimp will lead four Oracle customer executives through a lively discussion of how innovative organizations are driving the integration of content management with their core business processes, on Tuesday, April 20th at 8:45 AM. Our panelists are: CINDY BIXLER, CIO, Embry Riddle Aeronautical University; TOM SHOWALTER, Managing Director, JP Morgan Chase; IRFAN MOTIWALA, Vice President, Moody's Investors Service MIT; MONICA CROCKER, CRM, PMP, Corporate Records Manager, Land O'Lakes. For more information on our panelists, click here.

    Oracle will be in booth #2113 at the AIIM Expo. Come by and enter the daily raffle to win a Netbook! Oracle and Oracle partners will demonstrate solutions that increase productivity, reduce costs and ensure compliance for business processes such as accounts payable, human resource onboarding, marketing campaigns, sales management, large-scale diagrams for facilities and manufacturing, case management, and others. Oracle products including Oracle Universal Content Management, Oracle Imaging and Process Management, Oracle Universal Records Management, Oracle WebCenter, Oracle AutoVue, and Oracle Secure Enterprise Search will be demonstrated in the booth. Oracle will host a private event at The Field House Sports Bar - see your Oracle representative for more details. Oracle customers can meet in private meeting rooms with their Oracle representatives.

    Key sessions: Besides the opening morning keynote panel, Oracle will have a number of other sessions at the conference. Oracle Content Management will be featured in session G08 - A Passage to Improving Healthcare: Enhancing EMR with Electronic Records, Wednesday, April 21st, 2:25 PM-3:10 PM. Kristina Parma of Oracle partner ImageSource will deliver this session, along with Pam Doyle of Fujitsu and Nancy Gladish of Swedish Medical Center. Kristina will also be in the Oracle booth to talk about this solution. On Tuesday, April 20th at 4:05 PM, Ajay Gandhi of Oracle will deliver a session entitled Harnessing SharePoint Content for Enterprise Processes in PeopleSoft, Siebel, E-Business Suite and JD Edwards. Tuesday, April 20th, 1:15 PM-1:45 PM - Bringing Content Management to Your AP, HR, Sales and Marketing Processes - Application Showcase Theater (on the AIIM Expo floor, booth 1549). Wednesday, April 21st, 12:30 PM-1:00 PM - Embed and Edit Content Anywhere - Application Showcase Theater (on the AIIM Expo floor, booth 1549). For more information, see the AIIM Expo page on the Oracle website.

    Read the article

  • When is the best time to do self learning in relation with software management?

    - by shankbond
    It all started from here. I have been following Software Estimation: Demystifying the Black Art (Best Practices (Microsoft)). The third chapter says that in software management: You cannot give software developers too much time; if you do, it is likely that the extra time will be filled by other tasks (in other words, the developers will eat that time :)) - Parkinson's Law. You also cannot squeeze time out of their schedule, because if you do, it is likely that they will produce a poor-quality product with a poor design, which will hurt you in the long run - a panic situation and total chaos in the project, lots of rework, etc. My question is related to the first point. If you never give them any slack, when will the typical software engineer learn new skills? The market is always coming out with new technologies that you need to learn, and even with existing, familiar technologies there are always best practices and dos and don'ts.

    Read the article

  • Project Management Tool for developers and sysadmins: shared or separate?

    - by David
    Should a team of system administrators who are on a software development project share a project management tool with the developers, or use their own separate one? We use Trac, and I see the benefit in sharing, since inter-team tasks can be maintained in a single system where there may be cross-over or misfiled bugs (e.g. an apparent bug which turns out to be a server configuration issue, or a development cycle which needs a server to be configured before it can start). However, sharing could be difficult, since many system administration tasks don't coincide with a single development milestone, if at all. So should a system administration team use a separate PM tool or share the same one with the developers? If they should share, then how?

    Read the article

  • How can I make a case for "dependency management"?

    - by C. Ross
    I'm currently trying to make a case for adopting dependency management for builds (a la Maven, Ivy, NuGet) and creating an internal repository for shared modules, of which we have over a dozen enterprise-wide. What are the primary selling points of this build technique? The ones I have so far: it eases the process of distributing and importing shared modules, especially version upgrades; it requires the dependencies of shared modules to be precisely documented; it removes shared modules from source control, speeding and simplifying checkouts/check-ins (when you have applications with 20+ libraries this is a real factor); and it allows more control of, or at least awareness of, which third-party libs are used in your organization. Are there any selling points that I'm missing? Are there any studies or articles giving improvement metrics?

    Read the article

  • Choose identity from ssh-agent by file name

    - by leoluk
    Problem: I have some 20-30 ssh-agent identities. Most servers refuse authentication with "Too many failed authentications", since SSH usually won't let me try 20 different keys to log in. At the moment I am specifying the identity file for every host manually, using the IdentityFile and IdentitiesOnly directives, so that SSH will try only one key file, which works. Unfortunately, this stops working as soon as the original keys aren't available any more. ssh-add -l shows me the correct paths for every key file, and they match the paths in .ssh/config, but it doesn't work. Apparently, SSH selects the identity by public key signature and not by file name, which means that the original files have to be available so that SSH can extract the public key. There are two problems with this: it stops working as soon as I unplug the flash drive holding the keys, and it renders agent forwarding useless, since the key files aren't available on the remote host. Of course, I could extract the public keys from my identity files and store them on my computer, and on every remote computer I usually log into, but this doesn't look like a desirable solution. What I need is a way to select an identity from ssh-agent by file name, so that I can easily select the right key using .ssh/config or by passing -i /path/to/original/key, even on a remote host I have SSH'd into. It would be even better if I could "nickname" the keys so that I don't even have to specify the full path.
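    For reference, the per-host pinning described above is an ~/.ssh/config stanza along these lines (host name and key path are placeholders); the catch, as noted, is that the file named by IdentityFile must still exist locally for SSH to match the agent key:

```
Host build-box
    HostName build.example.com
    IdentityFile ~/.ssh/keys/build-box_rsa
    IdentitiesOnly yes
```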

    Read the article

  • How Mature is Your Database Change Management Process?

    - by Ben Rees
    How do you get your database schema changes live, on to your production system? As your team of developers and DBAs are working on the changes to the database to support your business-critical applications, how do these updates wend their way through from dev environments, possibly to QA, hopefully through pre-production and eventually to production in a controlled, reliable and repeatable way? In this article, I describe a model we use to try and understand the different stages that customers go through as their database change management processes mature, from the very basic and manual, through to advanced continuous delivery practices. I also provide a simple chart that will help you determine “How mature is our database change management process?”

    This process of managing changes to the database – which all of us who have worked in application/database development have had to deal with in one form or another – is sometimes known as Database Change Management (even if we’ve never used the term ourselves). And it’s a difficult process, often painfully so. Some developers take the approach of “I’ve no idea how my changes get live – I just write the stored procedures and add columns to the tables. It’s someone else’s problem to get this stuff live. I think we’ve got a DBA somewhere who deals with it – I don’t know, I’ve never met him/her”. I know I used to work that way. I worked that way because I assumed that making the updates to production was a trivial task – how hard can it be? Pause the application for half an hour in the middle of the night, copy over the changes to the app and the database, and switch it back on again? Voila! But somehow it never seemed that easy. And it certainly was never that easy for database changes. Why? Because you can’t just overwrite the old database with the new version. Databases have a state – more specifically 4Tb of critical data built up over the last 12 years of running your business, and if your quick hotfix happened to accidentally delete that 4Tb of data, then you’re “Looking for a new role” pretty quickly after the failed release.

    There are a lot of other reasons why a managed database change management process is important for organisations, besides job security, not least:

    Frequency of releases. Many business managers are feeling the pressure to get functionality out to their users sooner, quicker and more reliably. The new book (which I highly recommend) Lean Enterprise by Jez Humble, Barry O’Reilly and Joanne Molesky provides a great discussion on how many enterprises are having to move towards a leaner, more frequent release cycle to maintain their competitive advantage. It’s no longer acceptable to release once per year, leaving your customers waiting all year for changes they desperately need (and expect).

    Auditing and compliance. SOX, HIPAA and other compliance frameworks have demanded that companies implement proper processes for managing changes to their databases, whether managing schema changes, making sure that the data itself is being looked after correctly or other mechanisms that provide an audit trail of changes.
    We’ve found, at Red Gate, that we have a very wide range of customers using every possible form of database change management imaginable. Everything from “Nothing – I just fix the schema on production from my laptop when things go wrong, and write it down in my notebook” to “A full Continuous Delivery process – any change made by a dev gets checked in and recorded, fully tested (including performance tests) before a (tested) release is made available to our Release Management system, ready for live deployment!”. And everything in between, of course. Because of the vast number of customers using so many different approaches we found ourselves struggling to keep on top of what everyone was doing – struggling to identify patterns in customers’ behavior. This is useful for us, because we want to try and fit the products we have to different needs – different products are relevant to different customers and we waste everyone’s time (most notably, our customers’) if we’re suggesting products that aren’t appropriate for them. If someone visited a sports store, looking to embark on a new fitness program, and the store assistant suggested the latest $10,000 multi-gym, complete with multiple weights mechanisms, dumb-bells, pull-up bars and so on, then he’s likely to lose that customer. All he needed was a pair of running shoes!

    To solve this issue – in an attempt to simplify how we understand our customers and our offerings – we built a model. This is an attempt at classifying our customers into some sort of model or “Customer Maturity Framework”, as we rather grandly term it, which somehow simplifies our understanding of what our customers are doing. The great statistician, George Box (amongst other things, the “Box” in the Box-Jenkins time series model) gave us the famous quote: “Essentially all models are wrong, but some are useful”. We’ve taken this quote to heart – we know it’s a gross over-simplification of the real world of how users work with complex legacy and new database developments. Almost nobody precisely fits into one of our categories. But we hope it’s useful and interesting.

    There are actually a number of similar models that exist for more general application delivery. We’ve found these from ThoughtWorks/Forrester, from InfoQ and others, and initially we tried just taking these models and replacing the word “application” with “database”. However, we hit a problem. From talking to our customers we know that users are far less far down the road of mature database change management than they are for application development. As a simple example, no application developer who wants to keep his/her job would develop an application for an organisation without source controlling that code. Sure, he/she might not be using an advanced Gitflow branching methodology, but they’ll certainly be making sure their code gets managed in a repo somewhere with all the benefits of history, auditing and so on. But this certainly isn’t the case (yet) for the database – a very large segment of the people we speak to have no source control set up for their databases whatsoever, even at the most basic level (for example, keeping change scripts in a source control system somewhere). By the way, if this is you, Red Gate has a great whitepaper here, on the barriers people face getting a source control process implemented at their organisations.
    This difference in maturity is the same as you move into areas such as continuous integration (common amongst app developers, relatively rare for database developers) and automated release management (growing amongst app developers, very rare for the database). So, when we created the model we started from scratch and biased the levels of maturity towards what we actually see amongst our customers. But what are these stages? And what level are you? The table below describes our definitions for four levels of maturity – Baseline, Beginner, Intermediate and Advanced. As I say, this is a model – you won’t fit any of these categories perfectly, but hopefully one will ring true more than others. We’ve also created a PDF with a flow chart to help you find which of these groups most closely matches your team: Download the Database Delivery Maturity Framework PDF here.

    Level D1 – Baseline: Work directly on live databases. Sometimes work directly in production. Generate manual scripts for releases, sometimes using a product like SQL Compare or similar to do this. Any tests that we might have are run manually.

    Level D2 – Beginner: Have some ad-hoc DB version control, such as manually adding upgrade scripts to a version control system. An attempt is made to keep production in sync with development environments. There is some documentation and planning of manual deployments. Some basic automated DB testing is in place.

    Level D3 – Intermediate: The database is fully version-controlled with a product like Red Gate SQL Source Control or SSDT. Database environments are managed. The production environment schema is reproducible from the source control system. There are some automated tests. Have looked at using migration scripts for difficult database refactoring cases.

    Level D4 – Advanced: Using continuous integration for database changes. Build, testing and deployment of DB changes are carried out through a proper database release process. Fully automated tests. The production system is monitored for fast feedback to developers.

    Does this model reflect your team at all? Where are you on this journey? We’d be very interested in knowing how you get on. We’re doing a lot of work at the moment, at Red Gate, trying to help people progress through these stages. For example, if you’re currently not source controlling your database, then this is a natural next step. If you are already source controlling your database, what about the next stage – continuous integration and automated release management? To help understand these issues, there’s a summary of the Red Gate Database Delivery learning program on our site, alongside a Patterns and Practices library here on Simple-Talk and a Training Academy section on our documentation site to help you get up and running with the tools you need to progress. All feedback is welcome and it would be great to hear where you find yourself on this journey! This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.

    Read the article

  • Package management fails in update-manager with gzip problems and compilation errors. U12.04LTS

    - by HarveyP
    Similar to but not the same as Package management system corrupted. Cannot install or remove packages. U12.04LTS (an earlier problem) with package management system. Followed all of L. D. James suggestions in his answer to no avail. This time as well as the gzip error I am also getting compilation errors. The difference may be due to a lack of compilation in my earlier problem so it may be the same error. The packages concerned are enumerated in the output from update-manager below. Also included below that is the output from apt-get -f install apt-get autoremove gives same output. Tried update without SSL updates - 9 to install and got "Unhandled Error in aptdaemon". Output number 3 below. One at a time - output 4 - is for firefox, first in the list of packages. Falls over at libssl1.0.0 despite deselection of it from update ... Tried apt-get install --reinstall dpkg which succeeded, apt-get install --reinstall tar and apt-get install --reinstall gzip both of which failed at libssl1.0.0 as ever. (as suggested by Subv3rsion elsewhere in this forum) Now cannot apt-get update with complete success even after changing server and apt-get clean - output number 5 below ... 1). Output from update-manager The following packages will be upgraded:<> firefox firefox-globalmenu firefox-locale-en libavcodec-extra-53 libavformat53 libavutil-extra-51 libjson0 libpostproc52 libssl1.0.0 libswscale2 openssl 11 to upgrade, 0 to newly install, 0 to remove and 0 not to upgrade.<br> Need to get 0 B/46.5 MB of archives. After this operation, 1,416 kB of additional disk space will be used.<br> Do you want to continue [Y/n]? y debconf: Perl may be unconfigured (Bareword "gensym" not allowed while "strict subs" in use at /usr/lib/perl/5.14/IO/Handle.pm line 67. BEGIN not safe after errors--compilation aborted at /usr/lib/perl/5.14/IO/Handle.pm line 366. Compilation failed in require at /usr/lib/perl/5.14/IO/Seekable.pm line 9. BEGIN failed--compilation aborted at /usr/lib/perl/5.14/IO/Seekable.pm line 9. Compilation failed in require at /usr/lib/perl/5.14/IO/File.pm line 11. BEGIN failed--compilation aborted at /usr/lib/perl/5.14/IO/File.pm line 11. Compilation failed in require at /usr/share/perl/5.14/FileHandle.pm line 9. Compilation failed in require at (eval 1) line 3. BEGIN failed--compilation aborted at (eval 1) line 3. ) -- aborting (Reading database ... 160575 files and directories currently installed.) Preparing to replace libssl1.0.0 1.0.1-4ubuntu5.14 (using .../libssl1.0.0_1.0.1-4ubuntu5.15_i386.deb) ... Unpacking replacement libssl1.0.0 ... dpkg-deb (subprocess): data: internal gzip read error: '<fd:4>: data error' dpkg-deb: error: subprocess <decompress> returned error exit status 2 dpkg: error processing /var/cache/apt/archives/libssl1.0.0_1.0.1-4ubuntu5.15_i386.deb (--unpack):<br> subprocess dpkg-deb --fsys-tarfile returned error exit status 2 No apport report written because MaxReports has already been reached Bareword "gensym" not allowed while "strict subs" in use at /usr/lib/perl/5.14/IO/Handle.pm line 67. BEGIN not safe after errors--compilation aborted at /usr/lib/perl/5.14/IO/Handle.pm line 366. Compilation failed in require at /usr/lib/perl/5.14/IO/Seekable.pm line 9. BEGIN failed--compilation aborted at /usr/lib/perl/5.14/IO/Seekable.pm line 9. Compilation failed in require at /usr/lib/perl/5.14/IO/File.pm line 11. BEGIN failed--compilation aborted at /usr/lib/perl/5.14/IO/File.pm line 11. Compilation failed in require at /usr/share/perl/5.14/FileHandle.pm line 9. 
Compilation failed in require at /usr/share/perl5/Debconf/Template.pm line 8. BEGIN failed--compilation aborted at /usr/share/perl5/Debconf/Template.pm line 8. Compilation failed in require at /usr/share/perl5/Debconf/Question.pm line 8. BEGIN failed--compilation aborted at /usr/share/perl5/Debconf/Question.pm line 8. Compilation failed in require at /usr/share/perl5/Debconf/Config.pm line 7. BEGIN failed--compilation aborted at /usr/share/perl5/Debconf/Config.pm line 7. Compilation failed in require at /usr/share/perl5/Debconf/Log.pm line 10. Compilation failed in require at /usr/share/perl5/Debconf/Db.pm line 7. BEGIN failed--compilation aborted at /usr/share/perl5/Debconf/Db.pm line 7. Compilation failed in require at /usr/share/debconf/frontend line 6. BEGIN failed--compilation aborted at /usr/share/debconf/frontend line 6. dpkg: error whale cleanang up: subprgcess installed post-installation script returned error exit status 2 Errors were encountered while processing: /var/cache/apt/archives/libssl1.0.0_1.0.1-4ubuntu5.15_i386.deb E: Sub-process /usr/bin/dpkg returned an error code (1) 2). Output from install -f harveyp@harveyp:~$ sudo apt-get -f install [sudo] password for harveyp: Reading package lists... Done Building dependency tree Reading state information... Done 0 to upgrade, 0 to newly install, 0 to remove and 11 not to upgrade. 1 not fully installed or removed.<br> After this operation, 0 B of additional disk space will be used. E: Internal Error, No file name for libssl1.0.0 3). Unhandled error from aptdaemon Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/aptdaemon/worker.py", line 1045, in _simulate trans.unauthenticated = self.__simulate(trans) File "/usr/lib/python2.7/dist-packages/aptdaemon/worker.py", line 1160, in __simulate unauthenticated = self._get_unauthenticated() File "/usr/lib/python2.7/dist-packages/aptdaemon/worker.py", line 347, in _get_unauthenticated for pkg in self._iterate_packages(): File "/usr/lib/python2.7/dist-packages/aptdaemon/worker.py", line 1356, in _iterate_packages for enum, pkg in enumerate(self._cache): File "/usr/lib/python2.7/dist-packages/apt/cache.py", line 216, in __iter__ yield self[pkgname] File "/usr/lib/python2.7/dist-packages/apt/cache.py", line 201, in __getitem__ pkg = self._weakref[key] = Package(self, self._cache[key]) KeyError: 'librqrcode-rubq-doc 4). output from update of firefox installArchives() failed: Error in function: < Setting up libssl1.0.0 (1.0.1-4ubuntu5.14) ... Bareword "gensym" not allowed while "strict subs" in use at /usr/lib/perl/5.14/IO/Handle.pm line 67. BEGIN not safe after errors--compilation aborted at /usr/lib/perl/5.14/IO/Handle.pm line 366. Compilation failed in require at /usr/lib/perl/5.14/IO/Seekable.pm line 9. BEGIN failed--compilation aborted at /usr/lib/perl/5.14/IO/Seekable.pm line 9. Compilation failed in require at /usr/lib/perl/5.14/IO/File.pm line 11. BEGIN failed--compilation aborted at /usr/lib/perl/5.14/IO/File.pm line 11. Compilation failed in require at /usr/share/perl/5.14/FileHandle.pm line 9. Compilation failed in require at /usr/share/perl5/Debconf/Template.pm line 8. BEGIN failed--compilation aborted at /usr/share/perl5/Debconf/Template.pm line 8. Compilation failed in require at /usr/share/perl5/Debconf/Question.pm line 8. BEGIN failed--compilation aborted at /usr/share/perl5/Debconf/Question.pm line 8. Compilation failed in require at /usr/share/perl5/Debconf/Config.pm line 7. 
BEGIN failed--compilation aborted at /usr/share/perl5/Debconf/Config.pm line 7. Compilation failed in require at /usr/share/perl5/Debconf/Log.pm line 10. 5. output from apt-get update ...snip ... Hit http://ubuntu-archive.mirrors.free.org precise-security/multiverse Translation-en Hit http://ubuntu-archive.mirrors.free.org precise-security/restricted Translation-en Hit http://ubuntu-archive.mirrors.free.org precise-security/universe Translation-en Fetched 368 kB in 6s (59.5 kB/s) W: Failed to fetch gzip:/var/lib/apt/lists/partial/ubuntu-archive.mirrors.free.org_ubuntu_dists_precise_universe_source_Sources Hash Sum mismatch E: Some index files failed to download. They have been ignored, or old ones used instead.
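    One possible recovery path, not a guaranteed fix: the closing "Hash Sum mismatch" and the repeated gzip data errors usually point at corrupted downloaded lists and archives rather than at the installed packages, so a commonly tried sequence is to discard apt's cached files and re-download. A sketch, using the standard Ubuntu paths:

```
# discard possibly-corrupted package lists and cached .debs, then re-fetch
sudo rm -rf /var/lib/apt/lists/*
sudo apt-get clean
sudo apt-get update
# retry the package that kept failing
sudo apt-get install --reinstall libssl1.0.0
```

    If the Perl "Bareword ... gensym" errors persist after that, reinstalling the core Perl packages (perl-base, perl) is a common next step, since the failing IO/Handle.pm module ships with them.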

    Read the article

  • Schema objects not visible in SQL Server Management Studio 2008

    - by Germ
    I'm experiencing a weird problem with a SQL login. When I connect to the server in Microsoft SQL Server Management Studio (2008) using this account, I cannot see any of the tables, stored procedures, etc. that this account should have access to on a particular database. When I connect to the same server within Visual Studio (2008) with the same account, everything is there. When I connect with the same account on a virtual machine, everything is there. I've also had a co-worker connect to the server using the same login, and he's able to view everything as well. I use Microsoft SQL Server Management Studio all day connecting to different servers and databases and I've never experienced this problem. Does anyone have any suggestions on how I can diagnose this problem? I've checked to make sure I don't have any table filters etc. There are several databases on this server, and I'm able to see the correct tables that this account has access to in the other databases just fine. Running this query lists the tables I'm expecting to see: SELECT * FROM INFORMATION_SCHEMA.TABLES

    Read the article

  • Application log aggregation, management and notifications...

    - by Matthew Savage
    I'm wondering what everyone is using for logging, log management and log aggregation on their systems. I work in a company which uses .NET for all its applications, and all systems are Windows-based. Currently each application looks after its own logging and notification of failures (e.g. if app A fails it will send out its own 'call for help' to an admin). While this current practice works, it's a bit hacky and hard to manage. I've been trying to find some options for making this work better and I've come up with the following: log4net and Chainsaw (ah, if it works); logging via log4net or another framework into a central database and rolling our own management tool; logging to the Windows event log and using MOM or System Center Operations Manager to aggregate and manage each of these servers and their apps; or a hand-rolled solution to suck all the log files into one place and work some magic across them. Essentially what we are after is something which can pull log entries together and allow some analytics to be run across them, plus use a kind of event-based system to, for example, send out a warning email when there have been 30+ warning-level logs for an application in the last x minutes. So is there anything I've missed, or something someone else can suggest?
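    As a rough illustration of the "30+ warnings in the last x minutes" rule described above, here is a small C# sketch of a threshold check over a centralized log store; the type and member names are invented for the example (C# 9+ record syntax), and in practice the entries would come from whatever shared database or event log the apps write to:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Invented names for illustration: a centralized log entry and a simple alert rule.
record LogEntry(DateTime TimestampUtc, string App, string Level, string Message);

class WarningBurstRule
{
    public int Threshold { get; init; } = 30;
    public TimeSpan Window { get; init; } = TimeSpan.FromMinutes(10);

    // Returns the apps that crossed the warning threshold inside the time window.
    public IEnumerable<string> Evaluate(IEnumerable<LogEntry> entries, DateTime nowUtc)
    {
        return entries
            .Where(e => e.Level == "WARN" && e.TimestampUtc >= nowUtc - Window)
            .GroupBy(e => e.App)
            .Where(g => g.Count() >= Threshold)
            .Select(g => g.Key);
    }
}

// Usage sketch: poll the central store periodically and notify on offenders.
// foreach (var app in rule.Evaluate(store.Recent(), DateTime.UtcNow))
//     mailer.Send(admin, $"{app} logged {rule.Threshold}+ warnings in {rule.Window}.");
```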

    Read the article

  • Objective-C Getter Memory Management

    - by Marian André
    I'm fairly new to Objective-C and am not sure how to correctly deal with memory management in the following scenario: I have a Core Data entity with a to-many relationship for the key "children". In order to access the children as an array, sorted by the column "position", I wrote the model class this way:

```objc
@interface AbstractItem : NSManagedObject
{
    NSArray * arrangedChildren;
}
@property (nonatomic, retain) NSSet * children;
@property (nonatomic, retain) NSNumber * position;
@property (nonatomic, retain) NSArray * arrangedChildren;
@end

@implementation AbstractItem

@dynamic children;
@dynamic position;
@synthesize arrangedChildren;

- (NSArray*)arrangedChildren
{
    NSArray* unarrangedChildren = [[self.children allObjects] retain];
    NSSortDescriptor* sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"position" ascending:YES];
    [arrangedChildren release];
    arrangedChildren = [unarrangedChildren sortedArrayUsingDescriptors:[NSArray arrayWithObject:sortDescriptor]];
    [sortDescriptor release];
    [unarrangedChildren release];
    return [arrangedChildren retain];
}

@end
```

    I'm not sure whether or not to retain unarrangedChildren and the returned arrangedChildren (the first and last lines of the arrangedChildren getter). Does the NSSet allObjects method already return a retained array? It's probably too late and I have a coffee overdose. I'd be really thankful if someone could point me in the right direction. I guess I'm missing vital parts of memory management knowledge and I will definitely look into it thoroughly.

    Read the article

  • Content Management Systems for Adaptive Content [closed]

    - by andrewap
    Content management systems (CMS) allow us to easily maintain blogs, news sites, general websites, and so on. Many of them are designed to manage pages of content, and provide tools to organize and customize how that content is displayed on the web. However, as explained by Mark Boulton in his Adaptive Content Management article, and by Karen McGrane in her talk on Adapting Ourselves to Adaptive Content, we are increasingly delivering content not just to the web, but also to other platforms and channels. We need tools to manage pieces of content with meaningful metadata attached. Create once, publish everywhere. The main idea is to store content cleanly, without intertwining it with presentation markup specific to the web. Because the content is compartmentalized into semantic pieces, it can easily adapt to fit different platforms and channels. Hence, it's called adaptive content. Let's look at a quick example to compare: say I manage news articles and events. To create a news article, I would tell the CMS the type of content I'm creating, and be asked to fill in a form with individual fields tailored to news articles (e.g. headline, subtitle, full text, short snippet, and images) - i.e. pieces of content. With a traditional web publishing tool, I would probably have had to create a new page under News, and then type in and format the news article in a blank WYSIWYG text editor - i.e. pages of content. As you can see, the first design allows me to specify content individually in its smallest semantic units. When I want to display or consume it, the system can easily provide the pieces I need. So here's my question: is there a CMS that is designed specifically with adaptive content in mind, and that is decoupled from the presentation layer? Note: This is not a discussion about the best CMS, or which CMS I should use. I am asking whether a very specific type of tool - a CMS designed for adaptive content - exists for developers to use.
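    To make the "pieces of content" idea concrete, the news-article example above could be modeled as a structured type rather than a page, with each field available to whatever channel renders it. A small illustrative C# sketch (the class and field names simply mirror the example; they are not from any particular CMS):

```csharp
using System.Collections.Generic;

// One content type, stored as fields plus metadata - no presentation markup.
class NewsArticle
{
    public string Headline { get; set; }
    public string Subtitle { get; set; }
    public string FullText { get; set; }      // clean structured body, not HTML layout
    public string ShortSnippet { get; set; }  // used by feeds, mobile cards, etc.
    public List<string> ImageIds { get; set; } = new List<string>();
    public Dictionary<string, string> Metadata { get; set; } = new Dictionary<string, string>();
}

// Each channel then picks the pieces it needs:
//   web page        -> Headline + Subtitle + FullText + images
//   mobile app / RSS -> Headline + ShortSnippet
//   syndication API  -> the whole object serialized as JSON
```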

    Read the article

  • Intel MKL memory management and exceptions

    - by Andrew
    Hello everyone, I am trying out Intel MKL, and it appears that it has its own (C-style) memory management. They suggest using their MKL_malloc/MKL_free pairs for vectors and matrices, and I do not know what a good way to handle this is. One of the reasons is that memory alignment is recommended to be at least 16 bytes, and with these routines it is specified explicitly. I used to rely on auto_ptr and boost::smart_ptr a lot to forget about memory clean-ups. How can I write an exception-safe program with MKL memory management, or should I just use regular auto_ptrs and not bother? Thanks in advance.

    EDIT: http://software.intel.com/sites/products/documentation/hpc/mkl/win/index.htm - this link may explain why I brought up the question.

    UPDATE: I used an idea from the answer below for the allocator. This is what I have now:

```cpp
template <typename T, size_t TALIGN=16, size_t TBLOCK=4>
class aligned_allocator : public std::allocator<T>
{
public:
    pointer allocate(size_type n, const void *hint)
    {
        pointer p = NULL;
        size_t count = sizeof(T) * n;
        size_t count_left = count % TBLOCK;
        if (count_left != 0)
            count += TBLOCK - count_left;
        if (!hint)
            p = reinterpret_cast<pointer>(MKL_malloc(count, TALIGN));
        else
            p = reinterpret_cast<pointer>(MKL_realloc((void*)hint, count, TALIGN));
        return p;
    }

    void deallocate(pointer p, size_type n) { MKL_free(p); }
};
```

    If anybody has any suggestions, feel free to make it better.

    Read the article

  • Memory management of objects returned by methods (iOS / Objective-C)

    - by iOSNewb
    I am learning Objective-C and iOS programming through the terrific iTunesU course posted by Stanford (http://www.stanford.edu/class/cs193p/cgi-bin/drupal/) Assignment 2 is to create a calculator with variable buttons. The chain of commands (e.g. 3+x-y) is stored in a NSMutableArray as "anExpression", and then we sub in random values for x and y based on an NSDictionary to get a solution. This part of the assignment is tripping me up: The final two [methods] “convert” anExpression to/from a property list: + (id)propertyListForExpression:(id)anExpression; + (id)expressionForPropertyList:(id)propertyList; You’ll remember from lecture that a property list is just any combination of NSArray, NSDictionary, NSString, NSNumber, etc., so why do we even need this method since anExpression is already a property list? (Since the expressions we build are NSMutableArrays that contain only NSString and NSNumber objects, they are, indeed, already property lists.) Well, because the caller of our API has no idea that anExpression is a property list. That’s an internal implementation detail we have chosen not to expose to callers. Even so, you may think, the implementation of these two methods is easy because anExpression is already a property list so we can just return the argument right back, right? Well, yes and no. The memory management on this one is a bit tricky. We’ll leave it up to you to figure out. Give it your best shot. Obviously, I am missing something with respect to memory management because I don't see why I can't just return the passed arguments right back. Thanks in advance for any answers!

    Read the article

  • Project management software, available options

    - by canni
    Hey, sorry for posting this here; I know that this question better suits SuperUser, but I would like to hear answers from a developer's point of view. I have been using Indefero for project management etc. for some time, but I found that Indefero's limitations are too big for my team. I'm searching for project-management software that best suits these needs: open source, but I can consider commercial apps; Git integration is mandatory, best if it can support multiple repos per project; time tracking, good if it can have a Gantt chart connected with issues etc.; issue, milestone and task tracking; good if it can be integrated with Gitosis, or has similar repository access control; it must have an option to set it up on our own server; Markdown syntax support is mandatory (or an easy way to install a plugin for this etc.); issue tagging will be an advantage. It will be used by the developer team 99% of the time, but it has to have a simple interface so that clients can fill in bug reports etc. per project. It does not have to fill all these needs, but it's good if it can :) What options do you know of, and can recommend?

    Read the article

  • It's The End of Work as We Know It, But I Feel Fine

    - by Naresh Persaud
    If you are attending Open World this year, don't miss Amit Jasuja's session on trends in Identity Management. This session will take place on Monday October 1st in Moscone West at 10:45. You can join the conversation on Twitter as Amit Jasuja discusses the trends that are shaping Identity Management as a market and how Oracle is responding to these secular trends. Use hashtag OracleIDM. In addition, here’s a list of the sessions in the  Identity Management  track. In Amit's session, he will discuss how the workplace is changing. The pace of technology is accelerating and work is no longer a place but rather an activity. We are behaving socially in our professional lives and our professional responsibilities are encroaching on our social lives.  The net result is that we will need to change the way we work and collaborate. Work is anytime and anywhere. This impacts the dynamics of teams and how they access information and applications. Our teams span multiple organizations and "the new work order" means enabling the interaction and securing the experience. It is the end of work as we know it both economically and technologically. Join Amit for this session and you will feel much better about the changing workplace. 

    Read the article

  • A Patent for Workload Management Based on Service Level Objectives

    - by jsavit
    I'm very pleased to announce that after a tiny :-) wait of about 5 years, my patent application for a workload manager was finally approved.

    Background. Many operating systems have a resource manager which lets you control machine resources. For example, Solaris provides controls for CPU with several options: shares, for proportional CPU allocation (if you have twice as many shares as me, and we are competing for CPU, you'll get about twice as many CPU cycles); dedicated CPU allocation, in which a number of CPUs are exclusively dedicated to an application's use (you can say that a zone or project "owns" 8 CPUs on a 32 CPU machine, for example); and capped CPU, in which you specify the upper bound, or cap, of how much CPU an application gets (for example, you can throttle an application to 0.125 of a CPU). This isn't meant to be an exhaustive list of Solaris RM controls.

    Workload management. Useful as that is (and tragic that some other operating systems have little resource management and isolation, and frighten people into running only 1 app per OS instance - and wastefully sizing every server for the peak workload it might experience), that's not really workload management. With resource management one controls the resources and hopes that's enough to meet application service objectives. In fact, we hold resource distribution constant, see if that was good enough, and adjust resource distribution if that didn't meet service level objectives. Here's an example of what happens today: Let's try 30% dedicated CPU. Not enough? Let's try 80%. Oh, that's too much, and we're achieving much better response time than the objective, but other workloads are starving. Let's back that off and try again. It's not the process I object to - it's that we too often do this manually. Worse, we sometimes identify and adjust the wrong resource and fiddle with that to no useful result. Back in my days as a customer managing large systems, one of my users would call me up to beg for a "CPU boost": Me: "It won't make any difference - there's plenty of spare CPU to be had, and your application is completely I/O bound." User: "Please do it anyway." Me: "Oh, all right, but it won't do you any good." (I did, because he was a friend, but it didn't help.)

    Prior art. There are some operating environments that take a stab at workload management (rather than resource management), but I find them lacking. I know of one that uses synthetic "service units" composed of the sum of CPU, I/O and memory allocations multiplied by weighting factors. A workload is set to make a target rate of service units consumed per second. But this seems to be missing a key point: what is the relationship between artificial 'service units' and actually meeting a throughput or response time objective? What if I get plenty of one of the components (so I am getting enough service units), but not enough of the resource that's needed to remove the bottleneck?

    Actual workload management. That's not really the answer either. What is needed is to specify a workload's service levels in terms of externally visible metrics that are meaningful to a business, such as response times or transactions per second, and have the workload manager figure out which resources are not being adequately provided, and then adjust them as needed. If an application is not meeting its service level objectives and the reason is that it's not getting enough CPU cycles, adjust its CPU resource accordingly.
    If the reason is that the application isn't getting enough RAM to keep its working set in memory, then adjust its RAM assignment appropriately so it stops swapping. Simple idea, but that's a task we keep dumping on system administrators. In other words - don't hold the number of CPU shares constant and watch the achievement of service level vary. Instead, hold the service level constant, and dynamically adjust the number of CPU shares (or the amount of other resources like RAM or I/O bandwidth) in order to meet the objective.

    Instrumenting non-instrumented applications. There's one little problem here: how do I measure application performance in a way that relates to a service level? I don't want to do it based on internal resources like the number of CPU seconds the application received per minute - we need to make resource decisions based on externally visible and meaningful measures of performance, not synthetic items or internal resource counters. If I have a way of marking the beginning and end of a transaction, I can then measure whether or not the application is meeting an objective based on it. If I can observe the delay factors for an application, I can see which resource shortages are slowing an application enough to keep it from meeting its objectives. I can then adjust resource allocations to relieve those shortages. Fortunately, Solaris provides facilities for both marking application progress and determining what factors cause application latency. The Solaris DTrace facility lets me introspect on application behavior: in particular I can see events like "receive a web hit" and "respond to that web hit", so I can get transaction rate and response time. DTrace (and tools like prstat) let me see where latency is being added to an application, so I know which resource to adjust.

    Summary. After a delay of a mere few years, I am the proud creator of a patent (advice to anyone interested in going through the process: don't hold your breath!). The fundamental idea is fairly simple: instead of holding resources constant and suffering variable levels of success meeting service level objectives, properly characterise the service level objective in meaningful terms, instrument the application to see if it's meeting the objective, and then have a workload manager change resource allocations to remove delays preventing service level attainment. I've done it by hand for a long time - I think that's what a computer should do for me.
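    The core loop the post describes - hold the service-level objective constant and adjust the resource allocation - can be sketched as a simple feedback controller. This is only an illustration of the idea, not the patented implementation: the measurement and allocation calls are placeholders for real instrumentation (e.g. DTrace probes) and a real resource manager, and a real manager would first identify which resource is the actual bottleneck before adjusting it.

```csharp
using System;
using System.Threading;

// Illustrative only: the delegates stand in for real instrumentation and RM hooks.
class WorkloadController
{
    public TimeSpan Objective { get; }   // e.g. a response time target for the workload
    public WorkloadController(TimeSpan objective) => Objective = objective;

    public void Run(Func<TimeSpan> observedResponseTime,
                    Func<int> getCpuShares,
                    Action<int> setCpuShares,
                    CancellationToken token)
    {
        while (!token.IsCancellationRequested)
        {
            TimeSpan observed = observedResponseTime();
            int shares = getCpuShares();

            if (observed > Objective)                      // missing the objective: add resource
                setCpuShares(shares + Math.Max(1, shares / 10));
            else if (observed < TimeSpan.FromTicks(Objective.Ticks / 2))
                setCpuShares(Math.Max(1, shares - shares / 10)); // comfortably ahead: give some back

            Thread.Sleep(TimeSpan.FromSeconds(10));        // re-evaluate periodically
        }
    }
}
```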

    Read the article

  • Security Newsletter – September Edition is Out Now

    - by Tanu Sood
      The September issue of Security Inside Out Newsletter is out now. This month’s edition offers a preview of Identity Management and Security events and activities scheduled for Oracle OpenWorld. Oracle OpenWorld (OOW) 2012 will be held in San Francisco from September 30-October 4. Identity Management will have a significant presence at Oracle OpenWorld this year, complete with sessions featuring technology experts, customer panels, implementation specialists, product demonstrations and more. In addition, latest technologies will be on display at OOW demogrounds. Hands-on-Labs sessions will allow attendees to do a technology deep dive and train with technology experts. Executive Edge @ OpenWorld also features the very successful Oracle Chief Security Officer (CSO) Summit. This year’s summit promises to be a great educational and networking forum complete with a contextual agenda and attendance from well known security executives from organizations around the globe. This month’s edition also does a deep dive on the recently announced Oracle Privileged Account Manager (OPAM). Learn more about the product’s key capabilities, business issues the solution addresses and information on key resources. OPAM is part of Oracle’s complete and integrated Oracle Identity Governance solution set. And if you haven’t done so yet, we recommend you subscribe to the Security Newsletter to keep up to date on Security news, events and resources. As always, we look forward to receiving your feedback on the newsletter and what you’d like us to cover in the upcoming editions.

    Read the article
