Search Results

Search found 67448 results on 2698 pages for 'data management'.


  • MBA versus MSIS

    - by user794684
    I am considering going back to school for my master's and I've been looking at several avenues I can take. I've been considering either an MBA or an MSIS degree. Overall I know that an MBA is going to give me a solid skill set that can help me become an executive. However, MBAs seem to be a dime a dozen these days, and the university I can get into is good, but it's not exactly in the top 100 of anything. My undergrad MINOR was in Business Information Systems. I'm rusty as hell, considering I haven't touched it since, but an MSIS would be more in line with my past academic experience and seems to touch on both business management and IT. Question... With an MSIS will I just be a middleman? Will I really be an important person with a real skill set, or will I merely be someone who isn't quite cut out to be a manager and who is clueless about the tech side? Is an MSIS degree going to give me a real chance to move up the pay scale quickly, or am I better off learning programming and networking through another BS degree? What will give me more upward mobility career-wise? An MBA or an MSIS?

    Read the article

  • Increase Performance and Agility with Oracle’s New Data Center Fabric Solutions

    - by Cinzia Mascanzoni
    Join this Webcast on Tues., December 11, 2012, 10 a.m. PT / 1 p.m. ET and hear from S.K. Vinod, Senior Director of Product Management, Oracle Virtual Networking products. He’ll show you how the fast, simple, and agile architecture of Oracle Fabric Interconnect provides dynamic network and storage connectivity to thousands of servers. You will see how to use Oracle Software Defined Network (SDN) to connect any resource on the data center fabric quickly—without incurring downtime or requiring network reconfiguration. With Oracle Virtual Networking products, you can:
    • Streamline your data center connectivity
    • Reduce complexity by 70%
    • Cut infrastructure expenses by up to 50%
    • Increase application performance up to 30x
    • Provision new services and reconfigure resources in minutes
    • Simplify deployments with wire-once infrastructure
    During the Webcast, you’ll also have the opportunity to chat directly with Oracle experts. Visit OPN's Server & Storage Systems Knowledge Zones anytime to learn about partner engagement, training, resources, and replays of other webcasts to jump-start business. You can also email us your questions. Unable to attend live? Register anyway – we'll send you the on-demand link to the Webcast!

    Read the article

  • SQL SERVER – Identify Most Resource Intensive Queries – SQL in Sixty Seconds #029 – Video

    - by pinaldave
    There are a few questions I often get asked, and it is interesting how in our daily work all of us often need the same kind of information at the same time. Here are examples of those questions:
    • How many user-created tables are there in the database?
    • How many non-clustered indexes does each table in the database have?
    • Is a table a heap, or does it have a clustered index on it?
    • How many rows does each table in the database contain?
    I finally wrote down a very quick script (in less than sixty seconds when I originally wrote it) which can answer the above questions. I also created a very quick video to explain the results and how to execute the script. Here is the complete script which I have used in the SQL in Sixty Seconds video.

    SELECT [schema_name] = s.name,
           table_name = o.name,
           MAX(i1.type_desc) ClusteredIndexorHeap,
           COUNT(i.TYPE) NoOfNonClusteredIndex,
           p.rows
    FROM sys.indexes i
    INNER JOIN sys.objects o ON i.[object_id] = o.[object_id]
    INNER JOIN sys.schemas s ON o.[schema_id] = s.[schema_id]
    LEFT JOIN sys.partitions p ON p.[object_id] = o.[object_id] AND p.index_id IN (0,1)
    LEFT JOIN sys.indexes i1 ON i.[object_id] = i1.[object_id] AND i1.TYPE IN (0,1)
    WHERE o.TYPE IN ('U') AND i.TYPE = 2
    GROUP BY s.name, o.name, p.rows
    ORDER BY schema_name, table_name;

    Related Tips in SQL in Sixty Seconds:
    • Find Row Count in Table – Find Largest Table in Database
    • Find Row Count in Table – Find Largest Table in Database – T-SQL
    • Identify Numbers of Non Clustered Index on Tables for Entire Database
    • Index Levels, Page Count, Record Count and DMV – sys.dm_db_index_physical_stats
    • Index Levels and Delete Operations – Page Level Observation
    What would you like to see in the next SQL in Sixty Seconds video? Reference: Pinal Dave (http://blog.sqlauthority.com)
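    A companion note (not part of the original post): the title's broader goal of identifying the most resource-intensive queries themselves is commonly handled with the sys.dm_exec_query_stats DMV; the hedged sketch below ranks cached statements by total CPU time.

    -- Top 10 cached statements by total worker (CPU) time
    SELECT TOP 10
           qs.total_worker_time,
           qs.execution_count,
           SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                     ((CASE qs.statement_end_offset
                         WHEN -1 THEN DATALENGTH(st.text)
                         ELSE qs.statement_end_offset
                       END - qs.statement_start_offset) / 2) + 1) AS statement_text
    FROM sys.dm_exec_query_stats qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
    ORDER BY qs.total_worker_time DESC;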

    Read the article

  • SQL SERVER – Get Date and Time From Current DateTime – SQL in Sixty Seconds #025 – Video

    - by pinaldave
    This is the 25th video of the SQL in Sixty Seconds series we started a few months ago. Even though this is the 25th video, it seems like we started just a few days ago. The best part of SQL in Sixty Seconds is that one can learn something new in less than sixty seconds. Many of these concepts are not new to everyone, but all of us can spare 60 seconds to refresh our memories. In this video I have touched on a very simple set of questions which I receive very frequently on this blog:
    Q1) How to get the current date and time?
    Q2) How to get only the date from a datetime?
    Q3) How to get only the time from a datetime?
    I have created a sixty-second video on this subject and hopefully it will help many beginners in the SQL Server field. Here is a similar script to the one I have used in the video.

    SELECT GETDATE();
    GO
    -- SQL Server 2000/2005
    SELECT CONVERT(VARCHAR(8), GETDATE(), 108) AS HourMinuteSecond,
           CONVERT(VARCHAR(10), GETDATE(), 101) AS DateOnly;
    GO
    -- SQL Server 2008 onwards
    SELECT CONVERT(TIME, GETDATE()) AS HourMinuteSeconds;
    SELECT CONVERT(DATE, GETDATE()) AS DateOnly;
    GO

    Related Tips in SQL in Sixty Seconds:
    • Retrieve Current Date Time in SQL Server CURRENT_TIMESTAMP, GETDATE(), {fn NOW()}
    • Get Time in Hour:Minute Format from a Datetime – Get Date Part Only from Datetime
    • Get Current System Date Time
    • Get Date Time in Any Format – UDF – User Defined Functions
    • Date and Time Functions – EOMONTH() – A Quick Introduction
    • DATE and TIME in SQL Server 2008
    I encourage you to submit your ideas for SQL in Sixty Seconds. We will try to accommodate as many as we can. If we like your idea we promise to share educational material with you. Image Credit: Movie Gone in 60 Seconds Reference: Pinal Dave (http://blog.sqlauthority.com)
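    An addition beyond the original video (which covers versions up to SQL Server 2008): SQL Server 2012 and later can do the same with FORMAT(), at the cost of some CPU overhead compared to CONVERT.

    -- SQL Server 2012 onwards: format date and time parts with FORMAT()
    SELECT FORMAT(GETDATE(), 'HH:mm:ss') AS TimeOnly,
           FORMAT(GETDATE(), 'yyyy-MM-dd') AS DateOnly;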

    Read the article

  • SQL SERVER – Three Methods to Insert Multiple Rows into Single Table – SQL in Sixty Seconds #024 – Video

    - by pinaldave
    One of the biggest asks I have received from developers is whether there is any way to insert multiple rows into a single table in a single statement. Currently, when developers have to insert values into a table, they write multiple insert statements. First of all this is not only boring, it is also very time consuming, and one has to repeat the same syntax so many times that the word boring becomes an understatement. In the following quick video we have demonstrated three different methods to insert multiple values into a single table.

    -- Insert Multiple Values into SQL Server
    CREATE TABLE #SQLAuthority (ID INT, Value VARCHAR(100));

    Method 1: Traditional INSERT…VALUES
    -- Method 1 - Traditional Insert
    INSERT INTO #SQLAuthority (ID, Value) VALUES (1, 'First');
    INSERT INTO #SQLAuthority (ID, Value) VALUES (2, 'Second');
    INSERT INTO #SQLAuthority (ID, Value) VALUES (3, 'Third');
    -- Clean up
    TRUNCATE TABLE #SQLAuthority;

    Method 2: INSERT…SELECT with UNION ALL
    -- Method 2 - Select Union Insert
    INSERT INTO #SQLAuthority (ID, Value)
    SELECT 1, 'First'
    UNION ALL
    SELECT 2, 'Second'
    UNION ALL
    SELECT 3, 'Third';
    -- Clean up
    TRUNCATE TABLE #SQLAuthority;

    Method 3: Row Constructor (SQL Server 2008+)
    -- Method 3 - SQL Server 2008+ Row Construction
    INSERT INTO #SQLAuthority (ID, Value) VALUES (1, 'First'), (2, 'Second'), (3, 'Third');
    -- Clean up
    DROP TABLE #SQLAuthority;

    Related Tips in SQL in Sixty Seconds:
    • SQL SERVER – Insert Multiple Records Using One Insert Statement – Use of UNION ALL
    • SQL SERVER – 2008 – Insert Multiple Records Using One Insert Statement – Use of Row Constructor
    I encourage you to submit your ideas for SQL in Sixty Seconds. We will try to accommodate as many as we can. If we like your idea we promise to share educational material with you. Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • Cross Platform Data Access with Xamarin & C# For iPhone, iPad, and Android - Local, Web Services, & Sql Server

    - by Wallym
    The following is a link to cross-platform data access training with Xamarin & C#. It is intended for use on iPhone, iPad, and Android devices. The course covers local data in SQLite, calling web services via REST and JSON, and calling SQL Server.
    Url: http://www.learnnowonline.com/course/cpx2/xamarin-cross-platform-data-access/
    Course Data: Applications live on data. These applications can vary from an online social network service, to a company’s internal database, to simple data, and all points in between. This course will focus on how to easily access data on the device, communicate back and forth with a web service, and then finally with a SQL Server database.
    Outline:
    • Local Data (27:36): Introduction (00:36), Problem (01:57), Solution (02:01), LINQ (02:03), LINQ Status (00:48), SQLite (02:18), SQLite - .Net Developers (00:50), SQLite-net (01:07), SQLite-net Attributes (02:10), Getting Started (01:09), CRUD (01:05), SQLite Platforms (01:17), Demo: SQLite – Android (04:53), Demo: SQLite – iOS (04:56), Summary (00:20)
    • Web Services Data (32:43): Introduction (00:19), Async Commands (03:15), HttpClient (01:26), HTTP Verbs (01:29), Notes (00:58), GET Operation (01:37), JSON.NET (01:50), Images (01:16), Other Http Verbs (01:27), Post (03:18), Demo: Http – iOS prt1 (05:26), Demo: Http – iOS prt2 (05:28), Demo: Http – Android (04:20), Summary (00:27)
    • Direct Data (12:33): Introduction (00:23), Remote Data - Direct (02:47), Sql Server (01:15), Demo: Sql Server – iOS (04:15), Demo: Sql Server – Android (01:49), "codepage 1252 not supported" (01:03), Other Resources (00:43), Summary (00:15)
    Note: Thanks to Frank Krueger for his data access library SQLite-net. It is very helpful and I have used it in some other projects beyond just this training session.

    Read the article

  • SQL SERVER – Cardinality Estimation and Performance – SQL in Sixty Seconds #072

    - by Pinal Dave
    Yesterday I wrote a blog post based on my latest Pluralsight course on learning SQL Server 2014. I discussed the newly introduced cardinality estimation in SQL Server 2014 and how it improves the performance of queries. The cardinality estimation logic is responsible for the quality of query plans and has a major impact on the performance of any query. This logic had not been updated for quite a while, but in the latest version, SQL Server 2014, it has been re-designed. The new logic now incorporates various assumptions and algorithms for OLTP and warehousing workloads. I hope my earlier blog post clearly explained how the new cardinality estimation logic improves performance. If not, I suggest you watch the following quick video where I explain this concept in extremely simple words. You can download the code used in this course from Simple Demo of New Cardinality Estimation Features of SQL Server 2014.
    Action Item: Here are the blog posts I have previously written. You can read them over here:
    • Simple Demo of New Cardinality Estimation Features of SQL Server 2014
    • Pluralsight Course
    You can subscribe to my YouTube Channel for frequent updates. Reference: Pinal Dave (http://blog.sqlauthority.com)
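    A hedged illustration (not from the original post): the new estimator is enabled through the database compatibility level, and a single query can fall back to the legacy estimator via documented trace flag 9481. The database and table names below are hypothetical.

    -- Enable the new SQL Server 2014 cardinality estimator (hypothetical database name)
    ALTER DATABASE [YourDatabase] SET COMPATIBILITY_LEVEL = 120;
    GO
    -- Revert to the legacy estimator for one query only (hypothetical table name)
    SELECT OrderID, OrderQty
    FROM dbo.OrderDetails
    OPTION (QUERYTRACEON 9481);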

    Read the article

  • Demantra USA Based Companies and SOX Compliance

    - by user702295
    A USA-based company is assessing Demantra Trade Promotion Management (TPM) capability. It appears that SOX compliance is necessary in their case due to the nature of what TPM does and the necessity for auditability. Do we have any detail on SOX compliance for Demantra?
    Answer
    ------
    SOX compliance with regards to IT:
    1.  Requires auditing of data changes done by who, what, when
        a. Audit trail profiles can be set up for key financial series, and you can view them in audit trail reports
        b. One functionality we do not have, which typically is asked for, is user login history. We have only
           active sessions; history is not available.
    2.  Segregation of duties
        a. With respect to TPM, you could have the deduction and financial analyst for settlement be different
           from the promotion creator, promotion approver, or sales team.
        b. The budget approver for funds can be different from the funds consumer.
        c. The promotion creator can be different from the promotion approver.
        d. For a US customer you may have to write some custom scripts to capture promotion status changes
           and produce an external report as part of compliance.
    One additional requirement is transparency of forward commitments entered into with retailers / distributors for trade spending and promotions. Outside of Demantra - Consumer Goods Trade Funds Analytics.

    Read the article

  • IT Optimization Plan Pays Off For UK Retailer

    - by Brian Dayton
    I caught this article in ComputerworldUK yesterday. The headline notes that UK-based supermarket chain Morrisons is increasing its IT spend... OK, sounds good. Even nicer that Oracle is a big part of that. But what caught my eye were three things:
    1) Morrisons truly has a long-term strategy for IT; in this case, modernizing and optimizing how they use IT for business advantage.
    2) Even in a tough economic climate, Morrisons views IT investments as contributing to and improving the bottom line. Specifically, "The investment in IT contributed to a 21 percent increase in Morrison's underlying profit."
    3) The phased, 3-year "Optimization Plan" took a holistic approach to their business--from CRM and Supply Chain systems to the underlying application infrastructure. On the infrastructure front, adopting a more flexible Service-Oriented Architecture enabled them to be more agile and adapt their business, and Identity Management helped with sometimes mundane (but costly) issues like lost passwords and being able to document who has access to what.
    Things don't always turn out so rosy. And I know it was a long and difficult process... but it's nice to see a happy ending every once in a while.

    Read the article

  • An XML file or Database?

    - by webnoob
    I am re-writing a section of my site and am trying to decide how much of a rewrite this will be. At the moment I have a web service feed that generates an XML file once per day. I then use this XML file on my website to generate the general structure. I am trying to decide if this information should be located in the database or stay in the XML file. The file can range from 4MB - 12MB, and its depth can go on and on, so I have to recurse to find the data I want. I use the .NET serializer classes and store the serialized file in a global variable to avoid re-serializing it each time the page is loaded. My reasons for thinking a database would be better are:
    • I would know exactly where I am in the file by using an internal ID, so I wouldn't have to recurse the file to get information.
    • I wouldn't have to load / serialize the XML and could just use my already open database connections.
    • Searching for the data in the file would be quicker(?) as I would just perform an SQL query rather than recursing the file.
    Has anyone got any ideas which is better, and which option uses more resources on the server or would be quicker?
    EDIT: The file is read every time the web page is loaded (although only serialized once). It isn't written to by standard users (only by an admin task that runs in the middle of the night). This is my initial investigation before mocking up.
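    One hedged sketch of the database route, purely illustrative (the table and element names are hypothetical, not from the question): SQL Server can hold the daily feed in a native XML column and pull values out with XQuery, replacing manual recursion with a set-based query.

    -- Hypothetical cache table for the daily feed
    CREATE TABLE dbo.FeedCache (
        FeedDate DATE PRIMARY KEY,
        FeedXml  XML NOT NULL
    );

    -- Extract item titles for today's feed without deserializing the whole document
    SELECT n.value('(title/text())[1]', 'VARCHAR(200)') AS Title
    FROM dbo.FeedCache
    CROSS APPLY FeedXml.nodes('/root/item') AS t(n)
    WHERE FeedDate = CAST(GETDATE() AS DATE);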

    Read the article

  • Is It Possible To Recover A Partial LVM Logical Volume?

    - by Terry Wang
    Background: It is an Ubuntu 12.04 VirtualBox VM with 5 virtual HDDs (VDI). NOTE this is just a test VM, so it was not well planned ahead:
    • ubuntu.vdi for / (/dev/mapper/ubuntu-root AKA /dev/ubuntu/root) and /home (/dev/mapper/ubuntu-home)
    • weblogic.vdi - /dev/sdb (mounted on /bea for weblogic and other stuff)
    • btrfs1.vdi - /dev/sdc (part of btrfs -m raid1 -d raid1 configuration)
    • btrfs2.vdi - /dev/sdd (part of btrfs -m raid1 -d raid1 configuration)
    • more.vdi - /dev/sde (added this virtual HDD because / ran out of inodes and it wasn't easy to figure out what to delete so as to free up inodes, so I just added the new virtual HDD, created a PV, added it to the existing volume group ubuntu, and grew the root logical volume to work around the inode issue -_-)
    What happened? Last Friday, before finishing up, I wanted to free up some disk space on that box. For some reason I thought more.vdi was useless and tried to detach it from the VM; I then clicked delete (should have clicked keep files, damn!) by mistake when detaching. Unfortunately I didn't have a backup of it. All too late.
    What I have tried: I tried to undelete the vdi files (using testdisk and photorec), but it takes too long and recovered heaps of .vdi files that I didn't want (huge, filled the disk, damn!). I finally gave up. Fortunately most of the data is on the separate ext4 partition and btrfs volumes. Out of curiosity, I still tried to mount the logical volumes and see if it is possible to at least recover /var and /etc. I tried to use System Rescue CD to boot and activate the volume groups, and I got:
    Couldn't find device with uuid xxxx.
    Refusing activation of the partial LV root. Use --partial to override.
    1 logical volume(s) in volume group "ubuntu" now active.
    I was able to mount the home LV but not the root LV. I am wondering if it is possible to access the root LV any more. Under the bonnet, data (on LV root - /) was striped to more.vdi (PV), so I know it's almost impossible to recover. But I am still curious about how system administrators/DevOps folks deal with this sort of situation ;-) Thanks in advance.

    Read the article

  • SQL SERVER – Changing Default Installation Path for SQL Server

    - by pinaldave
    Earlier I wrote a blog post about SQL SERVER – Move Database Files MDF and LDF to Another Location, and in that post we discussed how we can change the location of the MDF and LDF files after the database is already created. I had mentioned that we would discuss how to change the default location of the database, so that we do not have to move the database after it has been created in a different location. The ideal scenario would be to specify this default location of the database files when the SQL Server installation is performed. If you have already installed SQL Server, there is an easy way to solve this problem. This will not impact any database created before the change; it will only affect the default location of databases created after the change. To change the default location, follow the steps mentioned below: Right-click on the server >> Click on Properties >> Go to the Database Settings screen. There you can change the default location of the database files. All future databases created after the setting is changed will go to this new location. You can also do the same with T-SQL, and here is the T-SQL code to do it:

    USE [master]
    GO
    EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE',
         N'Software\Microsoft\MSSQLServer\MSSQLServer',
         N'DefaultData', REG_SZ, N'F:\DATA'
    GO
    EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE',
         N'Software\Microsoft\MSSQLServer\MSSQLServer',
         N'DefaultLog', REG_SZ, N'F:\DATA'
    GO

    What are the best practices you follow with regard to the default file location for your databases? I am interested to know them. Reference: Pinal Dave (http://blog.SQLAuthority.com)
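    A quick way to verify the change (an addition beyond the original post; requires SQL Server 2012 or later, where these properties were introduced):

    -- Read back the current default data and log paths
    SELECT SERVERPROPERTY('InstanceDefaultDataPath') AS DefaultDataPath,
           SERVERPROPERTY('InstanceDefaultLogPath') AS DefaultLogPath;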

    Read the article

  • Tools for modelling data and workflows using structured text files

    - by Alexey
    Consider a case where I want to try out an idea for an application, but I want to avoid investing a lot of effort in coding UI / workflows / database schema etc. before I see that it's going to be useful to me (as an example of a potential user). My idea is to stay lightweight and put all the data in text files. So the components could be the following:
    • Domain objects are represented by text files or their fragments
    • Domain objects are grouped by their type using directories
    • Structure the files using some both human- and machine-friendly format, e.g. YAML
    • Use some smart text editor (e.g. vim, emacs, rubymine) to edit and navigate those files
    • Use color schemes and macros/custom commands of the text editor to effectively manipulate those files
    • Use scripts (or a lightweight web framework like Sinatra) to try some business logic ideas on top of the data model
    The question is: Are there tools or toolkits that support or can be adapted to this approach? Also any ideas, or links to articles/other knowledge sources, are very welcome. And a more specific question: What is the simplest way to index (and update the index of) YAML files?

    Read the article

  • Smart defaults [SSDT]

    - by jamiet
    I’ve just discovered a new, somewhat hidden, feature in SSDT that I didn’t know about and figured it would be worth highlighting here because I’ll bet not many others know it either; the feature is called Smart Defaults. It gets around the problem of adding a NOT NULLable column to an existing table that has got data in it – previous to SSDT you would need to define a DEFAULT constraint, however it does feel rather cumbersome to create an object purely for the purpose of pushing through a deployment – that’s the situation that Smart Defaults is meant to alleviate. The Smart Defaults option exists in the advanced section of a Publish Profile file. The description of the setting is “Automatically provides a default value when updating a table that contains data with a column that does not allow null values”; in other words, checking that option will cause SSDT to insert an arbitrary default value into your newly created NOT NULLable column. In case you’re wondering how it does it, here’s how: SSDT creates a DEFAULT CONSTRAINT at the same time as the column is created and then immediately removes that constraint:

    ALTER TABLE [dbo].[T1]
        ADD [C1] INT NOT NULL,
            CONSTRAINT [SD_T1_1df7a5f76cf44bb593506d05ff9a1e2b] DEFAULT 0 FOR [C1];
    ALTER TABLE [dbo].[T1] DROP CONSTRAINT [SD_T1_1df7a5f76cf44bb593506d05ff9a1e2b];

    You can then update the value as appropriate in a Post-Deployment script. Pretty cool! On the downside, you can only specify this option for the whole project, not for an individual table or even an individual column – I’m not sure that I’d want to turn this on for an entire project as it could hide problems that a failed deployment would highlight; in other words, smart defaults could be seen to be “papering over the cracks”. If you think that should be improved go and vote (and leave a comment) at [SSDT] Allow us to specify Smart defaults per table or even per column. @Jamiet
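    A minimal sketch of that Post-Deployment step, reusing the table from the generated script above (the value 42 is purely illustrative, not from the original post):

    -- Post-Deployment script: replace the arbitrary smart default with a real value
    UPDATE [dbo].[T1]
    SET [C1] = 42      -- illustrative replacement value
    WHERE [C1] = 0;    -- rows that still carry the deployment-time default of 0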

    Read the article

  • What is the Sarbanes-Oxley (SOX) Act?

    In 2002, in the wake of the Enron and WorldCom financial scandals, Senator Paul Sarbanes and Representative Michael Oxley led the creation of the Sarbanes-Oxley Act. This act, administered by the Securities and Exchange Commission (SEC), dramatically altered corporate financial practices and data governance. In addition, it also set specific deadlines for compliance. The Sarbanes-Oxley Act is not a set of standard business rules and does not specify how a company should retain its records; in fact, the act outlines which pieces of data are to be stored as well as the storage duration. The SOX act targets the financial side of companies, but its impact can be seen within the technology arena as well, because it is their responsibility to store all of a company’s electronic records regardless of file type. The act specifies that all records and electronic messages must be saved for no less than five years, according to SearchCIO. In addition, the consequences for non-compliance are fines, imprisonment, or both.
    Rules that affect the management of electronic records, according to SearchCIO:
    • Allowed practices regarding destruction, alteration, or falsification of records
    • Retention period for records storage; best practices indicate that corporations securely store all business records using the same guidelines set for public accountants
    • Types of business records that need to be stored: business records and business communications, including electronic communications
    References:
    SOXLaw: The Sarbanes-Oxley Act 2002. Retrieved May 2011 from http://www.soxlaw.com/
    SearchCIO: What is Sarbanes-Oxley Act (SOX)? Retrieved May 2011 from http://searchcio.techtarget.com/definition/Sarbanes-Oxley-Act

    Read the article

  • Preventing battery from charging

    - by intuited
    I'm running on UPS power and would like to prevent the laptop's battery from charging, to increase the amount of power available to other devices. Is there a way to do this?
    Update: The machine is a Dell Latitude D400. If people want more details, just ask. Also, I'm gathering that I need to explain my desired setup a little better. I've gotten a bunch of suggestions about taking the battery out. I'm not sure if people are suggesting to take the battery out while the machine is running — this, as I understand, is not a good idea with most laptops — or to just remove the battery altogether. The latter option is not optimal, because ideally I'd like to use the 30-60 minutes of power in the laptop battery and then switch over to UPS power. The details of the switch-over may constitute a separate question, but if I can't find a way to keep the laptop battery from charging, then removing the battery from the machine altogether may be the best way to do this. I'm not sure yet if this machine will run without a battery, but I'll check that out.
    Other than the laptop, the UPS is just supporting a cable modem, a router, and a USB hub. Again, in the idealized version of this setup, all the power management changes would be automated, i.e. not requiring replugging anything or pressing Fn-keys. I'd like the machine to start using laptop battery power when apcupsd indicates that the UPS A/C is out, and then start using UPS power, but not charging the battery, when the battery is almost depleted.

    Read the article

  • Manage ClickOnce releases for different parties

    - by Dirk Beckmann
    I'm struggling with release management of a piece of software. First, some general information:
    • It is a ClickOnce application
    • I follow the release-often practice
    • There are about 30 parties served with this software
    • I need full control over which update will be delivered to which party
    • Not each party is allowed to get the latest update/release
    • Each party has multiple clients that are all allowed to get the latest update served for the specific party
    So that's a rough description of my requirements. Let me explain how I was thinking about solving this. I would like to create a "deployment" website (asp.net) that will handle all the requests. There are two endpoints: one to download the client and one where the client checks for updates. So each party has a separate endpoint like DeploymentSite/party1 and another for DeploymentSite/party2. The Application Files should still be stored centrally. So I thought it would be manageable with mage.exe using the following steps:
    1. Build the application and store the new release into the Application Files repository/folder
    2. Get the parties that should be updated (config file, database, whatever)
    3. Run mage.exe to create new application and deployment manifests for each party in the update list, with the new Application Files location (1.0.2)
    Actually I'm really struggling with this mage.exe stuff. I can't create the appropriate files with the needed codebase. How do I handle these requirements?

    Read the article

  • Microsoft MVP Award – Data Platform Development

    - by Dane Morgridge
    For those who don't already know, yesterday I received my first Microsoft MVP Award in Data Platform Development.  With less than 5,000 MVPs in the world overall and about 20 in the Data Platform category, saying I am honored would be an understatement.  From the first time I spoke at a code camp, I was totally hooked and have had a blast travelling around the east coast speaking at code camps and user groups.  I'd like to take the time to thank Dani Diaz (@danidiaz) for the nomination and everyone who supported me, especially my wife Lisa for letting me travel and speak as much as I have and putting up with me for late nights and such.  Roska Digital, my employer, also deserves a shout out for supporting me and giving me the necessary time off to get to speaking engagements.  With any luck, the next year will be at least as fun as, if not more fun than, the last one.  I hope to see you at a code camp or user group meeting soon! I would also like to send congratulations to the other new Philly Area MVPs: John Angelini (@johnangelini) & Ned Ames (@nedames) You can find out more about the Microsoft MVP Award at https://mvp.support.microsoft.com/

    Read the article

  • Data transfer between "main" site and secured virtual subsite

    - by Emma Burrows
    I am currently working on a C# ASP.NET 3.5 website I wrote some years ago, which consists of a "main" public site and a sub-site that is our customer management application, using forms-based authentication. The sub-site is set up as a virtual folder in IIS, and though it's a subfolder of "main", it functions as a separate web app which handles CRUD access to our customer database and is only accessible by our staff. The main site currently includes a form for new leads to fill in, which generates an email to our sales staff so they can contact them and convince them to become customers. If that process is successful, the staff manually enter the information from the email into the database. Not surprisingly, I now have a new requirement to feed the data from the new lead form directly into the database, so staff can, for instance, just check a box to turn the lead into a customer. My question therefore is how to go about doing this. Possible options I've thought of:
    • Move the new lead form into the customer database subsite (with authentication turned off).
    • Add database handling code to the main site. (No, not seriously considering this duplication of effort! :)
    • Design some mechanism (via REST?) so a webpage outside the customer database subsite can feed data into the customer database.
    I'd welcome some suggestions on how to organise the code for this situation, preferably with extensibility in mind, and particularly if there are any options I haven't thought of. Thanks in advance.

    Read the article

  • How do managers know if a person is a good or a bad programmer?

    - by Pavel Shved
    In most companies that do programming, teams and divisions consist of programmers who design and write code, and managers who... well, do the management stuff. Aside from just not writing code, managers usually do not even look at the code the team develops, and may not even have a proper IDE installed on their work machines. Still, the managers are to judge if a person works well, if he or she should be put in charge of something, or if a particular developer should be assigned to a task of the greatest importance and responsibility. And last, but not least: the managers usually assign the quarterly bonuses! To do the above effectively, a manager should know if a person is a good programmer—among other traits, of course. The question is, how do they do it? They don't even look at the code people write, and they can't directly assess the quality of the components programmers develop... but their estimates of who is a good coder and who is "not as good" are nevertheless correct in most cases! What is the secret?

    Read the article

  • Oracle Number One in Supply Chain Planning

    - by Stephen Slade
    Something nice to write home about! I saw this accomplishment and thought it worth promoting, with special congrats to the VCP team. Read on:
    Summary: Oracle is the #1 player in Supply Chain Planning according to research firm ARC Advisory Group.
    Details: The report (Source: ARC Advisory Group, “Supply Chain Planning Worldwide Outlook, Market Analysis and Forecast through 2016,” Clint Reiser, Steve Banker) gives Oracle 21.1% of revenue share, compared to SAP, who was second at 18.6%. JDA Software, Aspen, Logility, and Infor were the next players in the market. The total market was valued at $1.506B. ARC counts Software (new license and upgrades), Implementation Services, Maintenance and Support, and SaaS in its definition. ARC defines supply chain planning to include four key application areas: Extended SCP, Manufacturing Planning, Inventory/Distribution Planning, and Demand Management. Extended SCP consists of Network Design, Capable to Promise, SCP Composites, and Extended Supply Chain BI software. In the report, ARC further gives Oracle the number one spot in both the Software Revenues and Services Revenues subsegments, as well as in many vertical areas such as Government, Electronics and Electrical, Medical Products, Pharmaceutical, and Wholesale/Distribution. ARC also issued a forecast that predicts SCP revenue will grow from $1.506B in 2011 to $2.172B in 2016, a CAGR of 7.6%. The report has several positive quotes about Oracle, including calling Oracle a “visionary,” and states that “Oracle has leveraged a broad set of home-grown and acquired offerings to create a comprehensive, integrated, yet modular suite with applicability to a wide range of industries.”
    Blog Link: http://blog.us.oracle.com/marketdata/?97119896 (shawn.willett@oracle.com)

    Read the article

  • Data Networks Visualized via Light Paintings [Video]

    - by ETC
    All around you are wireless data networks: cellular networks, Wi-Fi networks, a world of wireless communication. Check out this awesome video of network signals mapped over a cityscape. What would happen if you made a device that allowed you to map signal strength onto film? In the following video electronics tinkerers craft an LED meter and use it to paint onto long exposure photographs with phenomenal results. Immaterials: light painting Wi-Fi [via Make]

    Read the article

  • What You Said: How You Track Your Time

    - by Jason Fitzpatrick
    Earlier this week we asked you to share your favorite time tracking tips, tricks, and tools. Now we’re back to highlight the techniques HTG readers use to keep tabs on their time. While more than one of you expressed confusion over the idea of tracking how you spend all your time, many of you were more than happy to share the reasons for and the methods you use to stay on top of your time expenditures. Scott uses a fluid and flexible project management tool: I use kanbanflow.com, with two boards to manage task prioritisation and backlog. One board called ‘Current Work’ has three columns ‘Do Today’, ‘In Progress’ and ‘Done’. The other is called ‘Backlog’, which splits tasks into priority groups – ‘Distractions (NU+NI)’, ‘Goals (NU+I)’, ‘Interruptions (U+NI)’ and ‘Critical (U+I)’, where U is Urgent and I is Important (and N is Not). At the end of each day, I move things from my Backlog to my ‘Current Work’ board, with the idea to complete Goals before they become Critical. That way I can focus on ‘Current Work’ Do Today so I don’t feel overwhelmed and can plan my day. As priorities change or interruptions pop up, it’s just a matter of moving tasks between boards. I have both tabs open in my browser all day – this is probably good for knowledge workers strapped to their desk, not so good for those in meetings all day. In that case, go with the calendar on your phone. While the above description might make it sound really technical, we took the cloud-based app for a spin and found the interface to be very flexible and easy to use.

    Read the article

  • Development environment to manage multiple Oracle databases

    - by jkohlhepp
    I am in an enterprise environment where we have applications that need to run against multiple Oracle databases. Developers may need to manage multiple vintages of these databases to support different test data or to diagnose bugs against different versions of the code. Right now, we have a limited set of test environments set up on "real" Oracle servers within the data center. We juggle these among development and QA groups, and a lot of conflicts and inefficiencies arise because of it. I am taking a look at Oracle Express Edition, which would allow me to spin up a local Oracle database. This is similar to the workflow I most often see with SQL Server: devs work on their local machine until they are ready to integrate, and then they push their DB changes to integration / QA environments. However, from what I read it seems that Oracle XE only supports one database instance at a time. So if I have an application that utilizes two different databases, I can't have both of them running on my local machine. Is that correct? Do the Oracle Standard or Personal editions get around this limitation? If I had one of those installed locally, how difficult would it be to get multiple databases working on the same development machine? How do dev shops handle developing against Oracle when they need several different Oracle instances for their applications?

    Read the article
