Search Results

Search found 5213 results on 209 pages for 'worm regards'.

  • The current state of a MERGE Destination for SSIS

    - by jamiet
    Hugo Tap asked me on Twitter earlier today whether or not there exists an SSIS Dataflow Destination component that enables one to MERGE data into a table rather than INSERT it. It's a common request, so I thought it might be useful to summarise the current state of play as regards a MERGE destination for SSIS. Firstly, there is no MERGE destination component in the box; that is, when you install SSIS no MERGE destination will be available. That being said, the SSIS team have made a MERGE destination component available via Codeplex, which you can get from http://sqlsrvintegrationsrv.codeplex.com/releases/view/19048. I have never used it so cannot vouch for its usefulness, although judging by some of the reviews you might not want to set your expectations too high. Your mileage may vary. In the past it has occurred to me that a built-in way to perform a MERGE from the SSIS pipeline would be highly valuable. I assume that this would have to be provided by the database into which you were merging, hence in March 2010 I submitted the following two requests to Connect:

    - BULK MERGE (111 votes at the time of writing)
    - [SSIS] BULK MERGE Destination (15 votes)

    If you think these would be useful, feel free to vote them up and add a comment. Lastly, this one is nothing to do with SSIS, but if you want to perform a minimally logged MERGE using T-SQL, Sunil Agarwal has explained how at Minimal logging and MERGE statement. @Jamiet
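
    In the meantime, the usual workaround is to land the dataflow output in a staging table (via a regular OLE DB or ADO.NET destination) and then run a T-SQL MERGE from staging into the target yourself. A minimal sketch of that pattern in C# follows; the table and column names are hypothetical and error handling is omitted for brevity:

        using System.Data.SqlClient;

        static void BulkLoadThenMerge(string connectionString, System.Data.DataTable rows)
        {
            using (var conn = new SqlConnection(connectionString))
            {
                conn.Open();

                // Land the incoming rows in a staging table first.
                using (var bulk = new SqlBulkCopy(conn) { DestinationTableName = "dbo.Customer_Staging" })
                {
                    bulk.WriteToServer(rows);
                }

                // Then merge staging into the target in one set-based statement.
                const string mergeSql = @"
                    MERGE dbo.Customer AS tgt
                    USING dbo.Customer_Staging AS src ON tgt.CustomerId = src.CustomerId
                    WHEN MATCHED THEN UPDATE SET tgt.Name = src.Name
                    WHEN NOT MATCHED THEN INSERT (CustomerId, Name) VALUES (src.CustomerId, src.Name);";
                using (var cmd = new SqlCommand(mergeSql, conn))
                {
                    cmd.ExecuteNonQuery();
                }
            }
        }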

  • What is a Data Warehouse?

    Typically, Data Warehouses are considered to be non-volatile in comparison to traditional databases, due to the fact that data within the warehouse does not change very often. In addition, Data Warehouses typically represent data through the use of Multidimensional Conceptual Views that allow data to be extracted based on the view and the current position within the view.

    Common Data Warehouse Traits

    - Relatively Non-volatile Data
    - Supports Data Extraction and Analysis
    - Optimized for Data Retrieval and Analysis
    - Multidimensional Views of Data
    - Flexible Reporting
    - Multi-User Support
    - Generic Dimensionality
    - Transparent
    - Accessible
    - Unlimited Dimensions of Data
    - Unlimited Aggregation Levels of Data

    Normally, Data Warehouses are much larger than their traditional database counterparts, because they store the base data along with data derived via Multidimensional Conceptual Views. As companies store larger and larger amounts of data, they need a way to effectively and accurately extract analytical information that can be used to aid in formulating current and future business decisions. Currently this is done through data mining within a Data Warehouse. Data Warehouses provide access to data derived through complex analysis, knowledge discovery and decision making. Secondly, they support the demand for high performance when analyzing an organization's existing and current data. Data Warehouses provide support for an organization's data and acquired business knowledge. Within a Data Warehouse, multiple types of operations/subsystems are supported.

    Common Data Warehouse Subsystems

    - Online Analytical Processing (OLAP)
    - Decision-Support Systems (DSS)
    - Online Transaction Processing (OLTP)

  • Restart problem after installing graphical card driver

    - by Tim
    My laptop is a Lenovo T400 running Ubuntu 10.10. My problem: I just ran jockey-gtk and installed the ATI/AMD proprietary FGLRX graphics driver. But after reboot, there is a short period of the graphical "Ubuntu" splash, and then instead of starting X it drops to a command line asking me to log in. Even after logging in and then issuing xinit, it still failed to start X. To solve this problem I followed this post, where one person suggested you can simply run

    sudo apt-get remove fglrx

    "This worked for me. If it doesn't work, then try

    sudo apt-get remove xorg-driver-fglrx

    and restart." I actually don't need the driver anyway, so I issued the first command after logging in at the command line. But after reboot the situation is even worse: now there is not even a command-line interface asking me to log in. Instead the screen is completely blank, with just some ambient light in the background, and Ubuntu hangs there, probably forever. So I have no chance to try the second command the person suggested. I was wondering what I can do now to solve my problem? Thanks and regards!

  • Is "White-Board-Coding" inappropriate during interviews?

    - by Eoin Campbell
    This is a somewhat subjective question, but I'd love to hear feedback/opinions from either interviewers or interviewees on the topic. We split our technical part into 4 parts: Write Code, Read & Analyse Code, Design Session, and Code on the Whiteboard. For the last part, what we ask interviewees to do is write a small code snippet (4-5 lines) on the whiteboard and explain as they go through it. Let me be clear: the purpose is not to catch people out. We're not looking for perfect syntax. Hell, it can even be pseudo-code. But the point is to give them a very simple problem and see if their brain can communicate the solution to us. By simple problems I mean "Reverse a string", "FizzBuzz" etc... EDIT: Just with regards to the comment about pseudo-code: we always ask for an explicit language first. We're a .NET C# house. We've only said "pseudo-code" where someone has been blanking/really struggling with the code. My question is: "Is it inappropriate / unreasonable to expect a programmer to write a code snippet on a whiteboard during an interview?"
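
    For reference, the scale of answer this kind of whiteboard question is fishing for: a minimal C# FizzBuzz, written here as one plausible solution rather than "the" expected one:

        using System;

        class FizzBuzz
        {
            static void Main()
            {
                for (int i = 1; i <= 100; i++)
                {
                    if (i % 15 == 0) Console.WriteLine("FizzBuzz"); // divisible by both 3 and 5
                    else if (i % 3 == 0) Console.WriteLine("Fizz");
                    else if (i % 5 == 0) Console.WriteLine("Buzz");
                    else Console.WriteLine(i);
                }
            }
        }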

  • Review: Data Modeling 101

    I just recently read "Data Modeling 101" by Scott W. Ambler, in which he gives an overview of fundamental data modeling skills. I think this article is excellent for anyone who is just starting to learn, or refreshing their skills in regards to, the modeling of data. Scott defines data modeling as the act of exploring data-oriented structures. He goes on to explain how data models are actually used by defining three different types of models.

    Types of Data Models

    - Conceptual Data Models
    - Logical Data Models (LDMs)
    - Physical Data Models (PDMs)

    He further expands on modeling by exploring common data modeling notations, because there are no industry standards for the practice of data modeling. Scott then defines how to actually model data by expanding on entities, attributes, identities, and relationships, which are the basic building blocks of data models. In addition, he discusses the value of normalization for reducing redundancy and of denormalization for performance. Finally, he discusses ways in which developers and DBAs can become better data modelers, through practice and through seeking guidance from more experienced data modelers.

  • Cloning existing software for commercial purposes - legal implications

    - by user2036256
    I have been asked to clone some existing software for a company. Basically it's an old 16-bit DOS console app, which was supplied free of charge in, I believe, the late 80's. Having replaced the machine that needs to run it with a box running Win7 x64, they can't get it to work: it crashes every couple of minutes under DOSBox. The company that supplied it appears to no longer exist - if it did, the company asking me to do this would almost certainly know about it. It's undetermined whether they have gone entirely or are just trading under a different name. If the latter, they seem to have withdrawn from the market related to this product (because again, it's a niche area; we would know about everyone in it). What is the status of this with regard to copyright etc.? The main concern for the company involved is that they want an identical interface to what they already have, so I would have to clone this entirely. Having no source code or indication of the underlying mechanisms, it would be written from scratch. Is an interface covered by copyright? Does that still hold 30 years later? What is the assumed license when none at all is provided? Under UK law, would I be under any serious risk were I to take on the project? How would this pan out if I then decided to sell the software on to other companies? Thanks

  • Lessons From OpenId, Cardspace and Facebook Connect

    - by mark.wilcox
    (c) denise carbonell

    I think Johannes Ernst summarized pretty well, in a broad sense, what happened with OpenId, Cardspace and Facebook Connect. However, I'm more interested in the lessons we can take away from this.

    First, the "Apple Lesson": if user-centric identity is going to happen, it's going to require not only technology but also a strong marketing campaign. I'm calling this the "Apple Lesson" because it's very similar to how the Apple iPad saw success versus the rest of the tablet market. The iPad is not only a very good technology product, but it was backed by a very good marketing plan. I know most people do not want to think about marketing here - but the fact is that nobody could really articulate why user-centric identity mattered in a way that the average person cared about.

    Second, the "Facebook Lesson": Facebook Connect solves a number of interesting problems in a way that is easy for both consumers and service providers. For a consumer it's simple to log in without any redirects. And while Facebook isn't perfect on privacy, no other major consumer-focused service on the Internet provides as much control over sharing identity information. From a developer perspective it is very easy to implement the SSO and fetch other identity information (if the user has given permission). This could only happen because a major company made a singular commitment to make it happen.

    Third, the "Developers Lesson": the Facebook Social Graph API is by far the simplest API for accessing identity information, which is another reason why you're seeing such rapid growth in Facebook-enabled websites. By using a combination of URLs and Javascript, the power a single HTML page now gives a developer writing Web applications is simply amazing. For example, it doesn't get much simpler than "http://api.facebook.com/mewilcox" for accessing identity. And while I can't yet share too much publicly about the specifics, the Social Graph API had a profound impact on me in designing our next-generation APIs.

    Posted via email from Virtual Identity Dialogue

  • JOIN THE ORACLE Fusion Middleware Summer Camps

    - by mseika
    For Specialized partners who are working on the following projects & opportunities, we offer these advanced summer camps:

    - BPM Suite 11g
    - ADF 11g
    - WebCenter Portal
    - WebLogic 12c
    - SOA Suite 11g
    - ADF for BPM Suite 11g
    - WebCenter Sites 11g

    All training sessions will be delivered by HQ product management and our PTS team. The sessions will take place in July, in Lisbon, Portugal and Munich, Germany. Participation is limited to two people per company and bootcamp. Registration is handled first come, first served; please pay attention to the skill requirements, the prerequisites and the follow-up! We will not accept people onto the training who do not match the criteria!

    Lisbon: Monday, July 9th 11:00 - Friday, July 13th 16:00 (Lisbon time)
    - ADF 11g advanced training by Grant Ronald and Frank Nimphius
    - WebCenter Portal advanced training by Stefan Krantz and Angelo Santagata
    - WebLogic 12c training by Cosmin Tudor

    Munich: Monday, July 16th 11:00 - Wednesday, July 18th 16:00 (CET)
    - ADF for BPM Suite 11g advanced training by David Read
    - WebCenter Sites 11g advanced training by Product Management & PTS

    Cost: free of charge; cancellation or no-show fee 2.000€. Bootcamps are limited to 20 persons, first come, first served. For details and registration please visit the Lisbon registration page & the Munich registration page.

    Quotes from the 2011 summer camps: "From zero to hero with this BPM workshop" - Steven Boon, Ordina (LinkedIn). "This is the training that prepares for real projects and POCs" - Jon Petter Hjulstad, eVita (blog & twitter).

    SOA & BPM Partner Community registration: please first login at http://partner.oracle.com and then visit http://www.oracle.com/goto/emea/soa. If you have any questions please contact the Oracle Partner Business Center, or feel free to contact us any time!

    Best regards
    Jürgen Kress
    Oracle EMEA SOA & BPM Partner Adoption
    Tel. +49 89 1430 1479
    E-Mail: [email protected]

  • Materialized View does not import properly when importing on a second instance of a database

    - by marinus
    When I import a database with materialized view mv_mt into just one database (Oracle), everything is OK:

    create materialized view mv_mt
    refresh complete next trunc( sysdate ) + 1
    as SELECT sysdate, media_type.* from media_type;

    But when I try to import the same database into a copy in another schema, I get the following errors:

    IMP-00017: following statement failed with ORACLE error 1:
    "BEGIN DBMS_JOB.ISUBMIT(JOB=438,WHAT='dbms_refresh.refresh(''"ALEXANDRA""."MV_MT"'');',NEXT_DATE=TO_DATE('2012-07-02:14:22:36','YYYY-MM-DD:HH24:MI:""SS'),INTERVAL='sysdate + 1 / 24 / 60 / 6 ',NO_PARSE=TRUE); END;"
    IMP-00003: ORACLE error 1 encountered
    ORA-00001: unique constraint (SYS.I_JOB_JOB) violated
    ORA-06512: at "SYS.DBMS_JOB", line 100
    ORA-06512: at line 1
    IMP-00017: following statement failed with ORACLE error 23421:
    "BEGIN dbms_refresh.make('"ALEXANDRA"."MV_MT"',list=null,next_date=null,"interval=null,implicit_destroy=TRUE,lax=FALSE,job=438,rollback_seg=NUL"L,push_deferred_rpc=TRUE,refresh_after_errors=FALSE,purge_option = 1,par"allelism = 0,heap_size = 0); END;"
    IMP-00003: ORACLE error 23421 encountered
    ORA-23421: job number 438 is not a job in the job queue
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
    ORA-06512: at "SYS.DBMS_IJOB", line 793
    ORA-06512: at "SYS.DBMS_REFRESH", line 86
    ORA-06512: at "SYS.DBMS_REFRESH", line 62
    ORA-06512: at line 1
    IMP-00017: following statement failed with ORACLE error 23410:
    "BEGIN dbms_refresh.add(name='"ALEXANDRA"."MV_MT"',list='"ALEXANDRA"."MV""_MT"',siteid=0,export_db='ORCL01'); END;"
    IMP-00003: ORACLE error 23410 encountered
    ORA-23410: materialized view "ALEXANDRA"."MV_MT" is already in a refresh group
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
    ORA-06512: at "SYS.DBMS_IREFRESH", line 484
    ORA-06512: at "SYS.DBMS_REFRESH", line 140
    ORA-06512: at "SYS.DBMS_REFRESH", line 125
    ORA-06512: at line 1

    Anyone have any ideas? Regards, Marinus

  • Is QtQuick.Controls available on Ubuntu 13.10?

    - by javascript is future
    I was looking to do UI development in QML, and I really want it to look native. I found QtQuick.Controls (http://qt-project.org/doc/qt-5.1/qtquickcontrols/qtquickcontrols-index.html), but when I try to make a simple application, it tells me that QtQuick.Controls isn't installed.

    main.qml:

    import QtQuick 2.1
    import QtQuick.Controls 1.0

    Rectangle {
        height: 200
        width: 200
    }

    terminal:

    $ qmlscene main.qml
    file:///tmp/main.qml:2 module "QtQuick.Controls" is not installed

    Also, I downloaded the source from https://qt.gitorious.org/qt/qtquickcontrols/source/stable and ran qmake && make, but this returned the following output:

    cd src/ && ( test -e Makefile || /usr/lib/i386-linux-gnu/qt5/bin/qmake /tmp/qtquickcontrols/src/src.pro -o Makefile ) && make -f Makefile
    make[1]: Entering directory '/tmp/qtquickcontrols/src'
    cd controls/ && ( test -e Makefile || /usr/lib/i386-linux-gnu/qt5/bin/qmake /tmp/qtquickcontrols/src/controls/controls.pro -o Makefile ) && make -f Makefile
    make[2]: Entering directory '/tmp/qtquickcontrols/src/controls'
    g++ -c -g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2 -O2 -fvisibility=hidden -fvisibility-inlines-hidden -std=c++0x -fno-exceptions -Wall -W -D_REENTRANT -fPIC -DQT_NO_XKB -DQT_NO_EXCEPTIONS -D_LARGEFILE64_SOURCE -D_LARGEFILE_SOURCE -DQT_NO_DEBUG -DQT_PLUGIN -DQT_QUICK_LIB -DQT_QML_LIB -DQT_WIDGETS_LIB -DQT_NETWORK_LIB -DQT_GUI_LIB -DQT_CORE_LIB -I/usr/share/qt5/mkspecs/linux-g++ -I. -I/usr/include/qt5 -I/usr/include/qt5/QtQuick -I/usr/include/qt5/QtQml -I/usr/include/qt5/QtWidgets -I/usr/include/qt5/QtNetwork -I/usr/include/qt5/QtGui -I/usr/include/qt5/QtGui/5.1.1 -I/usr/include/qt5/QtGui/5.1.1/QtGui -I/usr/include/qt5/QtCore -I/usr/include/qt5/QtCore/5.1.1 -I/usr/include/qt5/QtCore/5.1.1/QtCore -I.moc/release-shared -o .obj/release-shared/qquickaction.o qquickaction.cpp
    qquickaction.cpp:49:39: fatal error: private/qguiapplication_p.h: No such file or directory
    #include <private/qguiapplication_p.h>
    ^

    Is there some PPA I could use, or do I have to wait for Trusty to get out before I can use native controls from Qt? Regards

  • Webserver not giving the correct response to CURL and other HTTP request methods

    - by Maxim
    I am trying to make a REST request to an external webserver using this code:

    <?php
    $user = 'USER';
    $pass = 'PASS';
    $data = "MYDATA";

    $ch = curl_init('URL');
    curl_setopt($ch, CURLOPT_POST, true);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_POSTFIELDS, $data);
    curl_setopt($ch, CURLOPT_HTTPHEADER, array(
        'Content-Type: application/json',
        'Content-Length: ' . strlen($data))
    );
    curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
    curl_setopt($ch, CURLOPT_VERBOSE, true);

    if (!($res = curl_exec($ch))) {
        echo('[cURL Failure] ' . curl_error($ch));
    }
    curl_close($ch);
    echo($res);

    Now this is a cURL request; however, I tried different methods to test my result, and they all give me a 403 Forbidden error response from the webserver. I do get a 200 response when I run it on any other webserver (localhost, webserver2, ...). Therefore I think there is something wrong with my webserver: it might be disallowing/caching the post parameters that I provide, because sometimes it returns a 200 response, but most of the time it returns the 403. This is the response I get:

    HTTP/1.1 403 Forbidden
    Accept-Ranges: bytes
    Content-Type: application/json; charset=UTF-8
    Date: Sat, 26 Oct 2013 13:56:37 GMT
    Server: Restlet-Framework/2.1.3
    Vary: Accept-Charset, Accept-Encoding, Accept-Language, Accept
    Content-Length: 77
    Connection: keep-alive

    {"error":"ForbiddenOperationException","errorMessage":"Invalid credentials."}

    It says invalid credentials; however, I provide the correct credentials, and I can confirm they are correct because they work on other servers. Since this is a crucial part of my script that I use for clients to register, I assume there is something wrong with the post parameters. I am running cPanel and have already uninstalled the following:

    - varnish
    - apachebooster

    I also recompiled PHP and enabled curl and its dependencies, but nothing seems to resolve my problem. If more information is required, don't hesitate to ask in the comments; I will respond very quickly as I really need this. Any help is appreciated. Kind regards, Maxim

  • What to Return with Async CRUD methods

    - by RualStorge
    While there is a similar question focused on Java, I've been in debates about utilizing Task objects. What's the best way to handle returns on CRUD methods (and similar)? Common returns we've seen over the years are:

    - Void (no return unless there is an exception)
    - Boolean (true on success, false on failure, exception on unhandled failure)
    - Int or GUID (return the newly created object's Id; 0 or null on failure, exception on unhandled failure)
    - The updated object (exception on failure)
    - Result object (an object that houses the manipulated object's ID, a Boolean or status field indicating success or failure, exception information if there was one, etc.)

    The concern comes into play as we've started moving over to utilizing C# 5's async functionality, which brought up the question of how we should handle CRUD returns at large scale. In our systems we return a little of everything, and we want to standardize these returns... Now the question is: what is the recommended standard? Is there even a recommended standard yet? (I realize we need to decide our own standard, but typically we do so by looking at best practices, seeing if they make sense for us, and going from there - and here we're not finding much to work with.)
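
    To make the last option concrete, here is a minimal sketch of the "result object" style combined with Task-based async; it is one plausible shape rather than a recommended standard, and all type and member names are hypothetical:

        using System;
        using System.Threading.Tasks;

        // A small result wrapper: success flag, the created entity's id, and any exception.
        public class CrudResult<TKey>
        {
            public bool Succeeded { get; set; }
            public TKey Id { get; set; }
            public Exception Error { get; set; }
        }

        public class CustomerRepository
        {
            // Async create: callers await a CrudResult instead of catching exceptions themselves.
            public async Task<CrudResult<Guid>> CreateAsync(string name)
            {
                try
                {
                    Guid newId = Guid.NewGuid();
                    await Task.Delay(10); // stand-in for the actual data-access call
                    return new CrudResult<Guid> { Succeeded = true, Id = newId };
                }
                catch (Exception ex)
                {
                    return new CrudResult<Guid> { Succeeded = false, Error = ex };
                }
            }
        }

    A caller can then write: var result = await repo.CreateAsync("Bob"); if (!result.Succeeded) { /* inspect result.Error */ } - which keeps failure handling uniform across all CRUD methods.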

  • Leveraging .Net 4.0 Framework Tools For Encrypting Web Configuration Sections

    - by Sam Abraham
    I would like to share a few points with regards to encrypting web configuration sections in .NET 4.0. This information is also applicable to .NET 3.5 and 2.0. Two methods work perfectly well for encrypting connection strings in a web project's configuration file:

    1. Do It All Yourself! In this approach, helper functions for encrypting/decrypting configuration file content are implemented. The program explicitly retrieves the appropriate content from the configuration file, then decrypts it appropriately. Disadvantages of this implementation are the added overhead of maintaining the encryption/decryption code, as well as the burden of always ensuring sections are decrypted before use and re-encrypted whenever edited.

    2. Leverage the .NET 4.0 Framework (the way to go!) Fortunately, all the tools needed for protecting configuration files are built into the .NET 2.0/3.5/4.0 versions, with very little setup needed. To encrypt connection strings, one can use the ASP.NET IIS Registration Tool (aspnet_regiis.exe). Note that a 64-bit version of the tool also exists under the Framework64 folder for 64-bit systems. The command we need to encrypt our web.config file connection strings is simply the following:

    aspnet_regiis -pe "connectionStrings" -app "/SampleApplication" -prov "RsaProtectedConfigurationProvider"

    To later decrypt this configuration section:

    aspnet_regiis -pd "connectionStrings" -app "/SampleApplication"

    The following is a brief description of the command-line options used in the example above. aspnet_regiis supports many more options, which you can read about in the links provided for reference below.

    Option   Description
    -pe      Section name to encrypt
    -pd      Section name to decrypt
    -app     Web application name
    -prov    Encryption/decryption provider

    ASP.NET automatically decrypts the content of the web.config file at runtime, so no programming changes are needed. Another tool, aspnet_setreg.exe, is to be used if certain configuration file sections pertinent to the .NET runtime are to be encrypted. For more information on when and how to use aspnet_setreg, please refer to the references below.

    Hope this helps!

    Some great references concerning the topic:

    http://msdn.microsoft.com/en-us/library/ff650037.aspx
    http://msdn.microsoft.com/en-us/library/zhhddkxy.aspx
    http://msdn.microsoft.com/en-us/library/dtkwfdky.aspx
    http://msdn.microsoft.com/en-us/library/68ze1hb2.aspx
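
    To illustrate the "no programming changes" point: application code keeps reading the section through the normal configuration API, and the framework decrypts it transparently. A minimal sketch (the connection-string name "MainDb" is hypothetical):

        using System;
        using System.Configuration; // add a reference to System.Configuration.dll

        class Program
        {
            static void Main()
            {
                // Works the same whether the <connectionStrings> section is
                // encrypted with aspnet_regiis or stored as plain text.
                var cs = ConfigurationManager.ConnectionStrings["MainDb"];
                if (cs != null)
                    Console.WriteLine(cs.ConnectionString);
            }
        }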

  • nvidia graphics resolution problem

    - by Deepak Adhikari
    I am currently using Ubuntu 12.04. I have an Acer Aspire TimelineX 3830TG with a 2GB nvidia GeForce GT 540M graphics card. To enable my graphics card I followed these steps:

    1. I activated nvidia_current and nvidia_current_updates from Additional Drivers
    2. sudo nvidia-xconfig
    3. then reboot

    Following these steps I got the following errors:

    1. My resolution is 640x480 (there is no 1366x768 option in Displays; before the nvidia-xconfig command was entered, 1366x768 was available)
    2. When I open nvidia-settings it shows me the following error: "You do not appear to be using the NVIDIA X driver. Please edit your X configuration file (just run 'nvidia-xconfig' as root) and restart the X server."

    Problems that need to be solved:

    1. Change the resolution to 1366x768
    2. Also, how do I check whether my nvidia graphics are working or not?

    Please, someone help me to solve these issues; I am seriously in need of my graphics card. I want my nvidia graphics card to work as smoothly as my intel graphics. I am not willing to use bumblebee. With regards, ubuntu user

  • Who should write the test plan?

    - by Cheng Kiang
    Hi, I am on the in-house development team of my company, and we develop our company's web sites according to the requirements of the marketing team. Before releasing a site to them for acceptance testing, we are requested to give them a test plan to follow. However, the development team feels that since the requirements came from the requestors, the requestors would have the best knowledge of what to test, what to look out for, how things should behave, etc., and a test plan is thus not required. We are always arguing over this, and developers find it a waste of time to write down things like:

    1. Click on button A.
    2. Key in XYZ in the form field and click button B.
    3. You should see behaviour C.

    which we have to repeat for each requirement/feature requested. This is basically rephrasing what's already in the requirements document. We are moving towards using an Agile approach for managing our projects, and this is also requested at the end of each iteration. Unit and integration testing aside, who should be the one to come up with the end-user acceptance test plan? Should it be the requestors or the developers? Many thanks in advance. Regards, CK

  • Difference between bug, defect and flaw

    - by Hossein
    I was reading "Software Security: Building Security In" and in the first chapter I faced with 3 terms: bug, defect and flaw. The author gave a definition for each of them but I couldn't completely understand these. Can someone give me some examples for each term? What is a defect and what is a flaw? I think I know what bug is, a bug is a malfunction of a part of system which produces undesirable result, be it crashing on a wrong input or miscalculating a series of computations. Can someone elaborate more and correct me if I am wrong in this? UPDATE To be more precise in the book I mentioned above, they (the words) are presented in a way to make a distinction, that's why I am asking to know more. In that book there are some examples denoting which sample belongs to what and which category. For example: Buffer overflow is said to be a bug and issues in method overriding (subclassing issues) is being related to flaw category. Again race condition handling issues are considered bugs and Error-handling problems (fails open) are told to be flaws! I want more elaboration on these regards.

  • StreamInsight Now Available Through Microsoft Update

    - by Roman Schindlauer
    We are pleased to announce that StreamInsight v1.1 is now available for automatic download and install via Microsoft Update globally. In order to enable agile deployment of StreamInsight solutions, you have asked of us a steady cadence of releases with incremental but highly impactful features and product improvements. Following our StreamInsight 1.0 launch in Spring 2010, we offered StreamInsight 1.1 in Fall 2010 with implicit compatibility and an upgraded setup to support side-by-side installs. With this setup, your applications will automatically point to the latest runtime, but you still have the choice to point your application back to the 1.0 runtime if you choose to do so. As the next step, in order to enable timely delivery of our releases to you, we are pleased to announce support for automatic download and install of the StreamInsight 1.1 release via Microsoft Update starting this week. If you have a computer that:

    - is subscribed to Microsoft Update (different from Windows Update),
    - has StreamInsight 1.0 installed, and
    - does not yet have StreamInsight 1.1 installed,

    Microsoft Update will automatically download and install the corresponding StreamInsight 1.1 update side by side with your existing StreamInsight 1.0 installation - across all supported 32-bit and 64-bit Windows operating systems, across 11 supported languages, and across StreamInsight client and server SKUs. This is also supported in WSUS environments, if all your updates are managed from a corporate server (please talk to the WSUS administrator in your enterprise). As an example, if you have SI Client 1.0 DEU and SI Server 1.0 ENU installed on the same computer, Microsoft Update will selectively download and side-by-side install just the SI Client 1.1 DEU and SI Server 1.1 ENU releases. Going forward, Microsoft Update will be our preferred mode of delivery - in addition to support for our download sites, and media-based distribution where appropriate. Regards, The StreamInsight Team

  • Rankings dropping after small URL-change WITH 301-redirect

    - by David
    Two weeks ago, we attempted to make the URLs of ca. 12 pages more search-engine friendly. We changed three things.

    1. Made the URLs more SEF, from /????-????/brandname.html (meaning /aircon-price/daikin.html) to /????-brandnameinenglish-brandnameinthai.html. We set up 301-redirects from the old to the new URLs. You can find an example and the link to our page here: http://bit.ly/XRoTOK. There are no direct external links to the old URLs.

    2. Added text to the image links from the homepage to the brand pages. Before these changes, we linked to those brands only with a picture, so we added some text under the picture. You can see that here, in the left submenu: http://bit.ly/XRpfoF

    3. Minor changes to the title, h1 tags, meta description, etc. - only minor changes, to better match the on-site optimization with targeted keywords. For example, we went from full brand names ("Mitsubishi Electric Mr. Slim") to what was really searched for ("???? Mitsubishi", meaning "Aircon Mitsubishi").

    Three days after these changes, we noticed a heavy drop (80% loss in non-paid search traffic) in rankings and traffic for those pages, and also for all pages which are sub-categorized. Rankings for all keywords not affected by the changes stayed the same. Any ideas what happened, and how we can regain our old rankings? What we have already done is submit a new sitemap. Help much appreciated. Best regards, David

  • SOA Starting Point: Methods for Service Identification and Definition

    As more and more companies start to incorporate a Service-Oriented Architectural design approach into their existing enterprise systems, the need for a standardized integration technology emerges. One common technology used by companies is an Enterprise Service Bus (ESB). An ESB, as defined by Progress Software, connects and mediates all communications and interactions between services. In essence, an ESB is a form of middleware that allows services to communicate with one another regardless of framework, environment, or location. With the emergence of the ESB, a new emphasis is now being placed on approaches that can be used to determine which Web services should be built, and in what order. In May 2011, SOA Magazine published an article that identified ten common methods for identifying and defining services.

    SOA's Ten Common Methods for Service Identification and Definition:

    1. Business Process Decomposition
    2. Business Functions
    3. Business Entity Objects
    4. Ownership and Responsibility
    5. Goal-Driven
    6. Component-Based
    7. Existing Supply (Bottom-Up)
    8. Front-Office Application Usage Analysis
    9. Infrastructure
    10. Non-Functional Requirements

    Each of these methods has various pros and cons in regards to its use within the design process. I personally feel that multiple methodologies should be used during a design process in order to accurately define a design for a system or enterprise system. I like to create a custom cocktail derived from combining these methodologies, in order to ensure that my design fits the project's and business's needs while still following development standards and guidelines. Of these ten methods, I am particularly fond of Business Process Decomposition, Business Functions, Goal-Driven, and Component-Based, and I routinely use them in my designs.

    Works Cited

    Hubbers, J.-W., Ligthart, A., & Terlouw, L. (2007, 12 10). Ten Ways to Identify Services. SOA Magazine: http://www.soamag.com/I13/1207-1.php

    Progress.com. (2011, 10 30). ESB Architecture and Lifecycle Definition. Progress.com: http://web.progress.com/en/esb-architecture-lifecycle-definition.html

  • Cat 6 Only 100mbit speed

    - by Stu2000
    I tried two different Cat 6 cables directly connected between my two Ubuntu machines. This one, which I ordered online (http://www.amazon.co.uk/gp/product/B002SQPDXS/ref=wms_ohs_product), only achieves 100mbit speeds, though it does appear to support a crossover (direct pc-to-pc) connection; the other Cat 6 cable worked perfectly and gets the full 1 gigabit speed. Both tests were performed using FTP and checking the network monitor, with a direct pc-to-pc connection. Did the product from Amazon lie to me, or do I need to manually set a setting somewhere in Ubuntu for some cables? I had thought 10 quid for 20m of gigabit ethernet cable was a bit cheap; you get what you pay for... Regards, Stu

    Update: It seems that after rebooting, the device is set to 1000mbit/sec when looked up with sudo ethtool eth0. However, after a while this will drop down to just 100, after which, to reset it to 1000, I have to reboot; simply unplugging and re-plugging the cable doesn't do it. I tried setting this in the networking config file as suggested here:

    auto eth0
    iface eth0 inet static
    pre-up /usr/sbin/ethtool -s eth0 speed 1000 duplex full

    but that resulted in my networking failing to start. Is there a problem with my 'auto-negotiation' or something? Can I manually override a setting to 1000mbit?

  • Ubuntu 10.04 LTS Server - fresh install - failed apt-get update

    - by user87227
    Good day and greetings to all. I just did a fresh installation of Ubuntu 10.04 LTS Server without any issues. However, apt-get update (or aptitude update) is giving "bzip2: (stdin) is not a bzip2 file" and "Ign" for all lines, plus the following errors:

    Fetched 3,582B in 0s (74.1kB/s)
    Reading package lists...
    W: A error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://security.ubuntu.com lucid-security Release: The following signatures were invalid: NODATA 1 NODATA 2
    W: A error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://in.archive.ubuntu.com lucid Release: The following signatures were invalid: NODATA 1 NODATA 2
    W: A error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://in.archive.ubuntu.com lucid-updates Release: The following signatures were invalid: NODATA 1 NODATA 2
    W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/lucid-security/Release
    W: Failed to fetch http://in.archive.ubuntu.com/ubuntu/dists/lucid/Release
    W: Failed to fetch http://in.archive.ubuntu.com/ubuntu/dists/lucid-updates/Release
    W: Some index files failed to download, they have been ignored, or old ones used instead.

    Please guide me in resolving this error. TIA. Regards, Venu

  • Clustering and custom applications

    - by Ahmed ilyas
    I was not entirely sure what tags to put, but I hope this is ok. This is just a general question about clustering and applications. Let's say we have a clustered environment set up, and we cluster SQL Server (I don't know exactly how it's done, but let's just say it has been, for the sake of argument). Now suppose a website or application is trying to access that database for read/write (say an ASP.NET app or a C# WinForms app), and during that time SQL Server goes down; it takes a couple of minutes for the clustering failover to take effect and switch to another node. What happens during this time? I think requests will time out or be unable to connect. BUT is there a way to place the request in some pipeline, so that when the cluster node is back up or switched over, it will continue as normal?

    As you can see, I know nothing much about clustering! What about your own custom .NET apps? Would there be a special way to develop them? I know that you could, say, create a simple Hello World app and cluster that, but it wouldn't be something you could see in terms of the UI or anything, so it would effectively need to be developed as a Windows Service, or perhaps as a standard console app which runs without waiting for user input and from which you wouldn't see any output (unless you redirect it elsewhere). What I'm getting at here is: for those who have experience with or have developed a clustered application in .NET, how did you do it, and what are the things to be aware of? For example, we have cloud services - fundamentally they're built on clustering: if there is an outage, another node takes its place and service resumes as normal, without us really seeing much of that downtime.
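
    One common answer to the "pipeline" question above, offered here as an illustration rather than as part of the original post: have the client retry transient connection failures with a backoff, so requests issued during the failover window succeed once the standby node comes up. A minimal hedged sketch in C#; the connection string and retry numbers are hypothetical:

        using System;
        using System.Data.SqlClient;
        using System.Threading;

        class FailoverAwareClient
        {
            // Retry a query a few times so a cluster failover window is survivable.
            static object ExecuteScalarWithRetry(string sql, int maxAttempts = 5)
            {
                for (int attempt = 1; ; attempt++)
                {
                    try
                    {
                        using (var conn = new SqlConnection("Server=myCluster;Database=AppDb;Integrated Security=true"))
                        using (var cmd = new SqlCommand(sql, conn))
                        {
                            conn.Open();
                            return cmd.ExecuteScalar();
                        }
                    }
                    catch (SqlException) when (attempt < maxAttempts)
                    {
                        // Node may be failing over: wait with linear backoff, then try again.
                        Thread.Sleep(TimeSpan.FromSeconds(5 * attempt));
                    }
                }
            }
        }

    On the final attempt the exception propagates, so a genuine outage still surfaces as an error rather than hanging forever.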

  • Data Management Business Continuity Planning

    Business Continuity Governance: In order to ensure data continuity, an organization needs to know how to handle a data or network emergency, because all systems have the potential to fail.

    Data Continuity Checklist:

    - Disaster Recovery Plan/Policy
    - Backups
    - Redundancy
    - Trained Staff

    Business Continuity Policies: In order to protect data in case of any emergency, a company needs to put in place a disaster recovery plan and policies that can be executed by IT staff to ensure the continuity of existing data and/or limit the amount of data that is lost. A disaster recovery plan is a comprehensive statement of consistent actions to be taken before, during and after a disaster, according to Geoffrey H. Wold. He also states that the primary objective of disaster recovery planning is to protect the organization in the event that all or part of its operations and/or computer services are rendered unusable. Furthermore, companies can mandate through policy that IT must maintain redundant hardware in case of any hardware failures, and redundant network connectivity in case the primary internet service provider goes down. Additionally, they can require that all staff be trained on the disaster recovery policy, to ensure that all parties involved are knowledgeable enough to execute the recovery plan.

    Business Continuity Procedures: Business continuity procedures vary from organization to organization; however, there are standard procedures that most organizations should follow.

    Standard Business Continuity Procedures:

    - Back up, and test backups to ensure that they work
    - Hire knowledgeable and trainable staff
    - Offer training on new and existing systems
    - Regularly monitor, test, maintain, and upgrade existing system hardware and applications
    - Maintain redundancy for all data and critical business functionality

  • Library Organization in .NET

    - by Greg Ros
    I've written a .NET bitwise operations library as part of my projects (stuff ranging from get-MSB-set to some more complicated bitwise transformations), and I mean to release it as free software. I'm a bit confused about one design aspect of the library, though. Many of the methods/transformations in the library come in different endianness variants. A simple example is a GetBitAt method that regards index 0 as either the least significant bit or the most significant bit, depending on the version used. In practice, I've found that using separate functions for different endianness results in much more comprehensible and reusable code than assuming all operations are little-endian or something. I'm really stumped about how best to package the library. Should the methods that have LE and BE versions take an enum parameter in their signature, e.g. Endianness.Little, Endianness.Big? Or should I have different static classes with identically named methods, such as MSB.GetBit and LSB.GetBit? On a much wider note, is there a standard I could use in cases like this? Some guide? Is my design issue trivial? I have a perfectionist bent, and I sometimes get stuck on tricky design issues like this...

    Note: I've sort of realized I'm using "endianness" somewhat colloquially, to refer to the order/place value of digital component parts (be they bits, bytes, or words) in a larger whole, in any setting. I'm not talking about machine-level endianness or serial-transmission endianness, just place-value semantics in general. So there isn't a context of targeting different machines/transmission techniques or anything.
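
    To make the two packaging options concrete, here is a hedged C# sketch of both shapes side by side; the names are the question's own suggestions, and the 32-bit width is an assumption for illustration:

        using System;

        public enum Endianness { Little, Big }

        // Option 1: one method, endianness as an enum parameter.
        public static class Bits
        {
            public static bool GetBitAt(uint value, int index, Endianness order)
            {
                // In Big order, index 0 addresses the most significant bit of 32.
                int shift = order == Endianness.Little ? index : 31 - index;
                return ((value >> shift) & 1u) != 0;
            }
        }

        // Option 2: two static classes with identically named methods.
        public static class LSB
        {
            // Index 0 is the least significant bit.
            public static bool GetBit(uint value, int index) => ((value >> index) & 1u) != 0;
        }

        public static class MSB
        {
            // Index 0 is the most significant bit.
            public static bool GetBit(uint value, int index) => ((value >> (31 - index)) & 1u) != 0;
        }

    The trade-off in miniature: the enum keeps one discoverable API surface but makes every call site carry a parameter; the paired classes read cleanly at the call site (LSB.GetBit(x, 0)) but double the number of public entry points.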

  • Webserver on a rotating server with NAT IP or changing IPs

    - by hpsoftware
    I will have to elaborate on my question, so please have patience.

    Explaining the logic: if you are familiar with LogMeIn, it installs client software on your computer, and then it keeps track of where your computer is as long as it's connected to the internet. So you can always access your computer no matter where it is or what its IP is; you just go to logmein.com and access it.

    Now what I am asking: let's assume I have a website hosted on my laptop - call it a webserver. As I move around I get a new IP, sometimes even on a hotel network. Is it possible to do something like what LogMeIn does, so I can keep moving my webserver to new IPs, with some local client or similar that keeps updating my IP? I am sure I would need a gateway server somewhere, connected to my domain name via DNS, so that somebody accessing my website www.mywebsite.com reaches my main server and then gets routed to my laptop, which could be anywhere, as long as my gateway server is able to communicate with my webserver.

    I will keep updating the case description based on comments to make more sense. Please have patience with me. Regards
