Search Results

Search found 59554 results on 2383 pages for 'distributed data mining'.


  • SQL SERVER – Quiz and Video – Introduction to Discovering XML Data Type Methods

    - by pinaldave
    This blog post is inspired by SQL Interoperability Joes 2 Pros: A Guide to Integrating SQL Server with XML, C#, and PowerShell – SQL Exam Prep Series 70-433 – Volume 5. [Amazon] | [Flipkart] | [Kindle] | [IndiaPlaza] It is a follow-up to my earlier blog post on the same subject, SQL SERVER – Introduction to Discovering XML Data Type Methods – A Primer. In that article we discussed the basic terminology of XML and the following important concepts: what the XML data type methods are, the query() method, the value() method, the exist() method, and the modify() method. These five are the most important concepts related to XML and SQL Server. There is much more to learn, but without the beginner's fundamentals one can't move on to the advanced concepts. Let us have a small quiz and check how many of you get the fundamentals right.
    Quiz
    1.) Which method returns an XML fragment from the source XML? query( ) / value( ) / exist( ) / modify( ) / All of them / Only query( ) and value( )
    2.) Which XML data type method returns a "1" if found and "0" if the specified XPath is not found in the source XML? query( ) / value( ) / exist( ) / modify( ) / All of them / Only query( ) and value( )
    3.) Which XML data type method allows you to pick the data type of the value that is returned from the source XML? query( ) / value( ) / exist( ) / modify( ) / All of them / Only query( ) and value( )
    4.) Which method will not work with a SQL SELECT statement? query( ) / value( ) / exist( ) / modify( ) / All of them / Only query( ) and value( )
    Make sure you write down all your answers on a piece of paper, then watch the following video and read the earlier article here. If you want to change an answer, you still have the chance.
    Solution: 1) 1, 2) 3, 3) 2, 4) 4. Now compare your answers with these; I am very confident you got them correct. Available in the USA: Amazon; in India: Flipkart | IndiaPlaza. Volumes: 1, 2, 3, 4, 5. Please leave your feedback on the quiz and video in the comment area. Did you know all the answers? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
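    For anyone who wants to try the four methods hands-on before answering, here is a minimal sketch. It assumes a reachable SQL Server instance and the pyodbc driver; the connection string, sample XML and column aliases are illustrative, not from the original post. Note that modify() is left out of the SELECT on purpose – it only works inside a SET or UPDATE statement (quiz answer 4).

      import pyodbc

      # Illustrative connection string -- adjust server, database and authentication.
      conn = pyodbc.connect(
          "DRIVER={ODBC Driver 17 for SQL Server};"
          "SERVER=localhost;DATABASE=tempdb;Trusted_Connection=yes;"
      )

      # One T-SQL batch exercising query(), value() and exist() on an XML variable.
      tsql = """
      DECLARE @x XML = N'<order id="7"><item qty="2">Widget</item></order>';
      SELECT
          @x.query('/order/item')                  AS item_fragment,  -- XML fragment
          @x.value('(/order/item/@qty)[1]', 'INT') AS qty_as_int,     -- typed scalar
          @x.exist('/order/item[@qty=2]')          AS has_qty_2;      -- 1 or 0
      """

      row = conn.cursor().execute(tsql).fetchone()
      print(row.item_fragment, row.qty_as_int, row.has_qty_2)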

    Read the article

  • Accessing SQL Server data from iOS apps

    - by RobertChipperfield
    Almost all mobile apps need access to external data to be valuable. With a huge amount of existing business data residing in Microsoft SQL Server databases, and an ever-increasing drive to make more and more available to mobile users, how do you marry the rather separate worlds of Microsoft's SQL Server and Apple's iOS devices? The classic answer: write a web service layer Look at any of the questions on this topic asked in Internet discussion forums, and you'll inevitably see the answer, "just write a web service and use that!". But what does this process gain? For a well-designed database with a solid security model, and business logic in the database, writing a custom web service on top of this just to access some of the data from a different platform seems inefficient and unnecessary. Desktop applications interact with the SQL Server directly - why should mobile apps be any different? The better answer: the iSql SDK Working along the lines of "if you do something more than once, make it shared," we set about coming up with a better solution for the general case. And so the iSql SDK was born: sitting between SQL Server and your iOS apps, it provides the simple API you're used to if you've been developing desktop apps using the Microsoft SQL Native Client. It turns out a web service remained a sensible idea: HTTP is much more suited to the Big Bad Internet than SQL Server's native TDS protocol, removing the need for complex configuration, firewall configuration, and the like. However, rather than writing a web service for every app that needs data access, we made the web service generic, serving only as a proxy between the SQL Server and a client library integrated into the iPhone or iPad app. This client library handles all the network communication, and provides a clean API. OSQL in 25 lines of code As an example of how to use the API, I put together a very simple app that allowed the user to enter one or more SQL statements, and displayed the results in a rather primitively formatted text field. The total amount of Objective-C code responsible for doing the work? About 25 lines. You can see this in action in the demo video. Beta out now - your chance to give us your suggestions! We've released the iSql SDK as a beta on the MobileFoo website: you're welcome to download a copy, have a play in your own apps, and let us know what we've missed using the Feedback button on the site. Software development should be fun and rewarding: no-one wants to spend their time writing boiler-plate code over and over again, so stop writing the same web service code, and start doing exciting things in the new world of mobile data!

    Read the article

  • Architecture for database analytics

    - by David Cournapeau
    Hi, we have an architecture where we provide each customer (internet merchants) Business Intelligence-like services for their website. Now I need to analyze that data internally (for algorithmic improvement, performance tracking, etc.), and it is potentially quite heavy: we have up to millions of rows per customer per day, and I may want to know how many queries we had in the last month, compared week by week, and so on – that is on the order of billions of entries, if not more. The way it is currently done is quite standard: daily scripts that scan the databases and generate big CSV files. I don't like this solution for several reasons: as is typical with those kinds of scripts, they fall into the write-once and never-touched-again category; tracking things in "real time" is necessary (we have a separate toolset to query the last few hours at the moment); and it is slow and non-"agile". Although I have some experience dealing with huge datasets for scientific use, I am a complete beginner as far as traditional RDBMSs go. It seems that using a column-oriented database for analytics could be a solution (the analytics don't need most of the data we have in the app database), but I would like to know what other options are available for this kind of problem.

    Read the article

  • Best CPUs for speeding up compiling times of C++ w/ DistGCC

    - by Jay
    I'm putting together a distributed build farm with DistGCC to speed up our team's compile times and am just looking for thoughts on which processors to use in the hosts. Are we going to get a noticeable decrease in time using 8 cores vs. 4 hyperthreaded cores? Is there a big difference in time between i7 and Xeon? Etc., etc. I just need advice from people who've put together kick-a build clusters. We've got most of the normal things to speed up builds in place (pre-compiled headers, ccache, local gigabit connections between them, tons of RAM, etc.), so please just give advice on the best processor to use. Money is a factor, but anything's doable if the performance increase is noticeable. Thanks. Jay EDIT: Although any advice IS welcome, please refrain from "Do this first" posts, as we're not planning on skimping on things like SSDs, maxed-out RAM, etc. My personal system is an iMac quad-core i5 with 8GB of RAM. When I build our project locally, my processor floats around 99-100% the majority of the time, which makes me assume it is the bottleneck, even if everything else were made faster. My RAM, on the other hand, doesn't even get close to maxing out. It's also worth noting that I did research this; however, every discussion I could find was primarily about gaming machines, which is obviously a different beast in terms of usage. These machines won't even have monitors or anything but integrated graphics, since they have one purpose: build freakin fast. (Hopefully.)

    Read the article

  • Clustering filesystem for small files

    - by viraptor
    Hi, I'm looking for a distributed filesystem I could use for storing lots of small files (usually <1MB). What I want is: two servers that both have the filesystem mounted and mirror the data; locking support (among reachable nodes); and some kind of best-effort automatic resynchronisation after one node goes down and comes back again. By resync I mean that I'm OK with both servers doing read/write operations even if they split-brain, and I'm also OK with a local process obtaining a lock if the other host is not reachable. From the resync I expect only a file-level consistent view after a while – that is, if file x is modified on both nodes during a split-brain, I don't really care which version is available after they rejoin, as long as it's a full file, not one block coming from node1 and another block from node2. Is there a solution like that out there? I see that Gluster has some problems with file locks (even in 3.1), and I also noticed that OCFS2 will panic if both nodes split-brain. What other filesystem would allow me to do what I want?

    Read the article

  • Distributing processing for an application that wasn't designed with that in mind

    - by Tim
    We've got an application at work that just sits and does a whole bunch of iterative processing on some data files to perform simulations. This is an "old" Win32 application that isn't multi-processor aware, so new(ish) computers and workstations mostly sit idle while running it. However, since it's installed by a typical Windows InstallShield installer, I can't seem to install and run multiple copies of the application. The work can be split up manually before processing, enabling it to be distributed across multiple machines, but we still can't take advantage of multi-core CPUs. The results can be joined back together after processing to make a complete simulation (a sketch of that split-and-join wrapping is shown below). Is there a product out there that would let me "compartmentalize" an installation (or four) so I can take advantage of a multi-core CPU? I had thought of using MS SoftGrid, but I believe that still depends on a remote server to do the heavy lifting (please correct me if I'm wrong). Furthermore, is there a way I can distribute the workload off the one machine, so an input could be split into 50 chunks, handed out to 50 machines, and worked on – all without really changing the initial application? In a perfect world, I'd get the application to take advantage of a desktop grid (BOINC), but as with most "mission critical corporate applications", the need is there but the money is not. Thank you in advance (and sorry if this isn't appropriate for Server Fault).
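    Since the post already notes that the work can be split before processing and the results joined afterwards, here is a rough sketch of that wrapping, purely to illustrate the pattern. The line-based chunking, file names and chunk count are assumptions, not details from the original application.

      from pathlib import Path

      def split_input(src: str, n_chunks: int, out_dir: str) -> list[Path]:
          """Split one input file into roughly equal line-based chunks."""
          lines = Path(src).read_text().splitlines(keepends=True)
          per = max(1, -(-len(lines) // n_chunks))          # ceiling division
          chunks = []
          for i in range(0, len(lines), per):
              p = Path(out_dir) / f"chunk_{i // per:03d}.dat"
              p.write_text("".join(lines[i:i + per]))
              chunks.append(p)
          return chunks                                      # hand these out to the worker machines

      def join_results(result_files: list[Path], dest: str) -> None:
          """Concatenate per-chunk results back into one complete simulation output."""
          with open(dest, "w") as out:
              for rf in sorted(result_files):
                  out.write(rf.read_text())

      # Example: split_input("run.dat", 50, "chunks/"), copy each chunk to a machine,
      # run the Win32 app on it, collect the outputs, then join_results(outputs, "full.out").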

    Read the article

  • Nagios DNX plugins

    - by danneh3826
    I'm toying with the idea of multiple Nagios instances set up to monitor our infrastructure. I've looked at the various methods of distributed Nagios checks, and I think DNX comes out the closest. DNX handles failure of worker nodes; that's fine. What happens if the main DNX server fails, though? Is there a way to replicate the server too? I'm using AWS EC2 primarily, so I can utilise Elastic Load Balancing for the web UI, but I need to be able to handle the AZ where the monitoring server lives failing, and essentially have a second server pick up the checking load (active/passive or active/active, so long as monitoring doesn't fail completely). The other thing I'm trying to solve is an issue with routing. What I'd like is to have multiple nodes report a fault before Nagios confirms it as critical. Not the NRPE checks, as they're pretty self-explanatory, but things more like check_ping. I often have routing issues out of AWS to certain datacenters, so Nagios can often report bad/no ping or a timeout as a critical issue, even though the machine in question is working fine. Would it be possible to have a setup where a worker reports a service check as critical, and a second worker node (positioned in another datacenter/AZ) must also report the service as critical before the Nagios central server issues a critical alert? I realise I might be asking a bit much (how far down the line do you go setting up failover systems before it starts to get ridiculous), but surely someone must have thought of this scenario when developing DNX?

    Read the article

  • How to distribute multiple executions of an app across many machines

    - by Salec
    I've got a simulation app (64-bit windows) that runs without any user interaction. This app gathers information and pushes it to a remote MS SQL Server. What I'd like to do is execute this simulation as many times as I can on multiple machines after our nightly build has finished and it has passed the test suite. If possible I'd love to have the ability to configure it to stop after x total runs or if the entire batch has taken over y hours. I've tried using Visual Studio's built in test framework since we already have a test lab set up with multiple agents. I created a single unit test that simply runs the simulation then I created an ordered test and added that single test multiple times (from what I gather, this is the only way to execute the same unit test more than once). I found that ordered tests are only run on a single agent and not distributed which is very limiting. We use TeamCity to perform our nightly builds and I suspect it's possible to implement this on top of that, but I'm fairly new to TeamCity. We also have Jenkins and Bamboo available and I'm open to any other software that would get the job done presuming it runs on a 64-bit Windows OS. Any suggestions?
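    As a fallback if none of the CI tools fit, the stop-after-x-runs-or-y-hours part is simple to script on each agent; below is a minimal sketch of that loop. The executable path, budgets and error handling are placeholders – the simulation itself is assumed to push its results to the remote MS SQL Server as described above.

      import subprocess
      import time

      SIM_EXE = r"C:\builds\latest\Simulation.exe"   # placeholder path to the nightly build output
      MAX_RUNS = 200                                  # x: total runs allowed on this agent
      MAX_HOURS = 8                                   # y: wall-clock budget

      deadline = time.monotonic() + MAX_HOURS * 3600
      runs = 0
      while runs < MAX_RUNS and time.monotonic() < deadline:
          # Each invocation gathers data and pushes it to the remote SQL Server itself.
          result = subprocess.run([SIM_EXE], check=False)
          runs += 1
          if result.returncode != 0:
              print(f"run {runs} exited with {result.returncode}; continuing")
      print(f"finished after {runs} runs")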

    Read the article

  • Import Data from Excel sheet to DB Table through OAF page

    - by PRajkumar
    1. Create a New Workspace and Project
       File > New > General > Workspace Configured for Oracle Applications
       File Name – PrajkumarImportxlsDemo
       A new OA Project will also be created automatically:
       Project Name -- ImportxlsDemo
       Default Package -- prajkumar.oracle.apps.fnd.importxlsdemo

    2. Add the JAR file jxl-2.6.3.jar to the Apache Library
       Download jxl-2.6.3.jar from the following link –
       http://www.findjar.com/jar/net.sourceforge.jexcelapi/jars/jxl-2.6.jar.html
       To add the jxl.jar file on your local machine: right-click on ImportxlsDemo > Project Properties > Libraries > Add jar/Directory, browse to the directory where jxl-2.6.3.jar has been downloaded, and select the JAR file.
       To add the jxl.jar file at the EBS middle tier: on your EBS middle tier copy jxl.jar to $FND_TOP/java/3rdparty/standalone and add $FND_TOP/java/3rdparty/standalone/jxl.jar to the custom classpath in the Jserv.properties file, which is at $IAS_ORACLE_HOME/Apache/Jserv/etc, for example:
       wrapper.classpath=/U01/oracle/dev/devappl/fnd/11.5.0/java/3rdparty/stdalone/jxl.jar
       Bounce the Apache server.

    3. Create a New Application Module (AM)
       Right-click on ImportxlsDemo > New > ADF Business Components > Application Module
       Name -- ImportxlsAM
       Package -- prajkumar.oracle.apps.fnd.importxlsdemo.server
       Check Application Module Class: ImportxlsAMImpl – Generate Java File(s)

    4. Create the test table into which we will insert data from Excel:

       CREATE TABLE xx_import_excel_data_demo
       (  -- Data Columns
          column1                 VARCHAR2(100),
          column2                 VARCHAR2(100),
          column3                 VARCHAR2(100),
          column4                 VARCHAR2(100),
          column5                 VARCHAR2(100),
          -- Who Columns
          last_update_date        DATE      NOT NULL,
          last_updated_by         NUMBER    NOT NULL,
          creation_date           DATE      NOT NULL,
          created_by              NUMBER    NOT NULL,
          last_update_login       NUMBER
       );

    5. Create a New Entity Object (EO)
       Right-click on ImportxlsDemo > New > ADF Business Components > Entity Object
       Name – ImportxlsEO
       Package -- prajkumar.oracle.apps.fnd.importxlsdemo.schema.server
       Database Objects -- XX_IMPORT_EXCEL_DATA_DEMO
       Note – by default ROWID will be the primary key if we do not make any column the primary key.
       Check the Accessors, Create Method, Validation Method and Remove Method.

    6. Create a New View Object (VO)
       Right-click on ImportxlsDemo > New > ADF Business Components > View Object
       Name -- ImportxlsVO
       Package -- prajkumar.oracle.apps.fnd.importxlsdemo.server
       In Step 2 (Entity Page) select ImportxlsEO and shuttle it to the selected list.
       In Step 3 (Attributes window) select all columns and shuttle them to the selected list.
       On the Java page, uncheck Generate Java file for View Object Class: ImportxlsVOImpl and select Generate Java File for View Row Class: ImportxlsVORowImpl -> Generate Java File -> Accessors.

    7. Add Your View Object to the Root UI Application Module
       Right-click on ImportxlsAM > Edit ImportxlsAM > Data Model > select ImportxlsVO and shuttle it to the Data Model list.

    8. Create a New Page
       Right-click on ImportxlsDemo > New > Web Tier > OA Components > Page
       Name -- ImportxlsPG
       Package -- prajkumar.oracle.apps.fnd.importxlsdemo.webui

    9. Select ImportxlsPG and go to the structure pane, where a default region has been created.

    10. Select region1 and set the following properties:
        ID = PageLayoutRN
        AM Definition = prajkumar.oracle.apps.fnd.importxlsdemo.server.ImportxlsAM
        Window Title = Import Data From Excel through OAF Page Demo

    11. Create a messageComponentLayout region under the Page Layout region
        Right-click PageLayoutRN > New > Region and set:
        ID = MainRN
        Item Style = messageComponentLayout

    12. Create a new messageFileUpload bean under MainRN
        Right-click on MainRN > New > messageFileUpload and set the following properties for the new item:
        ID = MessageFileUpload
        Item Style = messageFileUpload

    13. Create a new submit button bean under MainRN
        Right-click on MainRN > New > messageLayout and set:
        ID = ButtonLayout
        Then right-click on ButtonLayout > New > Item and set:
        ID = Go
        Item Style = submitButton
        Attribute Set = /oracle/apps/fnd/attributesets/Buttons/Go

    14. Create a controller for page ImportxlsPG
        Right-click on PageLayoutRN > Set New Controller
        Package Name: prajkumar.oracle.apps.fnd.importxlsdemo.webui
        Class Name: ImportxlsCO

        Write the following code in ImportxlsCO's processFormRequest:

        import oracle.apps.fnd.framework.OAApplicationModule;
        import oracle.apps.fnd.framework.OAException;
        import java.io.Serializable;
        import oracle.apps.fnd.framework.webui.OAControllerImpl;
        import oracle.apps.fnd.framework.webui.OAPageContext;
        import oracle.apps.fnd.framework.webui.beans.OAWebBean;
        import oracle.cabo.ui.data.DataObject;
        import oracle.jbo.domain.BlobDomain;

        public void processFormRequest(OAPageContext pageContext, OAWebBean webBean)
        {
          super.processFormRequest(pageContext, webBean);
          if (pageContext.getParameter("Go") != null)
          {
            DataObject fileUploadData = (DataObject)pageContext.getNamedDataObject("MessageFileUpload");
            String fileName = null;
            try
            {
              fileName = (String)fileUploadData.selectValue(null, "UPLOAD_FILE_NAME");
            }
            catch (NullPointerException ex)
            {
              throw new OAException("Please Select a File to Upload", OAException.ERROR);
            }
            BlobDomain uploadedByteStream = (BlobDomain)fileUploadData.selectValue(null, fileName);
            try
            {
              OAApplicationModule oaapplicationmodule = pageContext.getRootApplicationModule();
              Serializable aserializable2[] = {uploadedByteStream};
              Class aclass2[] = {BlobDomain.class};
              oaapplicationmodule.invokeMethod("ReadExcel", aserializable2, aclass2);
            }
            catch (Exception ex)
            {
              throw new OAException(ex.toString(), OAException.ERROR);
            }
          }
        }

        Write the following code in ImportxlsAMImpl.java:

        import java.io.IOException;
        import java.io.InputStream;
        import jxl.Cell;
        import jxl.CellType;
        import jxl.Sheet;
        import jxl.Workbook;
        import jxl.read.biff.BiffException;
        import oracle.apps.fnd.framework.server.OAApplicationModuleImpl;
        import oracle.jbo.Row;
        import oracle.apps.fnd.framework.OAViewObject;
        import oracle.apps.fnd.framework.server.OAViewObjectImpl;
        import oracle.jbo.domain.BlobDomain;

        public void createRecord(String[] excel_data)
        {
          OAViewObject vo = (OAViewObject)getImportxlsVO1();
          if (!vo.isPreparedForExecution())
          {
            vo.executeQuery();
          }
          Row row = vo.createRow();
          try
          {
            for (int i = 0; i < excel_data.length; i++)
            {
              row.setAttribute("Column" + (i + 1), excel_data[i]);
            }
          }
          catch (Exception e)
          {
            System.out.println(e.getMessage());
          }
          vo.insertRow(row);
          getTransaction().commit();
        }

        public void ReadExcel(BlobDomain fileData) throws IOException
        {
          String[] excel_data = new String[5];
          InputStream inputWorkbook = fileData.getInputStream();
          Workbook w;
          try
          {
            w = Workbook.getWorkbook(inputWorkbook);
            // Get the first sheet
            Sheet sheet = w.getSheet(0);
            for (int i = 0; i < sheet.getRows(); i++)
            {
              for (int j = 0; j < sheet.getColumns(); j++)
              {
                Cell cell = sheet.getCell(j, i);
                CellType type = cell.getType();
                if (cell.getType() == CellType.LABEL)
                {
                  System.out.println("I got a label " + cell.getContents());
                  excel_data[j] = cell.getContents();
                }
                if (cell.getType() == CellType.NUMBER)
                {
                  System.out.println("I got a number " + cell.getContents());
                  excel_data[j] = cell.getContents();
                }
              }
              createRecord(excel_data);
            }
          }
          catch (BiffException e)
          {
            e.printStackTrace();
          }
        }

    15. Congratulations, you have successfully finished. Run your page and test your work. (The original post shows a sample Excel file, PRAJ_TEST.xls, and the resulting rows imported into the DB table as screenshots.)

    Read the article

  • Files missing after copying from one sdcard to another using Ubuntu 12.04

    - by Abhi Kandoi
    I have two Kingston SD cards, 8GB and 32GB – the 8GB for my HTC Sensation and the 32GB for my (Chinese) tablet. I soon ran out of space on my phone and decided to swap the data between the cards. I copied the data from the 32GB card to my PC (with Ubuntu 12.04), deleted everything on the 32GB card, and copied the data from the 8GB card onto it. Finally, I copied the data from my PC to the 8GB card (after first deleting everything that was on it). Now the problem is that the 8GB card has exactly the same data, but the 32GB card has some data missing: all my photos (not copied or posted to Facebook) and songs are lost. What can I do to get my data back? I have Android Revolution HD (ICS 4.0.3) on my rooted HTC Sensation (India), and I suspect it has something to do with the data loss.

    Read the article

  • Custom SNMP Cacti Data Source fails to update

    - by Andrew Wilkinson
    I'm trying to create a custom SNMP datasource for Cacti but despite everything I can check being correct, it is not creating the rrd file, or updating it even when I create it. Other, standard SNMP sources are working correctly so it's not SNMP or permissions that are the problem. I've created a new Data Query, which when I click on "Verbose Query" on the device screen returns the following: + Running data query [10]. + Found type = '3' [SNMP Query]. + Found data query XML file at '/volume1/web/cacti/resource/snmp_queries/syno_volume_stats.xml' + XML file parsed ok. + missing in XML file, 'Index Count Changed' emulated by counting oid_index entries + Executing SNMP walk for list of indexes @ '.1.3.6.1.2.1.25.2.3.1.3' Index Count: 8 + Index found at OID: '.1.3.6.1.2.1.25.2.3.1.3.1' value: 'Physical memory' + Index found at OID: '.1.3.6.1.2.1.25.2.3.1.3.3' value: 'Virtual memory' + Index found at OID: '.1.3.6.1.2.1.25.2.3.1.3.6' value: 'Memory buffers' + Index found at OID: '.1.3.6.1.2.1.25.2.3.1.3.7' value: 'Cached memory' + Index found at OID: '.1.3.6.1.2.1.25.2.3.1.3.10' value: 'Swap space' + Index found at OID: '.1.3.6.1.2.1.25.2.3.1.3.31' value: '/' + Index found at OID: '.1.3.6.1.2.1.25.2.3.1.3.32' value: '/volume1' + Index found at OID: '.1.3.6.1.2.1.25.2.3.1.3.33' value: '/opt' + index_parse at OID: '.1.3.6.1.2.1.25.2.3.1.3.1' results: '1' + index_parse at OID: '.1.3.6.1.2.1.25.2.3.1.3.3' results: '3' + index_parse at OID: '.1.3.6.1.2.1.25.2.3.1.3.6' results: '6' + index_parse at OID: '.1.3.6.1.2.1.25.2.3.1.3.7' results: '7' + index_parse at OID: '.1.3.6.1.2.1.25.2.3.1.3.10' results: '10' + index_parse at OID: '.1.3.6.1.2.1.25.2.3.1.3.31' results: '31' + index_parse at OID: '.1.3.6.1.2.1.25.2.3.1.3.32' results: '32' + index_parse at OID: '.1.3.6.1.2.1.25.2.3.1.3.33' results: '33' + Located input field 'index' [walk] + Executing SNMP walk for data @ '.1.3.6.1.2.1.25.2.3.1.3' + Found item [index='Physical memory'] index: 1 [from value] + Found item [index='Virtual memory'] index: 3 [from value] + Found item [index='Memory buffers'] index: 6 [from value] + Found item [index='Cached memory'] index: 7 [from value] + Found item [index='Swap space'] index: 10 [from value] + Found item [index='/'] index: 31 [from value] + Found item [index='/volume1'] index: 32 [from value] + Found item [index='/opt'] index: 33 [from value] + Located input field 'volsizeunit' [walk] + Executing SNMP walk for data @ '.1.3.6.1.2.1.25.2.3.1.4' + Found item [volsizeunit='1024 Bytes'] index: 1 [from value] + Found item [volsizeunit='1024 Bytes'] index: 3 [from value] + Found item [volsizeunit='1024 Bytes'] index: 6 [from value] + Found item [volsizeunit='1024 Bytes'] index: 7 [from value] + Found item [volsizeunit='1024 Bytes'] index: 10 [from value] + Found item [volsizeunit='4096 Bytes'] index: 31 [from value] + Found item [volsizeunit='4096 Bytes'] index: 32 [from value] + Found item [volsizeunit='4096 Bytes'] index: 33 [from value] + Located input field 'volsize' [walk] + Executing SNMP walk for data @ '.1.3.6.1.2.1.25.2.3.1.5' + Found item [volsize='1034712'] index: 1 [from value] + Found item [volsize='3131792'] index: 3 [from value] + Found item [volsize='1034712'] index: 6 [from value] + Found item [volsize='775904'] index: 7 [from value] + Found item [volsize='2097080'] index: 10 [from value] + Found item [volsize='612766'] index: 31 [from value] + Found item [volsize='1439812394'] index: 32 [from value] + Found item [volsize='1439812394'] index: 33 [from value] + Located input field 'volused' [walk] 
+ Executing SNMP walk for data @ '.1.3.6.1.2.1.25.2.3.1.6' + Found item [volused='1022520'] index: 1 [from value] + Found item [volused='1024096'] index: 3 [from value] + Found item [volused='32408'] index: 6 [from value] + Found item [volused='775904'] index: 7 [from value] + Found item [volused='1576'] index: 10 [from value] + Found item [volused='148070'] index: 31 [from value] + Found item [volused='682377865'] index: 32 [from value] + Found item [volused='682377865'] index: 33 [from value] As you can see, it appears to be returning the correct data. I've also set up data templates and graph templates to display the data. The "create graphs for a device" screen shows the correct data, and when I select one row and click create, a new data source and graph are created. Unfortunately, the data source is never updated. Increasing the poller log level suggests that it isn't even querying the data source, despite the data source being in use. What should my next steps be to debug this issue?

    Read the article

  • Remote Desktop failed logon event 4625 not logging correctly on 2008 Terminal Services server

    - by Zone12
    When I use the new remote desktop with ssl and try to log on with bad credentials it logs a 4625 event as expected. The problem is, it doesn't log the ip address, so I can't block malicious logons in our firewall. The event looks like this: <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event"> <System> <Provider Name="Microsoft-Windows-Security-Auditing" Guid="{00000000-0000-0000-0000-000000000000}" /> <EventID>4625</EventID> <Version>0</Version> <Level>0</Level> <Task>12544</Task> <Opcode>0</Opcode> <Keywords>0x8010000000000000</Keywords> <TimeCreated SystemTime="2012-04-13T06:52:36.499113600Z" /> <EventRecordID>467553</EventRecordID> <Correlation /> <Execution ProcessID="544" ThreadID="596" /> <Channel>Security</Channel> <Computer>ontheinternet</Computer> <Security /> </System> <EventData> <Data Name="SubjectUserSid">S-1-0-0</Data> <Data Name="SubjectUserName">-</Data> <Data Name="SubjectDomainName">-</Data> <Data Name="SubjectLogonId">0x0</Data> <Data Name="TargetUserSid">S-1-0-0</Data> <Data Name="TargetUserName">notauser</Data> <Data Name="TargetDomainName">MYSERVER-PC</Data> <Data Name="Status">0xc000006d</Data> <Data Name="FailureReason">%%2313</Data> <Data Name="SubStatus">0xc0000064</Data> <Data Name="LogonType">3</Data> <Data Name="LogonProcessName">NtLmSsp</Data> <Data Name="AuthenticationPackageName">NTLM</Data> <Data Name="WorkstationName">MYSERVER-PC</Data> <Data Name="TransmittedServices">-</Data> <Data Name="LmPackageName">-</Data> <Data Name="KeyLength">0</Data> <Data Name="ProcessId">0x0</Data> <Data Name="ProcessName">-</Data> <Data Name="IpAddress">-</Data> <Data Name="IpPort">-</Data> </EventData> </Event> It seems because the logon type is 3 and not 10 like the old rdp sessions, the ip address and other information is not stored. The machine I am trying to connect from is on the internet and not on the same network as the server. Does anyone know where this information is stored (and what other events are generated with a failed logon)? Any help will be much appreciated.

    Read the article

  • Windows 7 explorer always crashes, opens small "Personalized Settings" window

    - by Ian Sellar
    My Windows 7 desktop PC, built by me, started acting very weird in the last couple of days. I use it quite often, about half of the time through TeamViewer. Explorer would crash and restart randomly, almost always through TeamViewer. This made me suspect that TeamViewer was the problem but I have reproduced it with and without TeamViewer several times. The only way I can seem to get the problem not to occur is by booting into Safe Mode. I have used CCleaner and Malwarebytes to make sure it wasn't a registry error or malware causing the problem, and I have tried the fix in the seemly related issue here as well every other fix I have found online including removing security updates KB980408 and KB2926765 as well as using "sfc /scannow" and a bunch of other things I can't remember. More recently when I try to start explorer it is popping up a small window that says "Personalized Settings" on the top, but is completely empty and crashes instantly. The only way I can get it to disappear is to kill the explorer.exe process. I wish I could take a screenshot but I can't seem to open paint or even find the exe. I have tried restarting it, I have tried starting it while the personalized settings window was open. I have come up with two lists of processes the first is the list of active processes when I boot into safe mode and explorer seems to work fine. The second is the list of processes that I can narrow it down to in normal boot and still replicate the problem. There is one process that I can't seem to close. NisSrv.exe which is describes as "Microsoft Network Realtime Inspection Service". When I try to close the process NisSrv.exe it says "The operation could not be completed. Access is denied." When I try to close the related service it gives the same message. Image Name PID Session Name Session# Mem Usage ========================= ======== ================ =========== ============ System Idle Process 0 Services 0 24 K System 4 Services 0 2,660 K smss.exe 304 Services 0 1,196 K csrss.exe 408 Services 0 4,156 K wininit.exe 444 Services 0 4,608 K csrss.exe 452 Console 1 8,700 K services.exe 492 Services 0 7,700 K winlogon.exe 524 Console 1 5,756 K lsass.exe 536 Services 0 10,644 K lsm.exe 544 Services 0 4,316 K svchost.exe 652 Services 0 8,976 K MsMpEng.exe 804 Services 0 40,696 K explorer.exe 1332 Console 1 85,220 K ctfmon.exe 1376 Console 1 3,680 K dllhost.exe 1624 Console 1 8,656 K chrome.exe 1408 Console 1 98,504 K WmiPrvSE.exe 2352 Services 0 6,472 K chrome.exe 1744 Console 1 65,116 K taskmgr.exe 372 Console 1 14,948 K cmd.exe 2776 Console 1 2,960 K conhost.exe 1816 Console 1 3,580 K tasklist.exe 2308 Console 1 5,868 K And the list of processes I have narrowed it down to. 
Image Name PID Session Name Session# Mem Usage ========================= ======== ================ =========== ============ System Idle Process 0 Services 0 24 K System 4 Services 0 2,808 K smss.exe 316 Services 0 1,216 K csrss.exe 484 Services 0 4,532 K wininit.exe 596 Services 0 4,604 K csrss.exe 604 Console 1 23,676 K services.exe 652 Services 0 11,344 K lsass.exe 668 Services 0 12,692 K lsm.exe 676 Services 0 4,464 K MsMpEng.exe 972 Services 0 68,436 K winlogon.exe 168 Console 1 7,784 K svchost.exe 496 Services 0 19,140 K NisSrv.exe 3176 Services 0 808 K svchost.exe 1684 Services 0 11,260 K taskmgr.exe 4524 Console 1 20,696 K cmd.exe 4764 Console 1 7,224 K conhost.exe 4772 Console 1 6,916 K sublime_text.exe 2340 Console 1 45,012 K dllhost.exe 4476 Console 1 8,736 K tasklist.exe 3796 Console 1 5,768 K WmiPrvSE.exe 1768 Services 0 6,344 K Here is the event data xml from event viewer for the error I am getting. <EventData> <Data>explorer.exe</Data> <Data>6.1.7601.17567</Data> <Data>4d672ee4</Data> <Data>vrfcore.dll</Data> <Data>6.3.9600.16384</Data> <Data>5215f8f5</Data> <Data>80000003</Data> <Data>0000000000003a00</Data> <Data>12e4</Data> <Data>01cfb84fa70f89dc</Data> <Data>C:\Windows\system32\explorer.exe</Data> <Data>C:\Windows\SYSTEM32\vrfcore.dll</Data> <Data>e5957093-2442-11e4-9f8a-94de806ed9cb</Data> </EventData> I was looking through the eventvwr log again and I found this, possibly related <EventData> <Data>runonce.exe</Data> <Data>6.1.7601.17514</Data> <Data>4ce7a253</Data> <Data>MSVCR100.dll</Data> <Data>10.0.40219.325</Data> <Data>4df2bcac</Data> <Data>c0000005</Data> <Data>000000000003c145</Data> <Data>670</Data> <Data>01cfb8dabbd85942</Data> <Data>C:\Windows\system32\runonce.exe</Data> <Data>C:\Windows\system32\MSVCR100.dll</Data> <Data>fa6f82b9-24cd-11e4-80a8-94de806ed9cb</Data> </EventData> And the general error details Faulting application name: Explorer.EXE, version: 6.1.7601.17567, time stamp: 0x4d672ee4 Faulting module name: vrfcore.dll, version: 6.3.9600.16384, time stamp: 0x5215f8f5 Exception code: 0x80000003 Fault offset: 0x0000000000003a00 Faulting process id: 0xc38 Faulting application start time: 0x01cfb84e5e852c5f Faulting application path: C:\Windows\Explorer.EXE Faulting module path: C:\Windows\SYSTEM32\vrfcore.dll Report Id: 9dc19e6d-2441-11e4-9f8a-94de806ed9cb Another probably unrelated error that I seem to be getting pretty often. Event filter with query "SELECT * FROM __InstanceModificationEvent WITHIN 60 WHERE TargetInstance ISA "Win32_Processor" AND TargetInstance.LoadPercentage > 99" could not be reactivated in namespace "//./root/CIMV2" because of error 0x80041003. Events cannot be delivered through this filter until the problem is corrected. My explorer tab in Autoruns seen below along with the error when I try to uncheck something. I should add that I seem to be able to disable shell extensions with ShellExView but I still can't get explorer to start correctly. EXPLORER SHELL UPDATE - See screenshot below I can access the explorer right click menu through a file manager I downloaded called NexusFile, but still no luck starting explorer. Another round of errors that I am getting regarding Windows Search Service The search service has detected corrupted data files in the index {id=4700}. The service will attempt to automatically correct this problem by rebuilding the index. Details: The content index catalog is corrupt. 
(HRESULT : 0xc0041801) (0xc0041801) followed by The Windows Search Service is being stopped because there is a problem with the indexer: The catalog is corrupt. Details: The content index catalog is corrupt. (HRESULT : 0xc0041801) (0xc0041801 and The plug-in in <Search.JetPropStore> cannot be initialized. Context: Windows Application, SystemIndex Catalog Details: The content index catalog is corrupt. (HRESULT : 0xc0041801) (0xc0041801) and The gatherer object cannot be initialized. Context: Windows Application, SystemIndex Catalog Details: The content index catalog is corrupt. (HRESULT : 0xc0041801) (0xc0041801) and The Windows Search Service cannot load the property store information. Context: Windows Application, SystemIndex Catalog Details: The content index database is corrupt. (HRESULT : 0xc0041800) (0xc0041800) WER Log http://pastebin.com/WXKGDT4Q I'll add information as I remember it or people request it.

    Read the article

  • Backup data rate on Raspberry Pi maxing out at 5 Mb/s. Why?

    - by bastibe
    I set up my Raspberry Pi as a Time Machine, as documented here. At the moment, the Raspberry Pi is connected to my MacBook Pro using a direct Ethernet cable. An external hard drive (a laptop drive) is connected to the Raspberry Pi via the USB port. However, backups are pretty slow. Activity Monitor claims that the network is transferring a very steady 5 Mb/s, whereas my Time Capsule transfers up to 8 Mb/s with a lot of fluctuation. The Raspberry Pi self-reports (top) that its CPU is only half-used, with roughly equal parts afpd, usb-storage and jbd2/sda1-8, so the Raspberry Pi's processing power does not seem to be the problem here. To me this looks like there is some kind of bottleneck that maxes out at 5 Mb/s, leaving my backups running slower than they could. To the best of my knowledge, this might be the AFP daemon, the USB bus or the external hard drive. So my question is: how can I identify the true culprit, and what can I do about it?
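    One way to narrow it down is to measure each leg on its own and compare the numbers against the ~5 Mb/s seen end-to-end. The snippet below is a rough sketch that times a large sequential read from the USB disk (run it on the Pi, pointed at a big existing file on the mounted drive); if raw disk reads are far faster than 5 Mb/s, suspicion shifts to afpd or the Pi's shared USB/Ethernet path. The file path and chunk size are placeholders.

      import time

      TEST_FILE = "/mnt/backup/large_test_file"   # placeholder: a big file on the USB drive
      CHUNK = 1024 * 1024                          # read in 1 MiB chunks

      total = 0
      start = time.monotonic()
      with open(TEST_FILE, "rb", buffering=0) as f:
          while True:
              block = f.read(CHUNK)
              if not block:
                  break
              total += len(block)
      elapsed = time.monotonic() - start
      print(f"read {total / 1e6:.1f} MB in {elapsed:.1f} s "
            f"= {total * 8 / 1e6 / elapsed:.1f} Mbit/s")

      # Use a file larger than the Pi's RAM (or drop caches first) so the page cache
      # doesn't flatter the result, then repeat the measurement over the network
      # (e.g. copying the same file over AFP) to see which leg drops to ~5 Mb/s.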

    Read the article

  • Excel: What formula combines this data into one COUNT amount?

    - by Mike
    I have 30 colleagues who are answering questions over 3 time periods. Each has their own Excel workbook with the questions, and over the year they update it. I collate their worksheets into one master worksheet, but now I need to combine their answers into a simple table: the questions, the time periods, and a COUNT of how many people answered each one. For example, I need a table that shows me how many people (not the people's names at this point) answered question 10 in time period 2. I can't use a database, before someone mentions it ;). Many thanks, Mike.
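    In Excel itself this kind of tally is exactly what COUNTIFS is built for (one criterion on the question column, one on the period column, per cell of the summary table). If scripting the master workbook is acceptable, the whole cross-tab can also be produced in one go; the sketch below assumes the collated sheet has columns named Question, Period and Answer, which are made-up names for illustration only.

      import pandas as pd

      # Load the collated master sheet (file name and sheet name are placeholders).
      df = pd.read_excel("master.xlsx", sheet_name="Collated")

      # Count how many non-empty answers each question received in each time period.
      summary = df.pivot_table(
          index="Question",      # one row per question
          columns="Period",      # one column per time period
          values="Answer",
          aggfunc="count",       # counts non-null answers only
          fill_value=0,
      )
      print(summary)
      # summary.loc[10, 2] would then be "how many people answered question 10 in period 2".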

    Read the article

  • How to store data on a machine whose power gets cut at random

    - by Sevas
    I have a virtual machine (Debian) running on a physical machine host. The virtual machine acts as a buffer for data that it frequently receives over the local network (the period for this data is 0.5s, so a fairly high throughput). Any data received is stored on the virtual machine and repeatedly forwarded to an external server over UDP. Once the external server acknowledges (over UDP) that it has received a data packet, the original data is deleted from the virtual machine and not sent to the external server again. The internet connection that connects the VM and the external server is unreliable, meaning it could be down for days at a time. The physical machine that hosts the VM gets its power cut several times per day at random. There is no way to tell when this is about to happen and it is not possible to add a UPS, a battery, or a similar solution to the system. Originally, the data was stored on a file-based HSQLDB database on the virtual machine. However, the frequent power cuts eventually cause the database script file to become corrupted (not at the file system level, i.e. it is readable, but HSQLDB can't make sense of it), which leads to my question: How should data be stored in an environment where power cuts can and do happen frequently? One option I can think of is using flat files, saving each packet of data as a file on the file system. This way if a file is corrupted due to loss of power, it can be ignored and the rest of the data remains intact. This poses a few issues however, mainly related to the amount of data likely being stored on the virtual machine. At 0.5s between each piece of data, 1,728,000 files will be generated in 10 days. This at least means using a file system with an increased number of inodes to store this data (the current file system setup ran out of inodes at ~250,000 messages and 30% disk space used). Also, it is hard (not impossible) to manage. Are there any other options? Are there database engines that run on Debian that would not get corrupted by power cuts? Also, what file system should be used for this? ext3 is what is used at the moment. The software that runs on the virtual machine is written using Java 6, so hopefully the solution would not be incompatible.
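    The flat-file idea can be made quite robust against power cuts if each packet is written with the usual write-to-temp, fsync, atomic-rename sequence, so a cut leaves either the previous state or a complete new file and never a torn one. The buffering software here is Java, but the pattern is the same in any language; this Python sketch uses an assumed spool directory and naming scheme. Keeping the spool on a filesystem mounted with write barriers enabled matters just as much as the code.

      import os
      import time
      import uuid

      SPOOL_DIR = "/var/spool/buffer"   # assumed spool directory for pending packets

      def store_packet(payload: bytes) -> str:
          """Persist one packet so a power cut leaves either nothing or a complete file."""
          name = "%013d-%s.pkt" % (int(time.time() * 1000), uuid.uuid4().hex)
          tmp_path = os.path.join(SPOOL_DIR, "." + name + ".tmp")
          final_path = os.path.join(SPOOL_DIR, name)

          with open(tmp_path, "wb") as f:
              f.write(payload)
              f.flush()
              os.fsync(f.fileno())            # data really on disk before the rename

          os.replace(tmp_path, final_path)    # atomic within one filesystem

          dir_fd = os.open(SPOOL_DIR, os.O_DIRECTORY)
          try:
              os.fsync(dir_fd)                # make the directory entry durable too
          finally:
              os.close(dir_fd)
          return final_path

      # The forwarder then sends each *.pkt file over UDP and deletes it only after the
      # external server's acknowledgement arrives; stray .tmp files found at boot are
      # leftovers from an interrupted write and can simply be discarded.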

    Read the article

  • S#harp architecture mapping many to many and ado.net data services: A single resource was expected f

    - by Leg10n
    Hi, I'm developing an application that reads data from a SQL server database (migrated from a legacy DB) with nHibernate and s#arp architecture through ADO.NET Data services. I'm trying to map a many-to-many relationship. I have a Error class: public class Error { public virtual int ERROR_ID { get; set; } public virtual string ERROR_CODE { get; set; } public virtual string DESCRIPTION { get; set; } public virtual IList<ErrorGroup> GROUPS { get; protected set; } } And then I have the error group class: public class ErrorGroup { public virtual int ERROR_GROUP_ID {get; set;} public virtual string ERROR_GROUP_NAME { get; set; } public virtual string DESCRIPTION { get; set; } public virtual IList<Error> ERRORS { get; protected set; } } And the overrides: public class ErrorGroupOverride : IAutoMappingOverride<ErrorGroup> { public void Override(AutoMapping<ErrorGroup> mapping) { mapping.Table("ERROR_GROUP"); mapping.Id(x => x.ERROR_GROUP_ID, "ERROR_GROUP_ID"); mapping.IgnoreProperty(x => x.Id); mapping.HasManyToMany<Error>(x => x.Error) .Table("ERROR_GROUP_LINK") .ParentKeyColumn("ERROR_GROUP_ID") .ChildKeyColumn("ERROR_ID").Inverse().AsBag(); } } public class ErrorOverride : IAutoMappingOverride<Error> { public void Override(AutoMapping<Error> mapping) { mapping.Table("ERROR"); mapping.Id(x => x.ERROR_ID, "ERROR_ID"); mapping.IgnoreProperty(x => x.Id); mapping.HasManyToMany<ErrorGroup>(x => x.GROUPS) .Table("ERROR_GROUP_LINK") .ParentKeyColumn("ERROR_ID") .ChildKeyColumn("ERROR_GROUP_ID").AsBag(); } } When I view the Data service in the browser like: http://localhost:1905/DataService.svc/Errors it shows the list of errors with no problems, and using it like http://localhost:1905/DataService.svc/Errors(123) works too. The Problem When I want to see the Errors in a group or the groups form an error, like: "http://localhost:1905/DataService.svc/Errors(123)?$expand=GROUPS" I get the XML Document, but the browser says: The XML page cannot be displayed Cannot view XML input using XSL style sheet. Please correct the error and then click the Refresh button, or try again later. -------------------------------------------------------------------------------- Only one top level element is allowed in an XML document. Error processing resource 'http://localhost:1905/DataServic... <error xmlns="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata"> -^ I view the sourcecode, and I get the data. However it comes with an exception: <error xmlns="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata"> <code></code> <message xml:lang="en-US">An error occurred while processing this request.</message> <innererror xmlns="xmlns"> <message>A single resource was expected for the result, but multiple resources were found.</message> <type>System.InvalidOperationException</type> <stacktrace> at System.Data.Services.Serializers.Serializer.WriteRequest(IEnumerator queryResults, Boolean hasMoved)&#xD; at System.Data.Services.ResponseBodyWriter.Write(Stream stream)</stacktrace> </innererror> </error> A I missing something??? Where does this error come from?

    Read the article

  • Data Warehouse ETL slow - change primary key in dimension?

    - by Jubbles
    I have a working MySQL data warehouse that is organized as a star schema, and I am using Talend Open Studio for Data Integration 5.1 to create the ETL process. I would like this process to run once per day. I have estimated that one of the dimension tables (dimUser) will have approximately 2 million records and 23 columns. I created a small test ETL process in Talend that worked, but given the amount of data that may need to be updated daily, the current performance will not cut it: it takes the ETL process four minutes to UPDATE or INSERT 100 records into dimUser. If I assume a linear relationship between the number of records and the time to UPDATE or INSERT, there is no way the ETL can finish in 3-4 hours (my hope), let alone one day. Since I'm unfamiliar with Java, I wrote the ETL as a Python script and ran into the same problem. I did discover, though, that if I did only INSERTs, the process went much faster, so I am pretty sure the bottleneck is the UPDATE statements. The primary key in dimUser is an auto-increment integer. My friend suggested that I scrap this primary key and replace it with a multi-field primary key (in my case, 2-3 fields). Before I rip the test data out of my warehouse and change the schema, can anyone provide suggestions or guidelines related to: (1) the design of the data warehouse; (2) the ETL process; (3) how realistic it is to have an ETL process INSERT or UPDATE a few million records each day; and (4) whether my friend's suggestion will significantly help? If you need any further information, just let me know and I'll post it.
    UPDATE – additional information:
    mysql> describe dimUser;
    Field              Type                  Null  Key  Default              Extra
    user_key           int(10) unsigned      NO    PRI  NULL                 auto_increment
    id_A               int(10) unsigned      NO         NULL
    id_B               int(10) unsigned      NO         NULL
    field_4            tinyint(4) unsigned   NO         0
    field_5            varchar(50)           YES        NULL
    city               varchar(50)           YES        NULL
    state              varchar(2)            YES        NULL
    country            varchar(50)           YES        NULL
    zip_code           varchar(10)           NO         99999
    field_10           tinyint(1)            NO         0
    field_11           tinyint(1)            NO         0
    field_12           tinyint(1)            NO         0
    field_13           tinyint(1)            NO         1
    field_14           tinyint(1)            NO         0
    field_15           tinyint(1)            NO         0
    field_16           tinyint(1)            NO         0
    field_17           tinyint(1)            NO         1
    field_18           tinyint(1)            NO         0
    field_19           tinyint(1)            NO         0
    field_20           tinyint(1)            NO         0
    create_date        datetime              NO         2012-01-01 00:00:00
    last_update        datetime              NO         2012-01-01 00:00:00
    run_id             int(10) unsigned      NO         999
    I used a surrogate key because I had read that it was good practice. From a business perspective I want to stay aware of potential fraudulent activity (say a user is associated with state X for 200 days and then the next day with state Y – they could have moved, or their account could have been compromised), which is why the geographic data is kept. The field id_B may have a few distinct values of id_A associated with it, but I am interested in knowing distinct (id_A, id_B) tuples. In the context of this information, my friend suggested that something like (id_A, id_B, zip_code) be the primary key. For the large majority of daily ETL runs (80%), I only expect the following fields to be updated for existing records: field_10 through field_14, last_update, and run_id (this last field is a foreign key to my etlLog table and is used for ETL auditing purposes).
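    Row-by-row UPDATEs issued one at a time are usually what makes a load like this crawl; batching the changed rows and letting MySQL do the insert-or-update in a single statement tends to help far more than changing the surrogate key. The sketch below is only an illustration of that pattern, not the original Talend/Python job: it assumes a unique key on (id_A, id_B) (added alongside, not instead of, the auto-increment user_key) and uses PyMySQL with made-up connection details and sample rows.

      import pymysql

      rows = [
          # (id_A, id_B, field_10, field_11, field_12, field_13, field_14, run_id)
          (101, 9001, 1, 0, 0, 1, 0, 1234),
          (102, 9002, 0, 1, 0, 1, 0, 1234),
      ]

      sql = """
          INSERT INTO dimUser
              (id_A, id_B, field_10, field_11, field_12, field_13, field_14,
               last_update, run_id)
          VALUES (%s, %s, %s, %s, %s, %s, %s, NOW(), %s)
          ON DUPLICATE KEY UPDATE
              field_10 = VALUES(field_10),
              field_11 = VALUES(field_11),
              field_12 = VALUES(field_12),
              field_13 = VALUES(field_13),
              field_14 = VALUES(field_14),
              last_update = NOW(),
              run_id = VALUES(run_id)
      """

      conn = pymysql.connect(host="localhost", user="etl", password="secret", database="dw")
      try:
          with conn.cursor() as cur:
              cur.executemany(sql, rows)   # one batched round trip instead of one UPDATE per row
          conn.commit()
      finally:
          conn.close()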

    Read the article

  • Change Data Capture Webinar

    I am going to be doing a webinar with our friends at Attunity on Change Data Capture.  Attunity have a good story around this technology and you can use it in your SSIS loads to great effect. Join Attunity and Konesans/SQLIS for a Webinar on 17 September Space is limited. Reserve your Webinar seat now at: https://www1.gotomeeting.com/register/693735512 Want increased efficiency and real-time speed when conducting ETL loads? Need lower implementation costs while minimizing system impact? Learn how change data capture (CDC) technologies can reduce ETL load times. Allan Mitchell, Principal Consultant at Konesans and SQLServer MVP specialising in ETL, will explain CDC concepts and benefits and how CDC can dramatically reduce ETL load times. Ian Archibald, Pre-Sales Director EMEA for Attunity, will present and demonstrate Attunity's award-winning Oracle-CDC for SSIS, a fully-integrated SSIS solution for designing, deploying and managing Oracle CDC processes. Title: Change Data Capture - Reducing ETL Load Times Date: Thursday, September 17, 2009 Time: 10:00 AM - 11:00 AM BST ABOUT THE SPEAKERS: Allan Mitchell is the joint owner of Konesans Ltd, a UK based consultancy specializing in SQL Server, and most importantly SQL Server Integration Services. Having been working with SQL Server from 6.5 onwards, he has extensive experience in many aspects of SQL Server, but now focuses on the BI suite of tools. He is a SQL Server MVP, a frequent poster on the MS SSIS/DTS newsgroups, and runs the sqldts.com and sqlis.com resource sites. Ian Archibald, Attunity Pre-Sales Director EMEA, has worked in Attunity’s UK Office for 17 years. An expert in Attunity solutions, Ian has extensive knowledge of Attunity’s products and data integration & CDC technologies. After registering you will receive a confirmation email containing information about joining the Webinar. System Requirements PC-based attendees Required: Windows® 2000, XP Home, XP Pro, 2003 Server, Vista Macintosh®-based attendees Required: Mac OS® X 10.4 (Tiger®) or newer

    Read the article

  • Accessing Server-Side Data from Client Script: Accessing JSON Data From an ASP.NET Page Using jQuery

    When building a web application, we must decide how and when the browser will communicate with the web server. The ASP.NET WebForms model greatly simplifies web development by providing a straightforward mechanism for exchanging data between the browser and the server. With WebForms, each ASP.NET page's rendered output includes a <form> element that performs a postback to the same page whenever a Button control within the form is clicked, or whenever the user modifies a control whose AutoPostBack property is set to True. On postback, the server sends the entire contents of the web page back to the browser, which then displays this new content. With WebForms we don't need to spend much time or effort thinking about how or when the browser will communicate with the server or how that returned information will be processed by the browser. It just works. While this approach certainly works and has its advantages, it's not without its drawbacks. The primary concern with postback forms is that they require a large amount of information to be exchanged between the browser and the server. Specifically, the browser sends back all of its form fields (including hidden ones, like view state, which may be quite large) and then the server sends back the entire contents of the web page. Granted, there are scenarios where this large quantity of data needs to be exchanged, but in many cases we can use techniques that exchange much less information. However, these techniques necessitate spending more time and effort thinking about how and when to have the browser communicate with the server and intelligently deciding on what information needs to be exchanged. This article, the first in a multi-part series, examines different techniques for accessing server-side data from a browser using client-side script. Throughout this series we will explore alternative ways to expose data on the server so that it can be accessed from the browser using script; we will also examine various tools for communicating with the server from JavaScript, including jQuery and the ASP.NET AJAX library. Read on to learn more! Read More >

    Read the article

  • Do you need all that data?

    - by BuckWoody
    I read an amazing post over on Ars Technica (link: http://arstechnica.com/science/news/2010/03/the-software-brains-behind-the-particle-colliders.ars?utm_source=rss&utm_medium=rss&utm_campaign=rss) about the LHC, or as they are also known, the "particle colliders". Beyond the pure scientific geek awesomeness, these instruments have the potential to collect more data than you can (or possibly should) store. Actually, this problem has a lot in common with a BI system. There's so much granular detail available in the source systems that a designer has to decide how, and how much, to roll up the data. Whenever you do that, you lose fidelity, but in many cases that's OK. Take, for example, your car's speedometer. You don't actually need to track each and every point of speed as it happens. You only need to know that you're hovering around the speed limit at a certain point in time. Since this is the way that humans perceive data, is there some lesson we should take in the design of data "flows" – and what implications does this have for new technologies like StreamInsight?

    Read the article

  • SQL SERVER – Standards Support, Protocol, Data Portability – 3 Important SQL Server Documentations for Downloads

    - by pinaldave
    I have been working with SQL Server continuously for more than 8 years now, and I like to read a lot. Sometimes I read easy things and sometimes I read material that is not so easy. Here are a few recently released documents that I have referred to and read. They are not easy reads, but they are very important ones if you like to read more advanced material. SQL Server Standards Support Documentation: provides detailed support information for certain standards that are implemented in Microsoft SQL Server. Microsoft SQL Server Protocol Documentation: provides technical specifications for Microsoft proprietary protocols that are implemented and used in Microsoft SQL Server 2008. Microsoft SQL Server Data Portability Documentation: explains the various mechanisms by which user-created data in SQL Server can be extracted for use in other software products; these mechanisms include import/export functionality, documented APIs, industry-standard formats, and documented data structures/file formats. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article
