Search Results

Search found 71553 results on 2863 pages for 'form data'.

  • Submitting InfoPath data into a SharePoint form library

    - by yreddy
    Hi, I am getting the following error message while submitting an InfoPath form to a SharePoint form library. Please give a step-by-step answer for how to resolve the issue below. **InfoPath cannot submit the form. An error occurred while the form was being submitted. The form cannot be submitted to the following location: The folder does not exist.** Thanks

    Read the article

  • Prevent a form from reposting on refresh

    - by meo
    Given is a form that is followed by a confirmation page where you have to confirm the data you entered in the previous form. Now if the user refreshes this confirmation page, the data from the previous form is reposted. Is there a way to prevent that in JavaScript? I know it's not an ideal solution and it should happen on the server side, but I have no choice in this case.
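
    A minimal client-side sketch of one approach, assuming a browser that supports the HTML5 History API (the server-side Post/Redirect/Get pattern remains the proper fix): replacing the current history entry right after the POST response loads makes a subsequent refresh re-request the page with GET instead of re-sending the form data.

        // Run on the confirmation page, once it has loaded after the POST.
        if (window.history && window.history.replaceState) {
            // Re-register the current URL as a plain history entry, so a
            // refresh no longer triggers a form-resubmission prompt.
            window.history.replaceState(null, document.title, window.location.href);
        }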

    Read the article

  • How to generate the .form file from a .java file in Swing

    - by vamshikpd
    Hi all, I am using Sun Java Studio Enterprise 8 and the Swing framework. In my source folder I have a Demo.java and a Demo.form file; the Demo.form file is the design file. If both files are in the source folder, the IDE shows both the source and design views for the Java file. If I delete Demo.form from the source folder, the design view no longer shows. If I have only the Demo.java file, how can I regenerate Demo.form? Please help, this is urgent.

    Read the article

  • Referencing a form inside a selector in jQuery

    - by ooo
    If I have this code in jQuery: var parentDiv = $(this).parent('.copyInstance'); — inside this div there is a form. How do I reference a form element inside this selector? I want something like this: $.post($(parentDiv + " Form").attr('action'), $(parentDiv + " Form").serialize(), function(data) { }); but the above doesn't seem to work.
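
    A minimal sketch of the usual fix: parentDiv is already a jQuery object, so concatenating it into a selector string ($(parentDiv + " Form")) cannot work; .find() scopes the lookup to the wrapper's descendants instead. The .copyInstance markup is assumed from the question.

        var parentDiv = $(this).parent('.copyInstance');
        // Look the form up among the wrapper's descendants rather than
        // string-concatenating a jQuery object into a selector.
        var form = parentDiv.find('form');
        $.post(form.attr('action'), form.serialize(), function (data) {
            // Handle the server's response here.
        });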

    Read the article

  • How to handle form data in CakePHP

    - by Vinay
    I have a form for adding Person objects, and I want to add any number of people without saving them to the database. After the form's submit button is clicked, the same form should appear again, and the submitted data should be saved to the session rather than the database. Please explain in detail, if possible with an example; I have not found any tutorial that explains the CakePHP session in detail.
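
    A minimal controller sketch of one way to do this, assuming CakePHP 1.2/1.3 with the built-in Session component; the People controller, Person model, and the 'pending_people' session key are illustrative names, not anything the question prescribes.

        class PeopleController extends AppController {
            var $components = array('Session');

            function add() {
                if (!empty($this->data)) {
                    // Accumulate submitted rows in the session instead of saving.
                    $pending = $this->Session->read('pending_people');
                    if (!$pending) {
                        $pending = array();
                    }
                    $pending[] = $this->data['Person'];
                    $this->Session->write('pending_people', $pending);
                    // Redirect back to the empty form; as a bonus this is the
                    // Post/Redirect/Get pattern, so a refresh won't repost.
                    $this->redirect(array('action' => 'add'));
                }
            }
        }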

    Read the article

  • How to retain a form file value in Struts when validation fails

    - by Najmi
    Hi all, I have a problem with retaining the form file value when validation fails. I have one form with a few fields to be filled in by the user and one attachment field. If one of the fields is blank, the system shows an error when the form is submitted. My problem is that all the other fields keep the values that were entered before, but the form file value disappears.

    Read the article

  • How to catch a key press on a form in C# .NET

    - by flavour404
    Hi, I have a parent form that contains a lot of controls. What I am trying to do is filter all of the key presses for that form. The trouble is that if the focus is on one of the controls on the form, the parent form does not receive the key press event. So how do I capture the key down event? Thanks, R.
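
    A minimal sketch of the standard WinForms answer: setting the form's KeyPreview property lets the form see key events before the focused child control does. The form name and the F1 check are illustrative.

        using System.Windows.Forms;

        public class MainForm : Form
        {
            public MainForm()
            {
                // Route key events to the form before any focused child control.
                this.KeyPreview = true;
                this.KeyDown += new KeyEventHandler(MainForm_KeyDown);
            }

            private void MainForm_KeyDown(object sender, KeyEventArgs e)
            {
                if (e.KeyCode == Keys.F1)
                {
                    // React to the key; mark it handled to stop further processing.
                    e.Handled = true;
                }
            }
        }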

    Read the article

  • Sending full form data to the back end using jQuery

    - by pradeep
    I have a form that is dynamically built, so the number of form elements is not fixed. I want to send the whole of the form's POST data to the back end for processing using jQuery. I can't use the jQuery Form plugin because the jQuery version I am using is old. Any other way? I tried it like this: $.post('all_include_files/update_save.php',{variable:"<?php echo json_encode($_POST) ?>"},function(data) { alert(data); }) but it did not work.
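
    A minimal sketch of the usual approach: $_POST only exists after a request reaches PHP, so the snippet above cannot see the user's input; on the client, .serialize() (in jQuery core since its earliest releases, so an old version is fine) collects whatever fields the form currently contains. The #myForm id is an assumed placeholder.

        // Serialize every current field of the dynamically built form and post it.
        var payload = $('#myForm').serialize();
        $.post('all_include_files/update_save.php', payload, function (data) {
            alert(data);
        });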

    Read the article

  • Problem posting multipart form data using Apache with mod_proxy to a mongrel instance

    - by Ryan E
    I am attempting to simulate my site's production environment as closely as I can on my local machine. This is a Rails site that uses Apache with mod_proxy to forward requests to a mongrel cluster. On my Mac OS X Leopard machine, I have the default install of Apache running and have configured a vhost to use mod_proxy to forward requests to a local mongrel instance running on port 3000:

        <Proxy balancer://mongrel_cluster-development>
            BalancerMember http://127.0.0.1:3000
        </Proxy>

    For the most part, this is working fine. I can browse my development site using the ServerName of the vhost I configured and can confirm that requests are being properly forwarded to the mongrel instance. However, there is a page on the site with a multipart form that is used to upload an image to the server. When I post this form, there is a delay of about 5 minutes and the browser ultimately returns "Bad Request: Your browser sent a request that this server could not understand." In the error log for my vhost:

        [Tue Sep 22 09:47:57 2009] [error] (70007)The timeout specified has expired: proxy: prefetch request body failed to 127.0.0.1:3000 (127.0.0.1) from ::1 ()

    This same form works fine if I browse directly to the mongrel instance (http://127.0.0.1:3000). Anybody have any idea what the problem might be and how to fix it? If there is any important information that I neglected to include, post a comment and I can add it to this question. Note: upon further investigation, this appears to be a problem specific to Safari; the form works fine in Firefox.
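
    The log line shows mod_proxy giving up while prefetching the request body, so one avenue worth trying while debugging why Safari's multipart body stalls (a sketch of a workaround, not a confirmed fix) is to raise the timeout on the balancer member:

        <Proxy balancer://mongrel_cluster-development>
            # Allow more time for the request body to arrive before the
            # proxy abandons the request.
            BalancerMember http://127.0.0.1:3000 timeout=300
        </Proxy>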

    Read the article

  • Files missing after copying from one SD card to another using Ubuntu 12.04

    - by Abhi Kandoi
    I have two Kingston SD cards, 8GB and 32GB: the 8GB for my HTC Sensation and the 32GB for my tablet (Chinese). I soon ran out of space on my phone and decided to swap the data between the SD cards. I copied the data from the 32GB card to my PC (with Ubuntu 12.04), deleted everything on the 32GB card, and copied the data from the 8GB card onto it. Finally, I copied the data from my PC to the 8GB card (after first deleting everything that was on it). Now the 8GB card holds exactly the data it should, but the 32GB card has data missing: all my photos (never copied or posted to Facebook) and songs are lost. Please help; what can I do to get my data back? I have Android Revolution HD (ICS 4.0.3) on my rooted HTC Sensation (India), and I suspect it has something to do with my data loss.

    Read the article

  • Custom SNMP Cacti Data Source fails to update

    - by Andrew Wilkinson
    I'm trying to create a custom SNMP data source for Cacti, but despite everything I can check being correct, it is not creating the RRD file, nor updating it even when I create it myself. Other, standard SNMP sources are working correctly, so it's not SNMP or permissions that are the problem. I've created a new Data Query, which, when I click "Verbose Query" on the device screen, returns the following:

        + Running data query [10].
        + Found type = '3' [SNMP Query].
        + Found data query XML file at '/volume1/web/cacti/resource/snmp_queries/syno_volume_stats.xml'
        + XML file parsed ok.
        + missing in XML file, 'Index Count Changed' emulated by counting oid_index entries
        + Executing SNMP walk for list of indexes @ '.1.3.6.1.2.1.25.2.3.1.3'
        Index Count: 8
        + Index found at OID: '.1.3.6.1.2.1.25.2.3.1.3.1' value: 'Physical memory'
        + Index found at OID: '.1.3.6.1.2.1.25.2.3.1.3.3' value: 'Virtual memory'
        + Index found at OID: '.1.3.6.1.2.1.25.2.3.1.3.6' value: 'Memory buffers'
        + Index found at OID: '.1.3.6.1.2.1.25.2.3.1.3.7' value: 'Cached memory'
        + Index found at OID: '.1.3.6.1.2.1.25.2.3.1.3.10' value: 'Swap space'
        + Index found at OID: '.1.3.6.1.2.1.25.2.3.1.3.31' value: '/'
        + Index found at OID: '.1.3.6.1.2.1.25.2.3.1.3.32' value: '/volume1'
        + Index found at OID: '.1.3.6.1.2.1.25.2.3.1.3.33' value: '/opt'
        + index_parse at OID: '.1.3.6.1.2.1.25.2.3.1.3.1' results: '1'
        + index_parse at OID: '.1.3.6.1.2.1.25.2.3.1.3.3' results: '3'
        + index_parse at OID: '.1.3.6.1.2.1.25.2.3.1.3.6' results: '6'
        + index_parse at OID: '.1.3.6.1.2.1.25.2.3.1.3.7' results: '7'
        + index_parse at OID: '.1.3.6.1.2.1.25.2.3.1.3.10' results: '10'
        + index_parse at OID: '.1.3.6.1.2.1.25.2.3.1.3.31' results: '31'
        + index_parse at OID: '.1.3.6.1.2.1.25.2.3.1.3.32' results: '32'
        + index_parse at OID: '.1.3.6.1.2.1.25.2.3.1.3.33' results: '33'
        + Located input field 'index' [walk]
        + Executing SNMP walk for data @ '.1.3.6.1.2.1.25.2.3.1.3'
        + Found item [index='Physical memory'] index: 1 [from value]
        + Found item [index='Virtual memory'] index: 3 [from value]
        + Found item [index='Memory buffers'] index: 6 [from value]
        + Found item [index='Cached memory'] index: 7 [from value]
        + Found item [index='Swap space'] index: 10 [from value]
        + Found item [index='/'] index: 31 [from value]
        + Found item [index='/volume1'] index: 32 [from value]
        + Found item [index='/opt'] index: 33 [from value]
        + Located input field 'volsizeunit' [walk]
        + Executing SNMP walk for data @ '.1.3.6.1.2.1.25.2.3.1.4'
        + Found item [volsizeunit='1024 Bytes'] index: 1 [from value]
        + Found item [volsizeunit='1024 Bytes'] index: 3 [from value]
        + Found item [volsizeunit='1024 Bytes'] index: 6 [from value]
        + Found item [volsizeunit='1024 Bytes'] index: 7 [from value]
        + Found item [volsizeunit='1024 Bytes'] index: 10 [from value]
        + Found item [volsizeunit='4096 Bytes'] index: 31 [from value]
        + Found item [volsizeunit='4096 Bytes'] index: 32 [from value]
        + Found item [volsizeunit='4096 Bytes'] index: 33 [from value]
        + Located input field 'volsize' [walk]
        + Executing SNMP walk for data @ '.1.3.6.1.2.1.25.2.3.1.5'
        + Found item [volsize='1034712'] index: 1 [from value]
        + Found item [volsize='3131792'] index: 3 [from value]
        + Found item [volsize='1034712'] index: 6 [from value]
        + Found item [volsize='775904'] index: 7 [from value]
        + Found item [volsize='2097080'] index: 10 [from value]
        + Found item [volsize='612766'] index: 31 [from value]
        + Found item [volsize='1439812394'] index: 32 [from value]
        + Found item [volsize='1439812394'] index: 33 [from value]
        + Located input field 'volused' [walk]
        + Executing SNMP walk for data @ '.1.3.6.1.2.1.25.2.3.1.6'
        + Found item [volused='1022520'] index: 1 [from value]
        + Found item [volused='1024096'] index: 3 [from value]
        + Found item [volused='32408'] index: 6 [from value]
        + Found item [volused='775904'] index: 7 [from value]
        + Found item [volused='1576'] index: 10 [from value]
        + Found item [volused='148070'] index: 31 [from value]
        + Found item [volused='682377865'] index: 32 [from value]
        + Found item [volused='682377865'] index: 33 [from value]

    As you can see, it appears to be returning the correct data. I've also set up data templates and graph templates to display the data. The "create graphs for a device" screen shows the correct data, and when I select one row and click create, a new data source and graph are created. Unfortunately, the data source is never updated. Increasing the poller log level suggests it is not even querying the data source, despite the source being in use. What should my next steps be to debug this issue?
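
    Two hedged diagnostic sketches that may help narrow this down (host, community string, and RRD file name are placeholders): first confirm the Cacti host itself can walk the data OID the query uses, then check whether the RRD file is receiving updates at all.

        # Can the Cacti host walk the 'volused' OID the data query uses?
        snmpwalk -v 2c -c public 192.168.1.50 .1.3.6.1.2.1.25.2.3.1.6

        # If the RRD file exists, when was it last written?
        rrdtool last /volume1/web/cacti/rra/<device>_volused_42.rrd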

    Read the article

  • Remote Desktop failed logon event 4625 not logging correctly on 2008 Terminal Services server

    - by Zone12
    When I use the new remote desktop with SSL and try to log on with bad credentials, it logs a 4625 event as expected. The problem is that it doesn't log the IP address, so I can't block malicious logons in our firewall. The event looks like this:

        <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
          <System>
            <Provider Name="Microsoft-Windows-Security-Auditing" Guid="{00000000-0000-0000-0000-000000000000}" />
            <EventID>4625</EventID>
            <Version>0</Version>
            <Level>0</Level>
            <Task>12544</Task>
            <Opcode>0</Opcode>
            <Keywords>0x8010000000000000</Keywords>
            <TimeCreated SystemTime="2012-04-13T06:52:36.499113600Z" />
            <EventRecordID>467553</EventRecordID>
            <Correlation />
            <Execution ProcessID="544" ThreadID="596" />
            <Channel>Security</Channel>
            <Computer>ontheinternet</Computer>
            <Security />
          </System>
          <EventData>
            <Data Name="SubjectUserSid">S-1-0-0</Data>
            <Data Name="SubjectUserName">-</Data>
            <Data Name="SubjectDomainName">-</Data>
            <Data Name="SubjectLogonId">0x0</Data>
            <Data Name="TargetUserSid">S-1-0-0</Data>
            <Data Name="TargetUserName">notauser</Data>
            <Data Name="TargetDomainName">MYSERVER-PC</Data>
            <Data Name="Status">0xc000006d</Data>
            <Data Name="FailureReason">%%2313</Data>
            <Data Name="SubStatus">0xc0000064</Data>
            <Data Name="LogonType">3</Data>
            <Data Name="LogonProcessName">NtLmSsp</Data>
            <Data Name="AuthenticationPackageName">NTLM</Data>
            <Data Name="WorkstationName">MYSERVER-PC</Data>
            <Data Name="TransmittedServices">-</Data>
            <Data Name="LmPackageName">-</Data>
            <Data Name="KeyLength">0</Data>
            <Data Name="ProcessId">0x0</Data>
            <Data Name="ProcessName">-</Data>
            <Data Name="IpAddress">-</Data>
            <Data Name="IpPort">-</Data>
          </EventData>
        </Event>

    It seems that because the logon type is 3, and not 10 as with old-style RDP sessions, the IP address and other information is not stored. The machine I am trying to connect from is on the internet and not on the same network as the server. Does anyone know where this information is stored (and what other events are generated by a failed logon)? Any help will be much appreciated.

    Read the article

  • Windows 7 explorer always crashes, opens small "Personalized Settings" window

    - by Ian Sellar
    My Windows 7 desktop PC, built by me, started acting very weird in the last couple of days. I use it quite often, about half of the time through TeamViewer. Explorer would crash and restart randomly, almost always through TeamViewer. This made me suspect that TeamViewer was the problem, but I have reproduced the crash with and without TeamViewer several times. The only way I can seem to keep the problem from occurring is by booting into Safe Mode. I have used CCleaner and Malwarebytes to make sure it wasn't a registry error or malware causing the problem, and I have tried the fix in the seemingly related issue here, as well as every other fix I have found online, including removing security updates KB980408 and KB2926765, running "sfc /scannow", and a bunch of other things I can't remember.

    More recently, when I try to start Explorer it pops up a small window that says "Personalized Settings" at the top but is completely empty and crashes instantly. The only way I can get it to disappear is to kill the explorer.exe process. I wish I could take a screenshot, but I can't seem to open Paint or even find its exe. I have tried restarting Explorer, and I have tried starting it while the Personalized Settings window was open.

    I have come up with two lists of processes. The first is the list of active processes when I boot into Safe Mode, where Explorer works fine. The second is the smallest set of processes under a normal boot with which I can still replicate the problem. There is one process that I can't seem to close: NisSrv.exe, described as "Microsoft Network Realtime Inspection Service". When I try to end the process it says "The operation could not be completed. Access is denied.", and I get the same message when I try to stop the related service.

        Image Name                PID   Session Name  Session#  Mem Usage
        ========================= ===== ============= ========= ==========
        System Idle Process       0     Services      0         24 K
        System                    4     Services      0         2,660 K
        smss.exe                  304   Services      0         1,196 K
        csrss.exe                 408   Services      0         4,156 K
        wininit.exe               444   Services      0         4,608 K
        csrss.exe                 452   Console       1         8,700 K
        services.exe              492   Services      0         7,700 K
        winlogon.exe              524   Console       1         5,756 K
        lsass.exe                 536   Services      0         10,644 K
        lsm.exe                   544   Services      0         4,316 K
        svchost.exe               652   Services      0         8,976 K
        MsMpEng.exe               804   Services      0         40,696 K
        explorer.exe              1332  Console       1         85,220 K
        ctfmon.exe                1376  Console       1         3,680 K
        dllhost.exe               1624  Console       1         8,656 K
        chrome.exe                1408  Console       1         98,504 K
        WmiPrvSE.exe              2352  Services      0         6,472 K
        chrome.exe                1744  Console       1         65,116 K
        taskmgr.exe               372   Console       1         14,948 K
        cmd.exe                   2776  Console       1         2,960 K
        conhost.exe               1816  Console       1         3,580 K
        tasklist.exe              2308  Console       1         5,868 K

    And the list of processes I have narrowed it down to:

        Image Name                PID   Session Name  Session#  Mem Usage
        ========================= ===== ============= ========= ==========
        System Idle Process       0     Services      0         24 K
        System                    4     Services      0         2,808 K
        smss.exe                  316   Services      0         1,216 K
        csrss.exe                 484   Services      0         4,532 K
        wininit.exe               596   Services      0         4,604 K
        csrss.exe                 604   Console       1         23,676 K
        services.exe              652   Services      0         11,344 K
        lsass.exe                 668   Services      0         12,692 K
        lsm.exe                   676   Services      0         4,464 K
        MsMpEng.exe               972   Services      0         68,436 K
        winlogon.exe              168   Console       1         7,784 K
        svchost.exe               496   Services      0         19,140 K
        NisSrv.exe                3176  Services      0         808 K
        svchost.exe               1684  Services      0         11,260 K
        taskmgr.exe               4524  Console       1         20,696 K
        cmd.exe                   4764  Console       1         7,224 K
        conhost.exe               4772  Console       1         6,916 K
        sublime_text.exe          2340  Console       1         45,012 K
        dllhost.exe               4476  Console       1         8,736 K
        tasklist.exe              3796  Console       1         5,768 K
        WmiPrvSE.exe              1768  Services      0         6,344 K

    Here is the event data XML from Event Viewer for the error I am getting:

        <EventData>
          <Data>explorer.exe</Data>
          <Data>6.1.7601.17567</Data>
          <Data>4d672ee4</Data>
          <Data>vrfcore.dll</Data>
          <Data>6.3.9600.16384</Data>
          <Data>5215f8f5</Data>
          <Data>80000003</Data>
          <Data>0000000000003a00</Data>
          <Data>12e4</Data>
          <Data>01cfb84fa70f89dc</Data>
          <Data>C:\Windows\system32\explorer.exe</Data>
          <Data>C:\Windows\SYSTEM32\vrfcore.dll</Data>
          <Data>e5957093-2442-11e4-9f8a-94de806ed9cb</Data>
        </EventData>

    Looking through the Event Viewer log again, I found this, possibly related:

        <EventData>
          <Data>runonce.exe</Data>
          <Data>6.1.7601.17514</Data>
          <Data>4ce7a253</Data>
          <Data>MSVCR100.dll</Data>
          <Data>10.0.40219.325</Data>
          <Data>4df2bcac</Data>
          <Data>c0000005</Data>
          <Data>000000000003c145</Data>
          <Data>670</Data>
          <Data>01cfb8dabbd85942</Data>
          <Data>C:\Windows\system32\runonce.exe</Data>
          <Data>C:\Windows\system32\MSVCR100.dll</Data>
          <Data>fa6f82b9-24cd-11e4-80a8-94de806ed9cb</Data>
        </EventData>

    And the general error details:

        Faulting application name: Explorer.EXE, version: 6.1.7601.17567, time stamp: 0x4d672ee4
        Faulting module name: vrfcore.dll, version: 6.3.9600.16384, time stamp: 0x5215f8f5
        Exception code: 0x80000003
        Fault offset: 0x0000000000003a00
        Faulting process id: 0xc38
        Faulting application start time: 0x01cfb84e5e852c5f
        Faulting application path: C:\Windows\Explorer.EXE
        Faulting module path: C:\Windows\SYSTEM32\vrfcore.dll
        Report Id: 9dc19e6d-2441-11e4-9f8a-94de806ed9cb

    Another, probably unrelated, error that I seem to get pretty often:

        Event filter with query "SELECT * FROM __InstanceModificationEvent WITHIN 60 WHERE TargetInstance ISA "Win32_Processor" AND TargetInstance.LoadPercentage > 99" could not be reactivated in namespace "//./root/CIMV2" because of error 0x80041003. Events cannot be delivered through this filter until the problem is corrected.

    My Explorer tab in Autoruns is seen below, along with the error I get when I try to uncheck something. I should add that I seem to be able to disable shell extensions with ShellExView, but I still can't get Explorer to start correctly. EXPLORER SHELL UPDATE - See screenshot below. I can access the Explorer right-click menu through a file manager I downloaded called NexusFile, but still no luck starting Explorer.

    Another round of errors that I am getting, regarding the Windows Search Service:

        The search service has detected corrupted data files in the index {id=4700}. The service will attempt to automatically correct this problem by rebuilding the index. Details: The content index catalog is corrupt. (HRESULT : 0xc0041801) (0xc0041801)

    followed by

        The Windows Search Service is being stopped because there is a problem with the indexer: The catalog is corrupt. Details: The content index catalog is corrupt. (HRESULT : 0xc0041801) (0xc0041801)

    and

        The plug-in in <Search.JetPropStore> cannot be initialized. Context: Windows Application, SystemIndex Catalog Details: The content index catalog is corrupt. (HRESULT : 0xc0041801) (0xc0041801)

    and

        The gatherer object cannot be initialized. Context: Windows Application, SystemIndex Catalog Details: The content index catalog is corrupt. (HRESULT : 0xc0041801) (0xc0041801)

    and

        The Windows Search Service cannot load the property store information. Context: Windows Application, SystemIndex Catalog Details: The content index database is corrupt. (HRESULT : 0xc0041800) (0xc0041800)

    WER log: http://pastebin.com/WXKGDT4Q

    I'll add information as I remember it or as people request it.
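
    One hedged observation based on the faulting module name (an assumption to verify, not a confirmed diagnosis): vrfcore.dll belongs to Microsoft Application Verifier, which hooks processes via Image File Execution Options, so explorer.exe may be crashing because it is still registered for verification. A quick check from an elevated command prompt:

        rem Any GlobalFlag/VerifierDlls values here mean verifier hooks are active:
        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\explorer.exe"

    If the key exists with verifier values, removing the registration in Application Verifier (appverif.exe) and rebooting would be a reasonable next step to try.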

    Read the article

  • Backup data rate on Raspberry Pi maxing out at 5 Mb/s. Why?

    - by bastibe
    I set up my Raspberry Pi as a Time Machine target, as documented here. At the moment, the Raspberry Pi is connected to my MacBook Pro with a direct Ethernet cable, and an external hard drive (a laptop drive) is connected to the Raspberry Pi over USB. However, backups are pretty slow: Activity Monitor shows the network transferring a very steady 5 Mb/s, whereas my Time Capsule transfers up to 8 Mb/s with a lot of fluctuation. The Raspberry Pi itself reports (via top) that its CPU is only half used, split about equally between afpd, usb-storage and jbd2/sda1-8, so the Pi's processing power does not seem to be the problem here. To me, this looks like some kind of bottleneck that maxes out at 5 Mb/s, potentially making my backups run below their possible speed. To the best of my knowledge, the culprit could be the AFP daemon, the USB bus, or the external hard drive. So, my question is: how can I identify the true culprit, and what can I do about it?
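
    A minimal sketch of how the candidate bottlenecks could be measured in isolation (paths, sizes, and the Pi's address are placeholders): test raw disk throughput with dd on the Pi, and raw network throughput with iperf between the two machines, then compare each figure with the observed 5 Mb/s.

        # Raw write speed of the USB disk, forcing data out of the page cache:
        dd if=/dev/zero of=/mnt/backup/testfile bs=1M count=256 conv=fsync

        # Raw network throughput: run the server on the Pi ...
        iperf -s
        # ... and the client on the MacBook, pointing at the Pi's address:
        iperf -c 192.168.2.2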

    Read the article

  • Excel: What formula combines this data into one COUNT amount?

    - by Mike
    I have 30 colleagues who are answering questions over 3 time periods. Each has their own Excel workbook with the questions, which they update over the year. I collate their worksheets into one master worksheet, but now I need to combine their answers into a simple table: the questions, the time periods, and then a COUNT of how many people answered each. For example, I need a table that shows me how many people (not the persons' names at this point) answered question 10 in time period 2. I can't use a database, before someone mentions it ;). Many thanks, Mike.
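
    A minimal sketch, assuming the collated master sheet has the question number in column A, the time period in column B, and the answer in column C (all layout assumptions, since the workbook isn't shown). With Excel 2007 or later, COUNTIFS counts the rows where question 10 was answered in period 2:

        =COUNTIFS(A:A, 10, B:B, 2, C:C, "<>")

    On Excel 2003 and earlier, an equivalent SUMPRODUCT over a bounded range does the same job:

        =SUMPRODUCT((A2:A1000=10)*(B2:B1000=2)*(C2:C1000<>""))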

    Read the article

  • How to store data on a machine whose power gets cut at random

    - by Sevas
    I have a virtual machine (Debian) running on a physical machine host. The virtual machine acts as a buffer for data that it frequently receives over the local network (the period for this data is 0.5s, so a fairly high throughput). Any data received is stored on the virtual machine and repeatedly forwarded to an external server over UDP. Once the external server acknowledges (over UDP) that it has received a data packet, the original data is deleted from the virtual machine and not sent to the external server again.

    The internet connection that connects the VM and the external server is unreliable, meaning it could be down for days at a time. The physical machine that hosts the VM gets its power cut several times per day at random. There is no way to tell when this is about to happen, and it is not possible to add a UPS, a battery, or a similar solution to the system.

    Originally, the data was stored on a file-based HSQLDB database on the virtual machine. However, the frequent power cuts eventually cause the database script file to become corrupted (not at the file system level, i.e. it is readable, but HSQLDB can't make sense of it), which leads to my question: How should data be stored in an environment where power cuts can and do happen frequently?

    One option I can think of is using flat files, saving each packet of data as a file on the file system. This way, if a file is corrupted due to loss of power, it can be ignored and the rest of the data remains intact. This poses a few issues, however, mainly related to the amount of data likely being stored on the virtual machine. At 0.5s between each piece of data, 1,728,000 files will be generated in 10 days. This at least means using a file system with an increased number of inodes to store this data (the current file system setup ran out of inodes at ~250,000 messages and 30% disk space used). Also, it is hard (not impossible) to manage.

    Are there any other options? Are there database engines that run on Debian that would not get corrupted by power cuts? Also, what file system should be used for this? ext3 is what is used at the moment. The software that runs on the virtual machine is written using Java 6, so hopefully the solution would not be incompatible.
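
    A minimal sketch of the flat-file option under discussion, in the Java 6 the question mentions: each packet is written to a temporary file, forced to disk with an fsync, and then atomically renamed into place, so a power cut leaves either the old state or the new one rather than a torn file. The class, directory, and file-naming scheme are placeholders.

        import java.io.File;
        import java.io.FileOutputStream;
        import java.io.IOException;

        public class PacketStore {
            private final File dir;

            public PacketStore(File dir) {
                this.dir = dir;
            }

            public void store(String name, byte[] packet) throws IOException {
                File tmp = new File(dir, name + ".tmp");
                FileOutputStream out = new FileOutputStream(tmp);
                try {
                    out.write(packet);
                    // Force the bytes to the device before the rename.
                    out.getFD().sync();
                } finally {
                    out.close();
                }
                File target = new File(dir, name);
                // On POSIX file systems the rename is atomic: recovery code can
                // ignore any leftover .tmp files and trust every completed file.
                if (!tmp.renameTo(target)) {
                    throw new IOException("rename failed for " + target);
                }
            }
        }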

    Read the article

  • S#arp Architecture mapping many-to-many and ADO.NET Data Services: A single resource was expected for the result

    - by Leg10n
    Hi, I'm developing an application that reads data from a SQL Server database (migrated from a legacy DB) with NHibernate and S#arp Architecture through ADO.NET Data Services. I'm trying to map a many-to-many relationship. I have an Error class:

        public class Error
        {
            public virtual int ERROR_ID { get; set; }
            public virtual string ERROR_CODE { get; set; }
            public virtual string DESCRIPTION { get; set; }
            public virtual IList<ErrorGroup> GROUPS { get; protected set; }
        }

    And then I have the error group class:

        public class ErrorGroup
        {
            public virtual int ERROR_GROUP_ID { get; set; }
            public virtual string ERROR_GROUP_NAME { get; set; }
            public virtual string DESCRIPTION { get; set; }
            public virtual IList<Error> ERRORS { get; protected set; }
        }

    And the overrides:

        public class ErrorGroupOverride : IAutoMappingOverride<ErrorGroup>
        {
            public void Override(AutoMapping<ErrorGroup> mapping)
            {
                mapping.Table("ERROR_GROUP");
                mapping.Id(x => x.ERROR_GROUP_ID, "ERROR_GROUP_ID");
                mapping.IgnoreProperty(x => x.Id);
                mapping.HasManyToMany<Error>(x => x.Error)
                    .Table("ERROR_GROUP_LINK")
                    .ParentKeyColumn("ERROR_GROUP_ID")
                    .ChildKeyColumn("ERROR_ID").Inverse().AsBag();
            }
        }

        public class ErrorOverride : IAutoMappingOverride<Error>
        {
            public void Override(AutoMapping<Error> mapping)
            {
                mapping.Table("ERROR");
                mapping.Id(x => x.ERROR_ID, "ERROR_ID");
                mapping.IgnoreProperty(x => x.Id);
                mapping.HasManyToMany<ErrorGroup>(x => x.GROUPS)
                    .Table("ERROR_GROUP_LINK")
                    .ParentKeyColumn("ERROR_ID")
                    .ChildKeyColumn("ERROR_GROUP_ID").AsBag();
            }
        }

    When I view the data service in the browser, like http://localhost:1905/DataService.svc/Errors, it shows the list of errors with no problems, and using it like http://localhost:1905/DataService.svc/Errors(123) works too.

    The problem: when I want to see the errors in a group or the groups for an error, like "http://localhost:1905/DataService.svc/Errors(123)?$expand=GROUPS", I get the XML document, but the browser says:

        The XML page cannot be displayed
        Cannot view XML input using XSL style sheet. Please correct the error and then click the Refresh button, or try again later.
        Only one top level element is allowed in an XML document. Error processing resource 'http://localhost:1905/DataServic...
        <error xmlns="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata"> -^

    I view the source code, and I get the data. However, it comes with an exception:

        <error xmlns="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata">
          <code></code>
          <message xml:lang="en-US">An error occurred while processing this request.</message>
          <innererror xmlns="xmlns">
            <message>A single resource was expected for the result, but multiple resources were found.</message>
            <type>System.InvalidOperationException</type>
            <stacktrace>
              at System.Data.Services.Serializers.Serializer.WriteRequest(IEnumerator queryResults, Boolean hasMoved)
              at System.Data.Services.ResponseBodyWriter.Write(Stream stream)
            </stacktrace>
          </innererror>
        </error>

    Am I missing something? Where does this error come from?
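
    One hedged thing worth checking (an assumption, not a confirmed fix): with a reflection-based provider, ADO.NET Data Services infers each entity's key by convention, and a mis-inferred key can make the serializer treat several rows of an $expand as one resource. The key can be declared explicitly with DataServiceKeyAttribute from System.Data.Services.Common, sketched here on the classes above:

        using System.Data.Services.Common;

        [DataServiceKey("ERROR_ID")]
        public class Error
        {
            // ... properties as above ...
        }

        [DataServiceKey("ERROR_GROUP_ID")]
        public class ErrorGroup
        {
            // ... properties as above ...
        }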

    Read the article

  • Data Warehouse ETL slow - change primary key in dimension?

    - by Jubbles
    I have a working MySQL data warehouse that is organized as a star schema, and I am using Talend Open Studio for Data Integration 5.1 to create the ETL process. I would like this process to run once per day. I have estimated that one of the dimension tables (dimUser) will have approximately 2 million records and 23 columns. I created a small test ETL process in Talend that worked, but given the amount of data that may need to be updated daily, the current performance will not cut it: it takes the ETL process four minutes to UPDATE or INSERT 100 records into dimUser. Assuming a linear relationship between the record count and the time to UPDATE or INSERT, there is no way the ETL can finish in 3-4 hours (my hope), let alone one day. Since I'm unfamiliar with Java, I wrote the ETL as a Python script and ran into the same problem, although I did discover that if I did only INSERTs, the process went much faster. I am pretty sure the bottleneck is caused by the UPDATE statements.

    The primary key in dimUser is an auto-increment integer. My friend suggested that I scrap this primary key and replace it with a multi-field primary key (in my case, 2-3 fields). Before I rip the test data out of my warehouse and change the schema, can anyone provide suggestions or guidelines related to:

    - the design of the data warehouse
    - the ETL process
    - how realistic it is to have an ETL process INSERT or UPDATE a few million records each day
    - whether my friend's suggestion will significantly help

    If you need any further information, just let me know and I'll post it.

    UPDATE - additional information:

        mysql> describe dimUser;
        Field        Type                 Null  Key  Default              Extra
        user_key     int(10) unsigned     NO    PRI  NULL                 auto_increment
        id_A         int(10) unsigned     NO         NULL
        id_B         int(10) unsigned     NO         NULL
        field_4      tinyint(4) unsigned  NO         0
        field_5      varchar(50)          YES        NULL
        city         varchar(50)          YES        NULL
        state        varchar(2)           YES        NULL
        country      varchar(50)          YES        NULL
        zip_code     varchar(10)          NO         99999
        field_10     tinyint(1)           NO         0
        field_11     tinyint(1)           NO         0
        field_12     tinyint(1)           NO         0
        field_13     tinyint(1)           NO         1
        field_14     tinyint(1)           NO         0
        field_15     tinyint(1)           NO         0
        field_16     tinyint(1)           NO         0
        field_17     tinyint(1)           NO         1
        field_18     tinyint(1)           NO         0
        field_19     tinyint(1)           NO         0
        field_20     tinyint(1)           NO         0
        create_date  datetime             NO         2012-01-01 00:00:00
        last_update  datetime             NO         2012-01-01 00:00:00
        run_id       int(10) unsigned     NO         999

    I used a surrogate key because I had read that it was good practice. From a business perspective, I want to stay aware of potential fraudulent activity (say, for 200 days a user is associated with state X and then the next day with state Y; they could have moved, or their account could have been compromised), which is why the geographic data is kept. The field id_B may have a few distinct values of id_A associated with it, but I am interested in knowing distinct (id_A, id_B) tuples. In the context of this information, my friend suggested that something like (id_A, id_B, zip_code) be the primary key. For the large majority of daily ETL processes (80%), I only expect the following fields to be updated for existing records: field_10 - field_14, last_update, and run_id (this field is a foreign key to my etlLog table and is used for ETL auditing purposes).
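
    A sketch of one common way to attack the UPDATE bottleneck (a suggestion under stated assumptions, not the only possible design): bulk-load the day's changed rows into a staging table, put a unique index over the natural key, here the (id_A, id_B) tuple from the question, and fold the whole batch into the dimension with a single set-based statement. The stage_dimUser table name and the run_id value are placeholders.

        -- One-time: a unique index over the natural key keeps the surrogate key
        -- while letting MySQL match existing rows in bulk.
        ALTER TABLE dimUser ADD UNIQUE KEY uq_dimUser_natural (id_A, id_B);

        -- Per run: insert new users, update the changed fields of existing ones.
        INSERT INTO dimUser (id_A, id_B, field_10, field_11, field_12,
                             field_13, field_14, last_update, run_id)
        SELECT id_A, id_B, field_10, field_11, field_12,
               field_13, field_14, NOW(), 1000
        FROM stage_dimUser
        ON DUPLICATE KEY UPDATE
            field_10    = VALUES(field_10),
            field_11    = VALUES(field_11),
            field_12    = VALUES(field_12),
            field_13    = VALUES(field_13),
            field_14    = VALUES(field_14),
            last_update = VALUES(last_update),
            run_id      = VALUES(run_id);

    This replaces millions of single-row UPDATE round trips with one statement, which is usually where the time goes.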

    Read the article

  • Change Data Capture Webinar

    I am going to be doing a webinar with our friends at Attunity on Change Data Capture. Attunity have a good story around this technology, and you can use it in your SSIS loads to great effect.

    Join Attunity and Konesans/SQLIS for a webinar on 17 September. Space is limited; reserve your webinar seat now at: https://www1.gotomeeting.com/register/693735512

    Want increased efficiency and real-time speed when conducting ETL loads? Need lower implementation costs while minimizing system impact? Learn how change data capture (CDC) technologies can reduce ETL load times. Allan Mitchell, Principal Consultant at Konesans and SQL Server MVP specialising in ETL, will explain CDC concepts and benefits and how CDC can dramatically reduce ETL load times. Ian Archibald, Pre-Sales Director EMEA for Attunity, will present and demonstrate Attunity's award-winning Oracle-CDC for SSIS, a fully-integrated SSIS solution for designing, deploying and managing Oracle CDC processes.

    Title: Change Data Capture - Reducing ETL Load Times
    Date: Thursday, September 17, 2009
    Time: 10:00 AM - 11:00 AM BST

    ABOUT THE SPEAKERS: Allan Mitchell is the joint owner of Konesans Ltd, a UK-based consultancy specializing in SQL Server, and most importantly SQL Server Integration Services. Having worked with SQL Server from 6.5 onwards, he has extensive experience in many aspects of SQL Server, but now focuses on the BI suite of tools. He is a SQL Server MVP, a frequent poster on the MS SSIS/DTS newsgroups, and runs the sqldts.com and sqlis.com resource sites. Ian Archibald, Attunity Pre-Sales Director EMEA, has worked in Attunity's UK office for 17 years. An expert in Attunity solutions, Ian has extensive knowledge of Attunity's products and data integration & CDC technologies.

    After registering, you will receive a confirmation email containing information about joining the webinar. System requirements - PC-based attendees: Windows 2000, XP Home, XP Pro, 2003 Server, or Vista. Macintosh-based attendees: Mac OS X 10.4 (Tiger) or newer.

    Read the article

  • Do you need all that data?

    - by BuckWoody
    I read an amazing post over on Ars Technica (link: http://arstechnica.com/science/news/2010/03/the-software-brains-behind-the-particle-colliders.ars?utm_source=rss&utm_medium=rss&utm_campaign=rss) about the LHC, or as they are also known, the "particle colliders". Beyond the pure scientific geek awesomeness, these instruments have the potential to collect more data than you can (or possibly should) store. Actually, this problem has a lot in common with a BI system. There's so much granular detail available in the source systems that a designer has to decide how, and how much, to roll up the data. Whenever you do that, you lose fidelity, but in many cases that's OK. Take, for example, your car's speedometer. You don't actually need to track each and every point of speed as it happens; you only need to know that you're hovering around the speed limit at a certain point in time. Since this is the way that humans perceive data, is there some lesson we should take in the design of data "flows" - and what implications does this have for new technologies like StreamInsight?

    Read the article

  • SQL SERVER – Standards Support, Protocol, Data Portability – 3 Important SQL Server Documentation Downloads

    - by pinaldave
    I have been working with SQL Server continuously for more than 8 years now, and I like to read a lot. Sometimes I read easy things, and sometimes I read material that is not so easy. Here are a few recently released documents that I have referred to and read. They are not easy reads, but they are very important ones if you like reading about more advanced topics.

    SQL Server Standards Support Documentation: The SQL Server standards support documentation provides detailed support information for certain standards that are implemented in Microsoft SQL Server.

    Microsoft SQL Server Protocol Documentation: The Microsoft SQL Server protocol documentation provides technical specifications for Microsoft proprietary protocols that are implemented and used in Microsoft SQL Server 2008.

    Microsoft SQL Server Data Portability Documentation: The SQL Server data portability documentation explains various mechanisms by which user-created data in SQL Server can be extracted for use in other software products. These mechanisms include import/export functionality, documented APIs, industry-standard formats, and documented data structures/file formats.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article
