Search Results

Search found 59060 results on 2363 pages for 'dummy data'.

  • Transferring Microsoft CRM data

    - by notCRMguru
    My boss has asked me to transfer data from the current Microsoft CRM 4.0 server to a new one. I myself haven't used CRM at all. I've done some research and come across various ways to import data from different sources; these methods include using CSV files and Data Maps. This seems very cumbersome and unnecessary, since the data is already in a CRM. Would someone please direct me to some guides for full or partial data transfer from the current CRM to a new one? Thanks

  • Blueimp file upload: send data to the server on .fileupload

    - by MyName
    OK, I've Googled, Googled and Googled again to no avail on how I can send data to the server, like an ajax call's data option, for the file upload:

        $('#file_upload').fileupload({
            dataType: 'json',
            url: "@(Url.Action("UploadFiles", "ExcelUpload"))",
            // formData: function (form) { return [{ name: "dataTable", value: "@(Model)" }]; },
            progressall: function (e, data) {
                $(this).find('.progressbar').progressbar({
                    value: parseInt(data.loaded / data.total * 100, 10)
                });
            },
            done: function (e, data) { BadFile(e, data); }
        });

    The controller would look something like this:

        [HttpPost]
        public ContentResult UploadFiles(MyType param1, MyType param2) { .. }

    I want to do something similar to this on the fileupload callback:

        $.ajax({
            url: "@(Url.Action("Action", "Controller"))",
            type: "post",
            data: { param1: value, param2: @(Model) }
        });

    Is this possible? How can I pass values to the server side? Should I switch to a different uploader? Please help me out; I need to resolve this as soon as possible.
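
    One hedged pointer: if this is blueimp's jQuery-File-Upload, it already has a formData option (an object, an array, or a callback), and those values are appended to the multipart POST as ordinary form fields, so the MVC action can bind them as extra parameters. A minimal sketch, with illustrative names and values:

        $('#file_upload').fileupload({
            dataType: 'json',
            url: '@(Url.Action("UploadFiles", "ExcelUpload"))',
            // Appended to the upload request as regular form fields:
            formData: [{ name: 'param1', value: 'someValue' },
                       { name: 'param2', value: '@(Model.Id)' }],
            done: function (e, data) { BadFile(e, data); }
        });

    On the server they then arrive like any other POST fields, e.g. public ContentResult UploadFiles(string param1, string param2).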

  • How do I improve the efficiency of the queries executed by this generic Linq-to-SQL data access class

    - by Lee D
    Hi all, I have a class which provides generic access to LINQ to SQL entities, for example:

        class LinqProvider<T> // where T is a L2S entity class
        {
            DataContext context;

            public virtual IEnumerable<T> GetAll()
            {
                return context.GetTable<T>();
            }

            public virtual T Single(Func<T, bool> condition)
            {
                return context.GetTable<T>().SingleOrDefault(condition);
            }
        }

    From the front end, both of these methods appear to work as you would expect. However, when I run a trace in SQL Profiler, the Single method is executing what amounts to a SELECT * FROM [Table], and then returning the single entity that meets the given condition. Obviously this is inefficient, and it is being caused by GetTable() returning all rows. My question is, how do I get the query executed by the Single() method to take the form SELECT * FROM [Table] WHERE [condition], rather than selecting all rows and then filtering out all but one? Is it possible in this context? Any help appreciated, Lee
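
    For what it's worth, the cause is the parameter type: a Func<T, bool> can only run as a compiled delegate over in-memory objects, so the table is fetched and filtered client-side. A minimal sketch of the usual fix, taking an expression tree instead so LINQ to SQL can translate the condition into the WHERE clause:

        using System;
        using System.Collections.Generic;
        using System.Data.Linq;
        using System.Linq;
        using System.Linq.Expressions;

        class LinqProvider<T> where T : class // L2S entities are reference types
        {
            private readonly DataContext context;

            public LinqProvider(DataContext context) { this.context = context; }

            public virtual IEnumerable<T> GetAll()
            {
                return context.GetTable<T>();
            }

            // Expression<Func<T, bool>> is translated to SQL;
            // a plain Func<T, bool> would filter in memory.
            public virtual T Single(Expression<Func<T, bool>> condition)
            {
                return context.GetTable<T>().SingleOrDefault(condition);
            }
        }

    Callers that pass a lambda literal, e.g. provider.Single(x => x.Id == 5), compile unchanged.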

  • RijndaelManaged Padding when data matches block size

    - by trampster
    If I use PKCS7 padding in RijndaelManaged with 16 bytes of data, then I get 32 bytes of data output. It appears that for PKCS7, when the data size matches the block size, it adds a whole extra block of padding. If I use Zeros padding for 16 bytes of data, I get out 16 bytes of data; so for Zeros padding, if the data matches the block size then it doesn't pad. I have searched through the documentation and it says nothing about this difference in padding behavior. Can someone please point me to some kind of documentation which specifies what the padding behavior should be for the different padding modes when the data size matches the block size?
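
    For reference, this is by design: PKCS #7 padding (RFC 5652, section 6.3) always adds between 1 and blockSize bytes, so input that is already a multiple of the block size gains a whole padding block — that is what makes the padding unambiguously removable on decryption, which zero padding is not. A small sketch that reproduces the observation:

        using System;
        using System.Security.Cryptography;

        class PaddingDemo
        {
            static int EncryptedLength(PaddingMode padding, byte[] data)
            {
                using (var aes = new RijndaelManaged { Padding = padding })
                using (var enc = aes.CreateEncryptor())
                {
                    return enc.TransformFinalBlock(data, 0, data.Length).Length;
                }
            }

            static void Main()
            {
                var oneBlock = new byte[16]; // exactly one 128-bit block
                Console.WriteLine(EncryptedLength(PaddingMode.PKCS7, oneBlock)); // 32
                Console.WriteLine(EncryptedLength(PaddingMode.Zeros, oneBlock)); // 16
            }
        }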

  • Caching Authentication Data

    - by PartlyCloudy
    Hi, I'm currently implementing a REST web service using CouchDB and RESTlet. The RESTlet layer is mainly for authentication and some minor filtering of the JSON data served by CouchDB:

        Clients <=HTTP=> [ RESTlet <=HTTP=> CouchDB ]

    I'm also using CouchDB to store user login data, because I don't want to add an additional database server for that purpose. Thus, each request to my service causes two CouchDB requests conducted by RESTlet (auth data + the "real" request). In order to keep the service as efficient as possible, I want to reduce the number of requests, in this case the redundant requests for login data. My idea now is to provide a cache (i.e. an LRU cache via LinkedHashMap) within my RESTlet application that caches login data, because HTTP caching will probably not be enough. But how do I invalidate the cached data once a user changes a password, for instance? Thanks to REST, the application might run on several servers in parallel, and I don't want to create a central instance just to cache login data. Currently, I save requested auth data in the cache and try to authenticate new requests against it. If authentication fails or no entry is available, I dispatch a GET request to my CouchDB storage in order to obtain the actual auth data. So in the worst case, users that have changed their data will perhaps still be able to log in with their old credentials. How can I deal with that? Or what is a good strategy to keep the cache(s) up to date in general? Thanks in advance.
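
    A minimal sketch of the LRU idea the question mentions, via LinkedHashMap in access order (class name and sizes are illustrative):

        import java.util.LinkedHashMap;
        import java.util.Map;

        // Evicts the least-recently-used entry once maxEntries is exceeded.
        public class AuthCache<K, V> extends LinkedHashMap<K, V> {
            private final int maxEntries;

            public AuthCache(int maxEntries) {
                super(16, 0.75f, true); // accessOrder = true -> LRU order
                this.maxEntries = maxEntries;
            }

            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxEntries;
            }
        }

    On invalidation, two common mitigations, hedged: give each entry a short TTL so a changed password is honoured within a bounded window, and treat any authentication failure as a cache miss that falls through to CouchDB — the fall-through already handles the common "user switches to the new password" case, and the TTL bounds the "old password still works" case.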

  • AIR: sync GUI with database?

    - by John Isaacks
    I am going to be building an AIR application that shows a list (about 1-25 rows of data) from a database. The database is on the web. I want the list to be as accurate as possible, meaning that as soon as the database data changes, the list displayed in the app should update as soon as possible. I do not know of any way for the AIR application to be notified when there is a change, so I am thinking I will have to poll the database at certain intervals to keep an up-to-date list. So my questions are: first, is there any way to NOT have to keep checking the database? And if I do have to keep checking the database, what is a reasonable interval to do that at? Thanks.
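
    Short of a push channel (an AIR app can hold a persistent Socket, or use BlazeDS/long polling, but both need server-side support), polling is the usual fallback; for a 25-row list, something in the 15-60 second range is a common starting point. A hedged ActionScript 3 sketch, where listService stands in for whatever HTTPService/URLLoader already fetches the rows:

        import flash.utils.Timer;
        import flash.events.TimerEvent;

        // Poll every 30 seconds; tune the interval to how fresh the list must be.
        var pollTimer:Timer = new Timer(30000);
        pollTimer.addEventListener(TimerEvent.TIMER, function (e:TimerEvent):void {
            listService.send(); // re-request the current rows (illustrative name)
        });
        pollTimer.start();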

  • changing file ownership in ubuntu

    - by Rahul Mehta
    Hi, this is my ls -al output. The zfapi folder is owned by root; how can I change this to www-data? Also, please advise: what are the first and second "root" columns? Thanks

        drwxr-xr-x 4 www-data www-data  4096 2011-01-06 18:21 cdnapi
        -rw-r--r-- 1 www-data www-data   678 2010-08-30 12:02 config.js
        drwxr-xr-x 4 www-data www-data  4096 2010-11-23 15:55 css
        drwxr-xr-x 7 www-data www-data  4096 2010-11-17 13:12 images
        -rw-r--r-- 1 www-data www-data 25064 2010-12-17 18:26 index.html
        -rw-r--r-- 1 www-data www-data 19830 2010-12-18 11:24 init.js
        drwxr-xr-x 2 www-data www-data  4096 2010-12-02 12:34 lib
        -rw-r--r-- 1 www-data www-data 18758 2010-12-06 18:00 styles.css
        -rw-r--r-- 1 www-data www-data  1081 2010-10-21 17:56 testbganim.html
        drwxr-xr-x 2 www-data www-data  4096 2010-12-17 11:15 yapi
        drwxr-xr-x 7 root     root      4096 2011-01-07 18:20 zfapi
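
    For reference, the two repeated name columns in ls -l output are the owning user and the owning group. Assuming the listing was taken inside the directory that contains zfapi, a recursive chown is the usual fix:

        # Hand the zfapi tree to the www-data user and group
        sudo chown -R www-data:www-data zfapi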

  • HLSL: How can one pass data between shaders / read the existing colour value?

    - by RJFalconer
    Hello all, I have 2 HLSL ps_2_0 shaders. Simplified, they are:

    Shader 1:
    - Reads a texture
    - Outputs a colour value based on this texture

    Shader 2:
    - Needs to read in the existing colour (or have it passed in / read from a register)
    - Outputs the final colour, which is a function of the previous colour

    (They need to be different shaders, as I've reached the maximum vertex shader outputs for one shader.) My problem is that I cannot work out how Shader 2 can access the existing fragment/pixel colour. Is the only way for shaders to interact really just the alpha-blending options? These aren't sufficient if I want to use the colour as input to my function.
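
    For what it's worth, ps_2_0 pixel shaders cannot read the current framebuffer colour, so short of the fixed-function blend states, the usual workaround is render-to-texture: draw the first pass into an off-screen texture, then bind that texture as an input to the second pass. A minimal sketch of the second pass (the combining line is illustrative):

        // The first pass was rendered into sceneTex via render-to-texture.
        sampler2D sceneTex : register(s0);

        float4 ps_main(float2 uv : TEXCOORD0) : COLOR0
        {
            float4 previous = tex2D(sceneTex, uv);  // the "existing" colour
            return previous * 0.5f;                 // any function of it goes here
        }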

  • Use queried JSON data in a function

    - by SztupY
    I have code similar to this:

        $.ajax({
            success: function (data) {
                var text = '';
                for (var i = 0; i < data.length; i++) {
                    text = text + '<a href="#" id="Data_' + i + '">' + data[i].Name + "</a><br />";
                }
                $("#SomeId").html(text);
                for (var i = 0; i < data.length; i++) {
                    $("#Data_" + i).click(function () {
                        alert(data[i]);
                        RunFunction(data[i]);
                        return false;
                    });
                }
            }
        });

    This gets an array of data in JSON format, then iterates through the array generating a link for each entry. Now I want to add a handler to each link that will run a function that does something with this data. The problem is that the data seems to be unavailable after the ajax success function is called (although I thought the handlers behave like closures). What is the best way to use the queried JSON data later on? (I think setting it as a global variable would do the job, but I want to avoid that, mainly because this ajax request might be called multiple times.) Thanks.
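
    For what it's worth, data is captured by the closure just fine; the usual culprit in this pattern is the loop variable i, which all of the click handlers share, so by the time any handler runs, i equals data.length and data[i] is undefined. A minimal sketch of one fix, binding each handler inside an immediately-invoked function so each gets its own copy of the index:

        for (var i = 0; i < data.length; i++) {
            (function (j) {
                $("#Data_" + j).click(function () {
                    RunFunction(data[j]); // data is still in scope via the closure
                    return false;
                });
            })(i);
        }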

  • How do I display data in a table and allow users to copy selected data?

    - by cfouche
    Hi, I have a long list of data that I want to display in table format to users. The data changes when the user performs certain actions in my app, but it is not directly editable. So the user can create a reasonably big table of data, but he can't change individual cells' values. However, I do want the data to be copy-able: it should be possible for the user to select some or all of the cells, press Ctrl+C to copy the data to the clipboard, and then Ctrl+V to paste the data into an external text editor. At the moment I'm displaying the data in a ListView with a GridView, and this works perfectly, except that GridView doesn't allow one to copy data. What other options can I try? Ours is a WPF app, coded in C#.
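
    One option, hedged: if .NET 4 (or the WPF Toolkit backport) is available, the WPF DataGrid supports selection plus Ctrl+C out of the box via its ClipboardCopyMode property, while IsReadOnly keeps the cells non-editable. A minimal sketch, with an illustrative binding:

        <!-- Read-only grid: selectable and copyable, but not editable -->
        <DataGrid ItemsSource="{Binding Rows}"
                  IsReadOnly="True"
                  SelectionUnit="CellOrRowHeader"
                  ClipboardCopyMode="IncludeHeader" />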

  • How to use a data type (table) defined in another database in SQL2k8?

    - by Victor Rodrigues
    I have a table type defined in a database. It is used as a table-valued parameter in a stored procedure. I would like to call this procedure from another database, and in order to pass the parameter, I need to reference this defined type. But when I do DECLARE @table dbOtherDatabase.dbo.TypeName, it tells me that "The type name 'dbOtherDatabase.dbo.TypeName' contains more than the maximum number of prefixes. The maximum is 1." How can I reference this table type?
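
    For reference, table types are database-scoped, which is exactly what the "maximum number of prefixes" error is saying: they can be named with at most a two-part name, in the current database. One hedged workaround is to run the declaration and the call in the other database's context through its copy of sp_executesql, so the two-part type name resolves there (procedure and column names below are illustrative):

        -- Executes in dbOtherDatabase's context, where dbo.TypeName exists
        EXEC dbOtherDatabase.sys.sp_executesql N'
            DECLARE @table dbo.TypeName;
            INSERT INTO @table (Id, Name) VALUES (1, N''example'');
            EXEC dbo.MyProcedure @table;';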

  • After adding data files to a file group, is there a way to distribute the data into the new files?

    - by Blootac
    I have a database with a single file group, and that file group contained a single data file. I've added 7 data files to this file group. Is there a way to rebalance the data over the 8 data files, other than by telling SQL Server to empty the original? If that is the only way, is it possible to allow SQL Server to start writing to this file again? MSDN says that once it's empty, it's marked so that no new data will be written to it. What I'm aiming for is 8 equally balanced data files. I'm running SQL Server 2005 Standard Edition. Thanks
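
    A commonly suggested alternative to emptying the original file: once the new files exist, rebuild the clustered indexes; the rebuild re-allocates the pages, and SQL Server's proportional-fill algorithm spreads them across all files in the filegroup that have free space. A hedged sketch (table and logical file names are illustrative):

        -- Redistribute a table's pages across all files in its filegroup
        ALTER INDEX ALL ON dbo.MyBigTable REBUILD;

        -- Or drain the original file explicitly (this marks it so that
        -- no new allocations go to it, which is the MSDN caveat above):
        DBCC SHRINKFILE (N'MyDb_Data', EMPTYFILE);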

  • CSC folder data access AND roaming profiles issues (Vista with Server 2003, then 2008)

    - by Alex Jones
    I'm a junior sysadmin for an IT contractor that helps small, local government agencies, like little towns and the like. One of our clients, a public library with ~50 staff users, was recently migrated from Server 2003 Standard to Server 2008 R2 Standard in a very short timeframe; our senior employee, the only network engineer, had suddenly put in his two weeks' notice, so management pushed him to do this project before quitting. A bit hasty on management's part? Perhaps. Could we do anything about that? Nope. Do I have to fix this all by myself? Pretty much.

    The network is set up like this:

    a) 50-ish staff workstations, all running Vista Business SP2. All staff use MS Outlook, which uses RPC-over-HTTPS ("Outlook Anywhere") for cached Exchange access to an offsite location.
    b) One new (virtualized) Server 2008 R2 Standard instance, running atop a Server 2008 R2 host via Hyper-V. The VM is the domain's DC, and also the site's one and only file server. Let's call that VM "NEWBOX".
    c) One old physical Server 2003 Standard server, running the same roles. Let's call it "OLDBOX". It's still on the network and accessible, but it's been demoted, and its shares have been disabled. No data has been deleted.
    d) Gigabit Ethernet everywhere. The organization has only one domain, and it did not change during the migration.
    e) Most users were set up for a combo of redirected folders + offline files, but some older employees who had been with the organization a long time are still on roaming profiles.

    To sum up: the servers in question handle user accounts and files, nothing else (e.g. no TS, no mail, no IIS, etc.). I have two major problems I'm hoping you can help me with:

    1) Even though all domain users have had their redirected folders moved to the new server, and logging in to their workstations and testing confirms that the Documents/Music/Whatever folders point to the new paths, it appears some users (not laptops or anything either!) had been working offline from OLDBOX for a long time, and nobody realized it. Here's the ugly implication: a bunch of their data now lives only in their CSC folders, because they can't access the share on OLDBOX and finally sync with it. How do I get this data out of those CSC folders, and onto NEWBOX?

    2) What's the best way to migrate roaming-profile users to non-roaming ones, without losing vital data like documents, any lingering PSTs, etc.?

    Things I've thought about trying, for problem 1:

    a) Re-enable the documents share on OLDBOX, force an Offline Files sync for ALL domain users, then copy OLDBOX's share's data to the equivalent share on NEWBOX. Reinitialize the Offline Files cache for every user. With this: How do I safely force a domain-wide Offline Files sync? Could I lose data by re-enabling the share on OLDBOX and forcing the sync? Afterwards, how can I reinitialize the Offline Files cache for every user, without doing it manually, workstation by workstation?

    b) Determine which users have unsynced changes to OLDBOX (again, how?), search each user's CSC folder domain-wide via workstation admin shares, and grab the unsynced data. Reinitialize the Offline Files cache for every user. With this: How can I detect which users have unsynced changes with a script? How can I search each user's CSC folder, when the ownership and permissions set for CSC folders are so restrictive? Again, afterwards, how can I reinitialize the Offline Files cache for every user, without doing it manually, workstation by workstation?

    c) Manually visit each workstation, copy the contents of the CSC folder, and manually copy that data onto NEWBOX. Reinitialize the Offline Files cache for every user. With this: Again, how do I 'break into' the CSC folder and get to its data? As an experiment, I took one workstation's HD offsite, imaged it for safety, and then tried the following with one of our shop PCs after attaching the drive: grant myself full control of the folder (failed), grant myself ownership of the folder (failed), run chkdsk on the whole drive to make sure nothing's messed up (all OK), try to take full control of the entire drive (failed), try to take ownership of the entire drive (failed). MS KB articles and Googling around suggest there's a utility called CSCCMD that's meant for this exact scenario... but it looks like it's available for XP, not Vista, no? Again, afterwards, how can I reinitialize the Offline Files cache for every user, without doing it manually, workstation by workstation?

    For problem 2:

    a) Figure out which users are on roaming profiles, and where their profiles 'live' on the server. Create new folders for them in the redirected-folders repository, migrate existing data, and disable the roaming. With this: Finding out who's roaming isn't hard. But what's the best way to disable the roaming itself? In AD Users and Computers, or on each user's workstation? Doing it centrally on the server seems more efficient; that said, all of the KB research I've done turns up articles on how to go from local to roaming, not the other way around, so I don't have good documentation on this.

    In closing: we have good backups of NEWBOX and OLDBOX, but not of the workstations themselves, so anything drastic on the client side would need imaging and testing for safety. Thanks for reading along this far! Hopefully you can help me dig us out of this mess.
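
    On the recurring "reinitialize the Offline Files cache without a desk-side visit" question, a hedged sketch: on Vista, Microsoft documents a FormatDatabase registry flag that wipes and re-creates the CSC cache at the next restart, and it can be pushed out via psexec, a startup script, or Group Policy rather than manually:

        :: Schedule an Offline Files cache reset; takes effect at the next reboot
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\CSC\Parameters" /v FormatDatabase /t REG_DWORD /d 1 /f
        shutdown /r /t 0

    Note this discards any unsynced changes in the cache, so it belongs after the CSC data has been recovered, not before.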

  • How to get user input for 2-digit data

    - by oneMinute
    In an HTML form, the user is expected to fill in / select some data and trigger an action, probably an HTTP POST. If your only requested data field is a two-digit value, you can use an HTML text input element to get the data. Then you want to make it useful: let the user easily select the value from an HTML select. But not all of your data is well ordered, so eye-searching within it is somewhat cumbersome, because the data is meaningful through its relations. If there is no primary key for foreign key "12", it should not be shown; vice versa, if this foreign key occurs a lot, then it has some weight and could be displayed with more importance. So, what would be your way? a) Use a text input to get the data and validate it with regex, JavaScript, ...; b) use some dropdown select; c) any other way? Any answer will be appreciated :)
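
    A hedged sketch of options (a) and (b) combined — free two-digit entry plus a shortlist ordered by whatever weight the server computes (all names and values are illustrative):

        <input type="text" name="code" maxlength="2" pattern="[0-9]{2}" title="two digits" />
        <select name="codeQuickPick">
            <option value="">frequent codes...</option>
            <option value="12">12 (most used)</option>
            <option value="07">07</option>
        </select>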

  • Is it bad practice to use an enum that maps to some seed data in a Database?

    - by skb
    I have a table in my database called "OrderItemType" which has about 5 records for the different OrderItemTypes in my system. Each OrderItem contains an OrderItemType, and this gives me referential integrity. In my middle-tier code, I also have an enum which matches the values in this table, so that I can have business logic for the different types. My dev manager says he hates it when people do this, and I am not exactly sure why. Is there a better practice I should be following?
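
    For what it's worth, the pattern itself is widespread; the usual safeguards are to give every enum member an explicit value pinned to the table's primary key (so reordering members can't silently change the mapping) and to add a test that compares the enum against the seed rows. A sketch with illustrative names and values:

        // Values pinned to OrderItemType's primary keys (illustrative).
        public enum OrderItemType
        {
            Standard = 1,
            BackOrder = 2,
            GiftWrap = 3,
            Download = 4,
            Refund = 5
        }

        // Business logic can then switch on the type without magic numbers:
        if (item.OrderItemTypeId == (int)OrderItemType.Refund)
        {
            // ...
        }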

  • What should I do to accommodate large-scale data storage and retrieval?

    - by kailashbuki
    There are two columns in a table inside a MySQL database. The first column contains the fingerprint, while the second one contains the list of documents that have that fingerprint. It's much like an inverted index built by search engines. An instance of a record in the table is shown below:

        34  "doc1, doc2, doc45"

    The number of fingerprints is very large (it can range up to trillions). There are basically two operations on the database: inserting/updating a record, and retrieving a record according to a match on the fingerprint. The table-definition Python snippet is:

        self.cursor.execute("CREATE TABLE IF NOT EXISTS `fingerprint` (fp BIGINT, documents TEXT)")

    And the snippet for the insert/update operation is:

        if self.cursor.execute("UPDATE `fingerprint` SET documents=CONCAT(documents,%s) WHERE fp=%s",
                               ("," + newDocId, thisFP)) == 0L:
            self.cursor.execute("INSERT INTO `fingerprint` VALUES (%s, %s)", (thisFP, newDocId))

    The only bottleneck I have observed so far is the query time in MySQL. My whole application is web-based, so time is a critical factor. I have also thought of using Cassandra, but have little knowledge of it. Please suggest a better way to tackle this problem.
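
    Before reaching for a different store, one hedged observation: as defined, fp has no index, so every UPDATE ... WHERE fp=%s is a full table scan; declaring fp the primary key (or at least adding an index) turns each lookup into a B-tree probe. A sketch of the amended definition:

        # fp as PRIMARY KEY: indexed lookups instead of full table scans
        self.cursor.execute(
            "CREATE TABLE IF NOT EXISTS `fingerprint` ("
            " fp BIGINT NOT NULL PRIMARY KEY,"
            " documents TEXT)")

    Normalizing to one (fp, doc_id) row per document is the other common refinement, since it avoids rewriting an ever-growing TEXT blob on each update.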

  • What does a custom accessor method implementation in Core Data look like?

    - by dontWatchMyProfile
    The documentation is pretty confusing on this one:

        The implementation of accessor methods you write for subclasses of NSManagedObject is typically different from those you write for other classes. If you do not provide custom instance variables, you retrieve property values from and save values into the internal store using primitive accessor methods. You must ensure that you invoke the relevant access and change notification methods (willAccessValueForKey:, didAccessValueForKey:, willChangeValueForKey:, didChangeValueForKey:, willChangeValueForKey:withSetMutation:usingObjects:, and didChangeValueForKey:withSetMutation:usingObjects:). NSManagedObject disables automatic key-value observing (KVO, see Key-Value Observing Programming Guide) change notifications, and the primitive accessor methods do not invoke the access and change notification methods. In accessor methods for properties that are not defined in the entity model, you can either enable automatic change notifications or invoke the appropriate change notification methods.

    Are there any examples that show what these look like?
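
    For reference, the canonical shape from Apple's Core Data documentation, sketched for a modeled string attribute called name:

        // Manual accessors for a modeled attribute: bracket the primitive
        // calls with the access/change notification methods.
        - (NSString *)name
        {
            [self willAccessValueForKey:@"name"];
            NSString *value = [self primitiveValueForKey:@"name"];
            [self didAccessValueForKey:@"name"];
            return value;
        }

        - (void)setName:(NSString *)value
        {
            [self willChangeValueForKey:@"name"];
            [self setPrimitiveValue:value forKey:@"name"];
            [self didChangeValueForKey:@"name"];
        }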

  • What are alternatives to standard ORM in a data access layer?

    - by swampsjohn
    We're all familiar with basic ORM with relational databases: an object corresponds to a row, and an attribute in that object to a column (or some slight variation), though many ORMs add a lot of bells and whistles. I'm wondering what other alternatives there are (besides raw access to the database or whatever you're working with). Alternatives that just work with relational databases would be great, but ones that could encapsulate multiple types of backends besides just SQL (such as flat files, RSS, NoSQL, etc.) would be even better. I'm more interested in ideas than in specific implementations and what languages/platforms they work with, but please link to anything you think is interesting.

  • Dynamically add data stored in PHP to nested JSON

    - by HoGo
    I am trying to dynamically generate data in JSON for the jQuery Gantt chart. I know PHP but am totally green with JavaScript. I have read dozens of solutions on how to dynamically add data to JSON, and tried a few dozen combinations, and nothing. Here is the JSON format:

        var data = [{
            name: "Sprint 0",
            desc: "Analysis",
            values: [{
                from: "/Date(1320192000000)/",
                to: "/Date(1322401600000)/",
                label: "Requirement Gathering",
                customClass: "ganttRed"
            }]
        }, {
            name: " ",
            desc: "Scoping",
            values: [{
                from: "/Date(1322611200000)/",
                to: "/Date(1323302400000)/",
                label: "Scoping",
                customClass: "ganttRed"
            }]
        },
        // ... some more data ...
        ];

    Now I have all the data in a PHP DB result:

        $rows = $db->fetchAllRows($result);
        $rowsNum = count($rows);

    And this is how I wanted to create the JSON out of it:

        var data = '';
        <?php foreach ($rows as $row) { ?>
            data['name'] = "<?php echo $row['name']; ?>";
            data['desc'] = "<?php echo $row['desc']; ?>";
            data['values'] = { "from": "/Date(<?php echo $row['from']; ?>)/",
                               "to": "/Date(<?php echo $row['to']; ?>)/",
                               "label": "<?php echo $row['label']; ?>",
                               "customClass": "ganttOrange" };
        <?php } ?>

    However, this does not work. I have tried without the loop, and replacing the PHP variables with plain text just to check, but that did not work either: the chart displays without the added items. If I add a new item by adding it to the list of values, it works, so there is no problem with the Gantt itself or the paths. Based on all of the above, I assume the problem is with adding plain data to the JSON. Can anyone please help me fix it?
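
    A minimal sketch of the usual approach: build the structure as a PHP array and emit it with json_encode, rather than writing JavaScript assignments element by element — in the snippet above, data is a string, and the data['name'] assignments never build the array of objects the chart expects:

        <?php
        $data = array();
        foreach ($rows as $row) {
            $data[] = array(
                'name'   => $row['name'],
                'desc'   => $row['desc'],
                'values' => array(array(
                    'from'        => '/Date(' . $row['from'] . ')/',
                    'to'          => '/Date(' . $row['to'] . ')/',
                    'label'       => $row['label'],
                    'customClass' => 'ganttOrange',
                )),
            );
        }
        ?>
        var data = <?php echo json_encode($data); ?>;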

  • How can I make Excel correlate data from two data sets into a single graph?

    - by Tom Ritter
    I have two datasets, one being sparser than the other. They look like this:

    Data Set 1:

        4  50
        5  55
        6  60
        7  70
        8  80

    Data Set 2:

        4  10
        6  20
        8  30

    I have several hundred points instead of this few. I want them in the same graph: the X axis being 4-8, the Y axis being 0-100-ish, and two lines, one for each data set. What I get is two lines not correlated at all along the X axis, with the X axis labeled from one of the two datasets and the labels wrong for the other. The smaller data set is plotted one point per tick on the X axis, when I need it to skip ticks and actually line up with the other data set. Not married to Excel; willing to try this in something else if it's free.

  • Repartition hard drive using Mac OS X, keep existing data

    - by Jonny
    I got a 1 TB disk a year or so ago and loaded it with some hundreds of GB of data. I somehow neglected to check the file system, which turns out to be FAT-32 and thus cannot hold files bigger than 4 GB. So now I want to change it, without deleting the data. I thought I'd just make a new partition in the so-far-unused space; then, with the new partition in place, copy/move the data into it, delete the old FAT-32 partition, and make the new partition bigger again... or just make a few more partitions. The critical step here is: can I make that new partition without ruining the existing data? The data should have been written fairly sequentially from the start of the disk, but what do I know... so that's why I'm asking. Can I safely use Disk Utility for this? Any recommended file system?

  • Base64 Encoded Data - DB or Filesystem

    - by Marty
    I have a new program that will be generating a lot of Base64-encoded audio and image data. This data will be served via HTTP in the form of XML, with the Base64 data inline. These files will most likely break 20 MB and higher. Would it be more efficient to serve these files directly from the filesystem, or would it be feasible to store the data in a MySQL database? Caching will be set up, but it is largely unnecessary, because this data will likely be purged shortly after it is created and served. I know that storing binary data in the DB is frowned upon in most circumstances, but since this will all be character data, I want to see what the consensus is. As of now I am leaning toward storing the files in the filesystem for efficiency reasons, but if it is feasible to store them in a database, it would be much easier to manage the data.
