Search Results

Search found 67192 results on 2688 pages for 'excel external data'.


  • CSC folder data access AND roaming profiles issues (Vista with Server 2003, then 2008)

    - by Alex Jones
    I'm a junior sysadmin for an IT contractor that helps small, local government agencies, like little towns and the like. One of our clients, a public library with ~50 staff users, was recently migrated from Server 2003 Standard to Server 2008 R2 Standard in a very short timeframe; our senior employee, the only network engineer, had suddenly put in his two weeks' notice, so management pushed him to do this project before quitting. A bit hasty on management's part? Perhaps. Could we do anything about that? Nope. Do I have to fix this all by myself? Pretty much.

    The network is set up like this:
    a) 50ish staff workstations, all running Vista Business SP2. All staff use MS Outlook, which uses RPC-over-HTTPS ("Outlook Anywhere") for cached Exchange access to an offsite location.
    b) One new (virtualized) Server 2008 R2 Standard instance, running atop a Server 2008 R2 host via Hyper-V. The VM is the domain's DC, and also the site's one and only file server. Let's call that VM "NEWBOX".
    c) One old physical Server 2003 Standard server, running the same roles. Let's call it "OLDBOX". It's still on the network and accessible, but it's been demoted, and its shares have been disabled. No data has been deleted.
    d) Gigabit Ethernet everywhere. The organization has only one domain, and it did not change during the migration.
    e) Most users were set up for a combo of redirected folders + offline files, but some older employees who had been with the organization a long time are still on roaming profiles.
    To sum up: the servers in question handle user accounts and files, nothing else (e.g., no TS, no mail, no IIS).

    I have two major problems I'm hoping you can help me with:
    1) Even though all domain users have had their redirected folders moved to the new server, and logging in to their workstations and testing confirms that the Documents/Music/Whatever folders point to the new paths, it appears some users (and not laptop users or anything, either!) had been working offline from OLDBOX for a long time, and nobody realized it. Here's the ugly implication: a bunch of their data now lives only in their CSC folders, because they can no longer access the share on OLDBOX and finally sync with it. How do I get this data out of those CSC folders and onto NEWBOX?
    2) What's the best way to migrate roaming profile users to non-roaming ones, without losing vital data like documents, any lingering PSTs, etc.?

    Things I've thought about trying for problem 1:
    a) Re-enable the documents share on OLDBOX, force an Offline Files sync for ALL domain users, then copy that share's data to the equivalent share on NEWBOX. Reinitialize the Offline Files cache for every user. With this: How do I safely force a domain-wide Offline Files sync? Could I lose data by re-enabling the share on OLDBOX and forcing the sync? Afterwards, how can I reinitialize the Offline Files cache for every user, without doing it manually, workstation by workstation?
    b) Determine which users have unsynced changes against OLDBOX (again, how?), search each user's CSC folder domain-wide via workstation admin shares, and grab the unsynced data. Reinitialize the Offline Files cache for every user. With this: How can I detect which users have unsynced changes with a script? How can I search each user's CSC folder, when the ownership and permissions set on CSC folders are so restrictive? And again, afterwards, how can I reinitialize the Offline Files cache for every user without doing it manually, workstation by workstation?
    c) Manually visit each workstation, copy the contents of the CSC folder, and manually copy that data onto NEWBOX. Reinitialize the Offline Files cache for every user. With this: Again, how do I 'break into' the CSC folder and get to its data? As an experiment, I took one workstation's HD offsite, imaged it for safety, and then tried the following with one of our shop PCs after attaching the drive: grant myself full control of the folder (failed), grant myself ownership of the folder (failed), run chkdsk on the whole drive to make sure nothing's messed up (all OK), try to take full control of the entire drive (failed), try to take ownership of the entire drive (failed). MS KB articles and Googling around suggest there's a utility called CSCCMD that's meant for this exact scenario... but it looks like it's available for XP, not Vista, no? And once more: afterwards, how can I reinitialize the Offline Files cache for every user without doing it manually?

    And for problem 2:
    a) Figure out which users are on roaming profiles, and where their profiles 'live' on the server. Create new folders for them in the redirected folders repository, migrate their existing data, and disable the roaming. With this: Finding out who's roaming isn't hard. But what's the best way to disable the roaming itself? In AD Users and Computers, or on each user's workstation? Doing it centrally on the server seems more efficient; that said, all of the KB research I've done turns up articles on how to go from local to roaming, not the other way around, so I don't have good documentation on this.

    In closing: we have good backups of NEWBOX and OLDBOX, but not of the workstations themselves, so anything drastic on the client side would need imaging and testing for safety. Thanks for reading along this far! Hopefully you can help me dig us out of this mess.
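    For the recurring sub-question about reinitializing the Offline Files cache without visiting every machine, here is a minimal sketch in Python, assuming the documented FormatDatabase registry flag (the CSC service reformats its cache on the next reboot when it is set); the exact key path on Vista and deployment through a GPO startup script are this sketch's assumptions, to be verified in a lab first:

        import winreg

        # Hedged sketch: flag the Offline Files (CSC) cache for reinitialization.
        # CAUTION: reformatting the cache destroys any unsynced changes still in
        # it, so pull the stranded data out of the CSC folders before deploying.
        # Writing under HKLM requires the script to run elevated, e.g. as a GPO
        # computer startup script.
        CSC_PARAMS = r"SYSTEM\CurrentControlSet\Services\CSC\Parameters"

        with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, CSC_PARAMS, 0,
                                winreg.KEY_SET_VALUE) as key:
            winreg.SetValueEx(key, "FormatDatabase", 0, winreg.REG_DWORD, 1)

        print("CSC cache flagged for reset; it will be rebuilt after a reboot.")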

  • Javascript reference external script file - security implications

    - by rkrauter
    Hi, if I have a reference to an external third-party JavaScript file on my website, what are the security implications? Can the JavaScript file be used to steal cookies? One example of this is the Google Analytics JavaScript reference file. Could the third party technically steal cookies or any other sensitive information from my logged-on users (XSS)? The whole cross-domain scripting topic has me confused sometimes. Thanks!

  • What does a custom accessor method implementation in Core Data look like?

    - by dontWatchMyProfile
    The documentation is pretty confusing on this one: "The implementation of accessor methods you write for subclasses of NSManagedObject is typically different from those you write for other classes. If you do not provide custom instance variables, you retrieve property values from and save values into the internal store using primitive accessor methods. You must ensure that you invoke the relevant access and change notification methods (willAccessValueForKey:, didAccessValueForKey:, willChangeValueForKey:, didChangeValueForKey:, willChangeValueForKey:withSetMutation:usingObjects:, and didChangeValueForKey:withSetMutation:usingObjects:). NSManagedObject disables automatic key-value observing (KVO, see Key-Value Observing Programming Guide) change notifications, and the primitive accessor methods do not invoke the access and change notification methods. In accessor methods for properties that are not defined in the entity model, you can either enable automatic change notifications or invoke the appropriate change notification methods." Are there any examples that show what these look like?

  • What should I do to accommodate large-scale data storage and retrieval?

    - by kailashbuki
    There are two columns in the table inside a MySQL database. The first column contains the fingerprint, while the second one contains the list of documents which have that fingerprint. It's much like an inverted index built by search engines. An instance of a record inside the table is shown below:

        34  "doc1, doc2, doc45"

    The number of fingerprints is very large (it can range up to trillions). There are basically two operations on the database: inserting/updating a record, and retrieving a record according to a match on the fingerprint. The table definition Python snippet is:

        self.cursor.execute("CREATE TABLE IF NOT EXISTS `fingerprint` (fp BIGINT, documents TEXT)")

    And the snippet for the insert/update operation is:

        if self.cursor.execute("UPDATE `fingerprint` SET documents=CONCAT(documents,%s) WHERE fp=%s", (","+newDocId, thisFP)) == 0L:
            self.cursor.execute("INSERT INTO `fingerprint` VALUES (%s, %s)", (thisFP, newDocId))

    The only bottleneck I have observed so far is the query time in MySQL. My whole application is web based, so time is a critical factor. I have also thought of using Cassandra, but have less knowledge of it. Please suggest me a better way to tackle this problem.
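    A minimal sketch of one commonly suggested alternative, kept in the same Python/MySQL style as the snippets above (the `fingerprint_doc` table name and the VARCHAR width are assumptions for illustration): store one row per (fingerprint, document) pair and let an index do the work, instead of rewriting an ever-growing TEXT blob on every update.

        # Hedged sketch: a normalized layout instead of CONCAT-ing doc ids into TEXT.
        # One row per (fingerprint, document); the composite key doubles as an index,
        # so lookups by fp stay fast as the table grows.
        self.cursor.execute(
            "CREATE TABLE IF NOT EXISTS `fingerprint_doc` ("
            " fp BIGINT NOT NULL,"
            " doc_id VARCHAR(64) NOT NULL,"
            " PRIMARY KEY (fp, doc_id))")

        # Adding a document becomes a single idempotent insert, no read-modify-write:
        self.cursor.execute(
            "INSERT IGNORE INTO `fingerprint_doc` VALUES (%s, %s)", (thisFP, newDocId))

        # Retrieval by fingerprint uses the index directly:
        self.cursor.execute(
            "SELECT doc_id FROM `fingerprint_doc` WHERE fp = %s", (thisFP,))
        docs = [row[0] for row in self.cursor.fetchall()]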

  • Executing an external executable and waiting until it finishes before continuing the setup in NSIS

    - by Ramesh
    I am new to the NSIS install creator, and I need to run an external executable as a prerequisite; once it has finished, the setup should continue. I tried the code below, but it just copies the exe to the installation path:

        Section "example" example
            SetOutPath "$INSTDIR"
            MessageBox MB_OK "The applications."
            File "Prerequisites\setup.exe"
            ExecWait '"exec" /i "$INSTDIR\setup.exe" /passive'
            SetRebootFlag true
        SectionEnd

  • Using external maps with Bing Maps

    - by user230408
    Can I use the Bing Maps platform with an external mapping source? For example, I want to use the Bing Maps Silverlight client with my own map files instead of the provided maps. (Some areas' coverage is insufficient with Bing's provided mapping.) Thanks.

  • Is it bad practice to use an enum that maps to some seed data in a Database?

    - by skb
    I have a table in my database called "OrderItemType" which has about 5 records for the different OrderItemTypes in my system. Each OrderItem contains an OrderItemType, and this gives me referential integrity. In my middle-tier code, I also have an enum which matches the values in this table, so that I can have business logic for the different types. My dev manager says he hates it when people do this, and I am not exactly sure why. Is there a better practice I should be following?
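    For illustration, a minimal sketch of the pattern being discussed, written in Python rather than the poster's middle tier (the member names and ids are invented): the enum's numeric values mirror the primary keys of the seed rows, which is exactly the coupling the dev manager objects to.

        from enum import IntEnum

        # Hypothetical mirror of the OrderItemType seed table; each value must
        # match the primary key of the corresponding row.
        class OrderItemType(IntEnum):
            PRODUCT = 1
            SHIPPING = 2
            DISCOUNT = 3
            TAX = 4
            FEE = 5

        def is_chargeable(type_id: int) -> bool:
            # Business logic keyed off the enum instead of magic numbers.
            return OrderItemType(type_id) in (OrderItemType.PRODUCT,
                                              OrderItemType.SHIPPING)

    One common mitigation is a startup check that selects the id/name pairs from the table and asserts they match the enum members, so any drift between code and seed data fails fast instead of silently.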

  • How to get user input for 2-digit data

    - by oneMinute
    In an HTML form, the user is expected to fill in / select some data and trigger an action, probably an HTTP POST. If the only requested data field is a 2-digit value, you can use an HTML text input element to collect it. Then you want to make it more usable: let the user easily select the value from an HTML select. But not all of your data is well ordered, so searching these values by eye is somewhat cumbersome, because the data is only meaningful through its relations. If there is no primary key for foreign key "12", it should not be shown. Vice versa, if a foreign key occurs a lot, then it has some weight and could be displayed with more prominence. So, what would your way be? a) Use a text input to get the data and validate it with regex, JavaScript, ... b) Use some dropdown select. c) Any other way? Any answer will be appreciated :)

  • Best practice to include log4Net external config file in ASP.NET

    - by Martin Buberl
    I have seen at least two ways to include an external log4net config file in an ASP.NET web application:

    1. Having the following attribute in your AssemblyInfo.cs file:

        [assembly: log4net.Config.XmlConfigurator(ConfigFile = "Log.config", Watch = true)]

    2. Calling the XmlConfigurator in the Global.asax.cs:

        protected void Application_Start()
        {
            XmlConfigurator.Configure(new FileInfo("Log.config"));
        }

    What would be the best practice to do it?

  • What are alternatives to standard ORM in a data access layer?

    - by swampsjohn
    We're all familiar with basic ORM with relational databases: an object corresponds to a row and an attribute in that object to a column (or some slight variation), though many ORMs add a lot of bells and whistles. I'm wondering what other alternatives there are (besides raw access to the database or whatever you're working with). Alternatives that just work with relational databases would be great, but ones that could encapsulate multiple types of backends besides just SQL (such as flat files, RSS, NoSQL, etc.) would be even better. I'm more interested in ideas rather than specific implementations and what languages/platforms they work with, but please link to anything you think is interesting.
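    As one concrete point of reference, a minimal sketch in Python of the "hand-written mapper over the raw driver" middle ground (the schema and names are invented for the example): SQL goes in, plain immutable records come out, and there is no ORM layer tracking identity or changes.

        import sqlite3
        from typing import NamedTuple

        # Plain record type: no base class, no lazy loading, no change tracking.
        class Article(NamedTuple):
            id: int
            title: str
            body: str

        def fetch_articles(conn: sqlite3.Connection, limit: int = 10) -> list:
            # A hand-rolled row mapper: one query, one list of plain records.
            rows = conn.execute(
                "SELECT id, title, body FROM articles ORDER BY id LIMIT ?",
                (limit,))
            return [Article(*row) for row in rows]

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE articles (id INTEGER PRIMARY KEY, title TEXT, body TEXT)")
        conn.execute("INSERT INTO articles (title, body) VALUES ('hello', 'world')")
        print(fetch_articles(conn))

    The same mapper-function shape extends to non-SQL backends (a flat file or an RSS feed can feed the same record type), which is one answer to the "encapsulate multiple backends" part of the question.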

  • Copy and paste between sheets in a workbook with VBA code

    - by Hannah
    Trying to write a macro in VBA for Excel to look at the value in a certain column in each row of a list of data and, if that value is "yes", copy and paste the entire row onto a different sheet in the same workbook. Let's name the two sheets "Data" and "Final". I want to have the sheets referenced so it does not matter which sheet I have open when the code runs. I was going to use a Do loop to cycle through the rows on the data sheet until it finds there are no more entries, and If statements to check the values. I am confused about how to switch from one sheet to the next. How do I specifically reference cells in different sheets? Here is the skeleton I had in mind, cleaned up into compilable VBA:

        Sub CopyQualifyingRows()
            Dim wsData As Worksheet, wsFinal As Worksheet, wsThird As Worksheet
            Dim x As Long
            Set wsData = ThisWorkbook.Worksheets("Data")      ' qualify every range
            Set wsFinal = ThisWorkbook.Worksheets("Final")    ' by its sheet, so the
            Set wsThird = ThisWorkbook.Worksheets("Sheet3")   ' active sheet doesn't matter
            x = 1
            Do While wsData.Cells(x, 1).Value <> ""           ' until no more entries
                If wsData.Cells(x, 1).Value = "NO" Then
                    If wsData.Cells(x, 2).Value > wsData.Cells(x, 3).Value Or _
                       wsData.Cells(x, 4).Value < wsData.Cells(x, 5).Value Then
                        ' Append each qualifying row to the bottom of Final.
                        wsData.Rows(x).Copy Destination:= _
                            wsFinal.Cells(wsFinal.Rows.Count, 1).End(xlUp).Offset(1).EntireRow
                    End If
                ElseIf wsData.Cells(x, 1).Value = "YES" Then
                    ' Copy the entire row to the third sheet instead.
                    wsData.Rows(x).Copy Destination:= _
                        wsThird.Cells(wsThird.Rows.Count, 1).End(xlUp).Offset(1).EntireRow
                End If
                x = x + 1
            Loop
        End Sub

  • Dynamically add data stored in php to nested json

    - by HoGo
    I am trying to dynamically generate the data for a jQuery Gantt chart in JSON. I know PHP, but am totally green with JavaScript. I have read dozens of solutions on how to dynamically add data to JSON, and tried a few dozen combinations, and nothing. Here is the JSON format:

        var data = [{
            name: "Sprint 0",
            desc: "Analysis",
            values: [{
                from: "/Date(1320192000000)/",
                to: "/Date(1322401600000)/",
                label: "Requirement Gathering",
                customClass: "ganttRed"
            }]
        },{
            name: " ",
            desc: "Scoping",
            values: [{
                from: "/Date(1322611200000)/",
                to: "/Date(1323302400000)/",
                label: "Scoping",
                customClass: "ganttRed"
            }]
        },
        <!-- Some more data -->
        }];

    Now I have all the data in a PHP DB result. Here it goes:

        $rows = $db->fetchAllRows($result);
        $rowsNum = count($rows);

    And this is how I wanted to create the JSON out of it:

        var data = '';
        <?php foreach ($rows as $row) { ?>
            data['name'] = "<?php echo $row['name']; ?>";
            data['desc'] = "<?php echo $row['desc']; ?>";
            data['values'] = {"from" : "/Date(<?php echo $row['from']; ?>)/",
                              "to" : "/Date(<?php echo $row['to']; ?>)/",
                              "label" : "<?php echo $row['label']; ?>",
                              "customClass" : "ganttOrange"};
        }

    However, this does not work. I have tried it without the loop, and with the PHP variables replaced by plain text just to check, but it did not work either: the chart displays without the added items. If I add a new item by hand to the list of values, it works, so there is no problem with the Gantt itself or the paths. Based on all of the above, I assume the problem is with how I am adding plain data to the JSON. Can anyone please help me fix it?

  • Base64 Encoded Data - DB or Filesystem

    - by Marty
    I have a new program that will be generating a lot of Base64-encoded audio and image data. This data will be served via HTTP in the form of XML, with the Base64 data inline. These files will most likely grow to 20 MB and beyond. Would it be more efficient to serve these files directly from the filesystem, or would it be feasible to store the data in a MySQL database? Caching will be set up, but it is largely unnecessary, because this data will likely be purged shortly after it is created and served. I know that storing binary data in the DB is frowned upon in most circumstances, but since this will all be character data, I want to see what the consensus is. As of now, I am leaning toward storing the files on the filesystem for efficiency reasons, but if it is feasible to store them in a database, it would be much easier to manage the data.
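    A minimal sketch of the filesystem option being leaned toward, in Python (the storage root and fan-out scheme are assumptions for illustration): content-addressed names make writes collision-free, a two-level directory fan-out keeps any single directory small, and purging after serving is a plain unlink with no database cleanup behind it.

        import hashlib
        from pathlib import Path

        STORE = Path("/var/data/b64store")  # assumed storage root

        def save_payload(b64_text: str) -> Path:
            # Content-addressed name: same payload, same path.
            digest = hashlib.sha256(b64_text.encode("ascii")).hexdigest()
            # Two-level fan-out (ab/cd/abcd...) keeps directories small.
            path = STORE / digest[:2] / digest[2:4] / digest
            path.parent.mkdir(parents=True, exist_ok=True)
            path.write_text(b64_text, encoding="ascii")
            return path

        def purge_payload(path: Path) -> None:
            # Purge shortly after serving, as the question anticipates.
            path.unlink(missing_ok=True)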

  • Repartition hard drive using Mac OS X, keep existing data

    - by Jonny
    I got a 1 TB disk a year or so ago and loaded it with some hundreds of GB of data. I somehow neglected to check the file system, which turns out to be FAT-32 and thus cannot hold files bigger than 4 GB. So now I want to change it, without deleting the data. I thought I'd just make a new partition in the as-yet-unused space. Then, with the new partition in place, copy/move the data into it, delete the old FAT-32 partition, and grow the new partition again... or just make a few more partitions. The critical step here is: can I make that new partition without ruining the data? The data should have been written fairly sequentially from the start of the disk, but what do I know... so that's why I'm asking. Can I safely use Disk Utility for this? Any recommended file system?

  • How can I use different metadata for a Dynamic Data form view and the list view?

    - by ProfK
    We often need a summarised list view with a more detailed form view. So far, the only two ways I can think of doing this are using a 'custom' list or form view that uses a supplementary 'remove' or 'add' list of fields and generalising this over all entity sets, or creating a custom metadata provider that somehow infers which columns to supply. Are there any other ways of distinguishing these two views? PS: I wrote a fun little general details page that handles insert, edit, and view, all on one page template. Maybe I could somehow use that? It's here.

  • server performance: multiple external connections and performance

    - by websiteguru
    I am creating a PHP script that requires the server to make several cURL requests per run. I'll be running this script through cron every 3 minutes. I'm looking to maximize the number of cURL requests I can make in a 24-hour period. What I am wondering is whether it would be better, from a performance standpoint, to get a dedicated server or several small shared hosting accounts. Since the limiting factor is the number of external connections rather than system resources, I'm wondering which is the best approach.

  • IP address shows as a hyphen for failed remote desktop connections in Event Log

    - by PsychoDad
    I am trying to figure out why failed remote desktop connections (from Windows Remote Desktop) show the client IP address as a hyphen. Here is the event log entry I get when I type the wrong password for an account (the server is completely external to my home computer):

        <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
          <System>
            <Provider Name="Microsoft-Windows-Security-Auditing" Guid="{54849625-5478-4994-A5BA-3E3B0328C30D}" />
            <EventID>4625</EventID>
            <Version>0</Version>
            <Level>0</Level>
            <Task>12544</Task>
            <Opcode>0</Opcode>
            <Keywords>0x8010000000000000</Keywords>
            <TimeCreated SystemTime="2012-03-25T19:22:14.694177500Z" />
            <EventRecordID>1658501</EventRecordID>
            <Correlation />
            <Execution ProcessID="544" ThreadID="12880" />
            <Channel>Security</Channel>
            <Computer>[Delete for Security Purposes]</Computer>
            <Security />
          </System>
          <EventData>
            <Data Name="SubjectUserSid">S-1-0-0</Data>
            <Data Name="SubjectUserName">-</Data>
            <Data Name="SubjectDomainName">-</Data>
            <Data Name="SubjectLogonId">0x0</Data>
            <Data Name="TargetUserSid">S-1-0-0</Data>
            <Data Name="TargetUserName">[Delete for Security Purposes]</Data>
            <Data Name="TargetDomainName">[Delete for Security Purposes]</Data>
            <Data Name="Status">0xc000006d</Data>
            <Data Name="FailureReason">%%2313</Data>
            <Data Name="SubStatus">0xc000006a</Data>
            <Data Name="LogonType">3</Data>
            <Data Name="LogonProcessName">NtLmSsp </Data>
            <Data Name="AuthenticationPackageName">NTLM</Data>
            <Data Name="WorkstationName">MyComputer</Data>
            <Data Name="TransmittedServices">-</Data>
            <Data Name="LmPackageName">-</Data>
            <Data Name="KeyLength">0</Data>
            <Data Name="ProcessId">0x0</Data>
            <Data Name="ProcessName">-</Data>
            <Data Name="IpAddress">-</Data>
            <Data Name="IpPort">-</Data>
          </EventData>
        </Event>

    I am trying to stop terminal services attacks, but have found nothing online after several hours of searching. Any insight is appreciated.

  • Vlookup using wildcards in indexed column

    - by Dm3k1
    I know how to use a wildcard with VLOOKUP on the lookup value, but what about on the matched column? I know you can do, for instance:

        VLOOKUP("*Hello*", A4:G4, 2, FALSE)

    However, what if you wanted to match a cell containing "Hello" against another one containing "Why, Hello there!" (so the opposite, I suppose)? My data is set up so that a macro is going to ask whether A4 in workbook 1 matches C2:C25000 in workbook 2, in order to return the corresponding value from column D back to workbook 1. The thought is that when A4 in workbook 1 says "Its DHS Here", I could input a value such as "DHS" in column C of workbook 2 and have it count as a match. Is this possible?

  • Statistical analysis on large data set to be published on the web

    - by dassouki
    I have a non-computer-related data logger that collects data from the field. This data is stored as text files, and I manually lump the files together and organize them. The current format is one CSV file per year per logger. Each file is around 4,000,000 lines x 7 loggers x 5 years = a lot of data. Some of the data is organized as bins: item_type, item_class, item_dimension_class; other data is more unique, such as item_weight, item_color, date_collected, and so on. Currently, I do statistical analysis on the data using a Python/NumPy/matplotlib program I wrote. It works fine, but the problem is that I'm the only one who can use it, since it and the data live on my computer. I'd like to publish the data on the web using a Postgres DB; however, I need to find or implement a statistical tool that'll take a large Postgres table and return statistical results within an adequate time frame. I'm not familiar with Python for the web; however, I'm proficient with PHP on the web side and Python on the offline side. Users should be allowed to create their own histograms and data analyses. For example, a user can search for all items that are blue and were shipped between week x and week y, while another user can sort the weight distribution of all items by hour across the whole year. I was thinking of creating and indexing my own statistical tools, or automating the process somehow to emulate most queries, but that seems inefficient. I'm looking forward to hearing your ideas. Thanks!
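    A minimal sketch of one way to split the work between the web layer and Postgres, assuming a psycopg2 connection and an items table with weight_kg and collected_at columns (all of the names are invented for the example): the database computes the histogram buckets with width_bucket, so only the bucket counts travel back to the web layer instead of millions of raw rows.

        import psycopg2

        # Hedged sketch: a 20-bucket weight histogram computed inside Postgres.
        conn = psycopg2.connect("dbname=loggerdata")  # assumed DSN
        with conn, conn.cursor() as cur:
            cur.execute(
                """
                SELECT width_bucket(weight_kg, %s, %s, %s) AS bucket, count(*) AS n
                FROM items
                WHERE collected_at BETWEEN %s AND %s
                GROUP BY bucket
                ORDER BY bucket
                """,
                (0, 50, 20, "2009-01-01", "2009-12-31"))
            for bucket, n in cur.fetchall():
                print(bucket, n)

    With indexes on the bin columns and the date, this pattern covers most "filter, group, count" queries users would build, and the front end (PHP or Python) only renders the aggregated result.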

  • Javascript/jQuery get external CSS value

    - by Acorn
    Is it possible to get a value from the external CSS of a page if the element that the style refers to has not been generated yet? (The element is to be generated dynamically.) The jQuery method I've seen is $('element').css('property', 'value');, but this relies on the element being on the page. Is there a way of finding out what the property is set to within the CSS, rather than the computed style of an element?

  • How to create a template to display data from a class in WPF

    - by Dave Colwell
    Hey, I have a data layer which is returning lists of classes containing data, and I want to display this data in my form in WPF. The data is just properties on the class, such as Class.ID, Class.Name, and Class.Description (for the sake of example). How can I create a custom control, or template an existing control, so that it can be given one of these classes and display its data in a data-bound fashion? Thanks :)
