Search Results

Search found 70675 results on 2827 pages for 'master data management'.


  • Event-based project management

    - by andreas
    Hey fellas! Where I work we have a number of contract programmers. We handle the requirements and the project management of our products. I have been trying various project timing and estimation techniques but can't get the hang of it yet. I have read Joel's evidence-based scheduling and I was wondering if there's anything out there that can help me apply that theory? I'm not looking for complex software; perhaps something in Excel? Any help will be appreciated. Andreas
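
    For anyone else chasing the same thing: Joel's evidence-based scheduling boils down to a Monte Carlo simulation over your historical estimate-vs-actual ratios, which fits in a few lines of code (or an Excel sheet using RAND() against a column of past velocities). A minimal Python sketch, with made-up velocity and estimate numbers, not a full EBS implementation:

        import random

        # Velocity = estimated hours / actual hours, from past completed tasks.
        velocities = [0.6, 0.9, 1.1, 0.7, 1.3, 0.8]   # historical, made up
        estimates = [16, 8, 24, 4]                     # remaining task estimates

        def simulate_totals(trials=1000):
            totals = []
            for _ in range(trials):
                # For each task, assume a randomly chosen past velocity repeats.
                totals.append(sum(e / random.choice(velocities) for e in estimates))
            return sorted(totals)

        totals = simulate_totals()
        print("50% confidence: %.0f hours" % totals[len(totals) // 2])
        print("90% confidence: %.0f hours" % totals[int(len(totals) * 0.9) - 1])

    Reading off percentiles of the simulated totals gives the confidence curve Joel describes; the same trick works in Excel with one row per trial.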

    Read the article

  • UIWebView memory management

    - by wolfrevo
    Hello, I have a problem with memory management. I am developing an application that makes heavy use of UIWebView. This app dynamically generates lots of UIWebViews while loading content from my server. Some of these UIWebViews are quite large and have a lot of pictures. If I use Instruments to detect leaks, I do not detect any. However, lots of objects are allocated, and I suspect that has to do with the UIWebViews. When the web views are released because they are no longer needed, it appears that not all memory is released. I mean, after a request to my server the app creates a UITableView and many web views (Instruments says about 8 MB). When the user taps back, all of them are released, but memory usage only decreases by about 2-3 MB, and after 5-10 minutes using the app it crashes. Am I missing something? Does anyone know what could be happening? Thank you!
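
    A hedged note for anyone hitting the same wall: the usual first thing to try is an explicit teardown before releasing each web view, since UIWebView holds on to cached page content. A sketch (the webView ivar name is hypothetical):

        // Hypothetical cleanup method; webView is assumed to be a retained ivar.
        - (void)tearDownWebView {
            [webView stopLoading];                        // halt any in-flight loads
            [webView loadHTMLString:@"" baseURL:nil];     // flush the cached page
            webView.delegate = nil;                       // no callbacks after release
            [webView removeFromSuperview];
            [webView release];
            webView = nil;
        }

    Also note that UIKit caches fairly aggressively, so memory reported by Instruments may not drop right away even when nothing is leaked; a crash after 5-10 minutes is better diagnosed by watching didReceiveMemoryWarning than by raw usage numbers.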

    Read the article

  • Looking for Light Time Management Software Suggestions (for Mac)

    - by tmo256
    I'm looking for a simple project management app that performs task scheduling, along the lines of Merlin or MS Project, but nowhere near as robust. I don't need to deal with other (human) resources, but I work on anything from 3 to 6 different projects at a time. What I'd like is to be able to input deadlines and tasks, and have a schedule suggested to complete them. I do technical work, but I don't think I need anything specifically for software development, especially considering I do plenty of other kinds of things, like graphic design and social media PR. I'd really like this to be dead simple, as simple as possible. Suggestions? OmniPlan, something web-based? I definitely cannot afford anything too extravagant; I'm really looking for something under $200. Thanks for your input!

    Read the article

  • Group SQL tables in SQL Server Management Studio object explorer

    - by MainMa
    I have a database which has approximately sixty tables, and other tables are added constantly. Each table is part of a schema. Such a quantity of tables makes it difficult to use Microsoft SQL Server Management Studio 2008. For example, I must scroll up in Object Explorer to access database-related functions, or scroll down each time I need to access Views or Security features. Is it possible to group several tables to be able to expand or collapse them in Object Explorer? Maybe a folder could be displayed for each schema, letting me collapse the folders I don't need to use?

    Read the article

  • Objective-C Memory Management: When do I [release]?

    - by Sahat
    I am still new to this memory management stuff (the garbage collector took care of everything in Java), but as far as I understand it, if you allocate memory for an object then you have to release that memory back to the computer as soon as you are finished with your object: myObject = [Object alloc]; and [myObject release]; Right now I just have 3 parts in my Objective-C .m file: @interface, @implementation and main. I released my object at the end of the program, next to these guys: [pool drain]; return 0; But if this program were to be a lot more complicated, would it be okay to release myObject at the end of the program? I guess a better question would be: when do I release an object's allocated memory? How do I know where to place [myObject release];?
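
    For what it's worth, the rule of thumb under manual reference counting: if you obtained an object through alloc, new, copy, or retain, you own it and must balance it with exactly one release (or autorelease) as soon as you no longer need it; an object that retains other objects releases them in its own dealloc. A minimal sketch (class and names invented for illustration):

        #import <Foundation/Foundation.h>

        @interface Person : NSObject {
            NSString *name;
        }
        - (id)initWithName:(NSString *)aName;
        @end

        @implementation Person
        - (id)initWithName:(NSString *)aName {
            if ((self = [super init])) {
                name = [aName retain];   // we take ownership here...
            }
            return self;
        }
        - (void)dealloc {
            [name release];              // ...so we release it here
            [super dealloc];
        }
        @end

        int main(void) {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            Person *p = [[Person alloc] initWithName:@"Ada"];  // alloc: we own p
            // ... use p ...
            [p release];   // done with p: release it now, not at program exit
            [pool drain];
            return 0;
        }

    So in a larger program you wouldn't wait for the end of main; each owner releases what it owns the moment it is done with it.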

    Read the article

  • Project tracking/management tool

    - by Alvaro Rodriguez
    Which project tracking tool do you use? Does it allow programmers to bill hours worked to projects/tasks? Does it let you track items promised vs. items delivered? Does it let you forecast personnel needs when you have only a ballpark estimate of how many hours you are going to need for different task types? Does it integrate with your bug tracker? [Mantis] Does it integrate with your source control tool? [Subversion] Does it allow you to easily publish your schedule and current priorities to team members? Does it produce reports on any or all of the above? Am I even right in calling a tool that does those things a "project tracking/management tool"? What does your tool have that makes you love it and use it every day? I don't really care about Gantt charts. I find Microsoft Project quite clunky for these needs (although I'm hardly an expert user).

    Read the article

  • Problem exporting from SQL Server Management Studio Express to Go Daddy

    - by brohjoe
    I'm having a terrible time exporting SQL Server Management Studio Express tables to the Go Daddy web server, and Go Daddy support can't help either. I started by using the Microsoft Database Publishing Wizard for SQL Server, thinking it would be 'easy'... not! I ran into user/password errors even though I was using the user and password that were created for the SQL database on the Go Daddy site. I called Go Daddy's help desk support and went through several iterations of processes to get the thing working, but it didn't. Finally, the support guy acted like his phone went on the blink and scuttled away. There has got to be some way to upload a SQL Server database to a web server without a lot of drama. Any suggestions?
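
    One avenue worth a try before abandoning the wizard entirely: the Database Publishing Wizard also installs a command-line tool, sqlpubwiz, that can script the whole database (schema plus data) to a single .sql file. The exact switches below are from memory, so treat them as an assumption and run sqlpubwiz help to confirm:

        rem script the local database, schema and data, to one file
        sqlpubwiz script -d MyLocalDatabase C:\temp\MyDatabase.sql

    You can then open that file in a Management Studio query window connected to the Go Daddy server with their SQL user/password and execute it, which sidesteps the wizard's direct-publish authentication step, where those user/password errors usually appear.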

    Read the article

  • AS3 Memory management when instantiating extended classes

    - by araid
    I'm developing an AS3 application which has some memory leaks I can't find, so I'd like to ask some newbie questions about memory management. Imagine I have a class named BaseClass and some classes that extend it, such as ClassA, ClassB, etc. I declare a variable: var myBaseClass:BaseClass = new ClassA(); After a while, I use it to instantiate a new object: myBaseClass = new ClassB(); Some time after that: myBaseClass = new ClassC(); The same thing keeps happening every x milliseconds, triggered by a timer. Is there any memory problem here? Are the unused instances correctly deleted by the garbage collector? Thanks!
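
    A note on the usual culprit here: reassigning the variable by itself doesn't leak, because an instance with no remaining references becomes eligible for collection. What keeps old instances alive in practice is event listeners (a listener registration counts as a reference). A sketch of both workarounds, assuming a hypothetical onComplete handler:

        // Before dropping the last reference, unhook listeners...
        myBaseClass.removeEventListener(Event.COMPLETE, onComplete);
        myBaseClass = new ClassB(); // old instance is now collectable

        // ...or register weakly up front (5th argument = useWeakReference):
        myBaseClass.addEventListener(Event.COMPLETE, onComplete, false, 0, true);

    The same applies to the Timer driving this: a running Timer whose handler references an instance will keep that instance reachable.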

    Read the article

  • Management Studio default file save location

    - by jayrdub
    Open a new query window, write some SQL, save the script... the Save File As dialog box opens, but always to the same default location in the Profiles directory. Is there any way to set my default file location, like I used to do with apps from the 1980s? Under Tools | Options, a default location can be specified for query results; I need the same thing for new queries (the text editor). I tried changing the location in the registry, but SSMS just overwrote my changes. Any suggestions? (I saw this unanswered question at http://www.eggheadcafe.com/software/aspnet/30098335/management-studio-default.aspx and I had the exact same question, so I just reposted it here.)

    Read the article

  • ClearCase UCM Mainline Configuration Management Pattern Question

    - by cogmios
    A configuration management pattern question (using Rational ClearCase UCM). When I use the mainline approach, I create new releases like this:
    - create release 1 from mainline
    - at a certain moment, baseline release 1 and deliver it to mainline
    - create release 2 from mainline
    - at a certain moment, baseline release 2 and deliver it to mainline
    - create release 3 from mainline
    - and so on...
    This works very nicely, because the pathname is /main/release 3/latest instead of /main/release 1/release 2/release 3/latest, etc. However, when release 1 contains new elements that have to be propagated to later releases, I cannot use the mainline, since the mainline is already on, e.g., release 4. The only thing I can do is deliver/merge from release 1 directly to release 2. The bad thing is that the pathname then becomes /main/release 1/release 2/latest for those files (and possibly later releases). That is, I think, not in line with the mainline approach. What am I doing wrong?

    Read the article

  • Resources for memory management in embedded applications

    - by Elazar Leibovich
    How should I manage memory in my mission-critical embedded application? I found some articles via Google, but couldn't pinpoint a really useful practical guide. DO-178B forbids dynamic memory allocation, but how do you manage memory then? Preallocate everything in advance and send a pointer to each function that needs it? Allocate it on the stack? Use a global static allocator (but then it's very similar to dynamic allocation)? Answers can take the form of a regular answer, a reference to a resource, or a reference to a good open-source embedded system that serves as an example. Clarification: the issue here is not whether memory management is available for the embedded system, but what a good design for an embedded system is, in order to maximize reliability.
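
    Since the question invites examples: the standard pattern under a no-dynamic-allocation rule is a statically sized block pool, with all storage reserved at link time. A minimal C sketch (sizes are illustrative, not a certified implementation):

        #include <stddef.h>

        #define BLOCK_SIZE  64
        #define BLOCK_COUNT 32

        static unsigned char pool[BLOCK_COUNT][BLOCK_SIZE];
        static unsigned char in_use[BLOCK_COUNT];

        /* Hand out a fixed-size block, or NULL when the pool is exhausted. */
        void *pool_alloc(void) {
            for (size_t i = 0; i < BLOCK_COUNT; ++i) {
                if (!in_use[i]) {
                    in_use[i] = 1;
                    return pool[i];
                }
            }
            return NULL;  /* exhaustion is an explicit, testable failure mode */
        }

        /* Return a block previously obtained from pool_alloc. */
        void pool_free(void *p) {
            for (size_t i = 0; i < BLOCK_COUNT; ++i) {
                if (p == (void *)pool[i]) {
                    in_use[i] = 0;
                    return;
                }
            }
        }

    The point for reliability is analyzability: allocation cost is bounded, there is no fragmentation, and worst-case memory use is known at link time, which is exactly what standards like DO-178B are after.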

    Read the article

  • SQL Server Management Studio 2008 - Insert string with more than two lines

    - by gsharp
    I want to paste a string spanning more than two lines into an nvarchar(max) cell (right-click a table in SQL Server Management Studio 2008 and choose Edit Rows). Unfortunately, only the first line of the string is pasted into the cell. I know I could write an INSERT/UPDATE script for that, but it's not what I'm looking for; I need a quick way to paste some text into a cell. Is there a way to achieve my goal? Thanks in advance.

    Read the article

  • User access management in a J2EE web application

    - by kawtousse
    Hi everyone, I am working on a JSP/servlet project and I have to complete the module that manages access to my JSPs, since I have more than one user, each with a different profile. I defined a table in my database which holds the profile and the permitted URL, like this: id_profil: 1; url: http://localhost/...xyz.jsp; id_page: 1. Now I am trying to have the menu adapt to the id_profil of the logged-in user, so there are pages allowed for one profile that must be hidden from others. I have no idea so far how to realize this, and it is very important for me. Thanks for your help.
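
    One common shape for this in a JSP/servlet app is a servlet Filter that consults the profil/url table on every request; the attribute and helper names below are hypothetical, so adapt them to your schema. A sketch:

        import java.io.IOException;
        import javax.servlet.*;
        import javax.servlet.http.*;

        public class AccessFilter implements Filter {
            public void init(FilterConfig config) {}
            public void destroy() {}

            public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                    throws IOException, ServletException {
                HttpServletRequest request = (HttpServletRequest) req;
                Integer profileId = (Integer) request.getSession().getAttribute("id_profil");

                if (profileId != null && isAllowed(profileId, request.getRequestURI())) {
                    chain.doFilter(req, res);  // this profile may see the page
                } else {
                    ((HttpServletResponse) res).sendError(HttpServletResponse.SC_FORBIDDEN);
                }
            }

            // Placeholder: look the (id_profil, url) pair up in your database table.
            private boolean isAllowed(int profileId, String url) {
                return true;
            }
        }

    Map the filter to *.jsp in web.xml, and reuse the same isAllowed check in the JSPs (e.g. around JSTL <c:if> blocks) to hide menu entries the profile isn't permitted to see.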

    Read the article

  • Content management system

    - by Farax
    I am building a website for a client who needs a content management system to go with it. The client requires features such as content staging, an approval process before a page is published, provision for templates (changing a template changes the layout of the whole website), entry and expiry dates for pages, and content search. I am planning to use an existing open-source CMS for the work, but I am confused as to which one it should be. I need help deciding whether this approach is good, or whether I should develop my own CMS. And if I do use an open-source one, which one is extensible and customizable enough using .NET?

    Read the article

  • Project management software

    - by fahadsadah
    Hello, I am looking for a decent web application that performs project management, and I am hoping you guys can help me out. Requirements:
    - Free, open source software.
    - Runs on a Linux server (no ASP.NET).
    - Git integration. GitHub integration is a plus.
    - Tracks bugs and feature requests.
    - Version tracking/scheduling, i.e. being able to say that a feature will be implemented in version X.
    I was looking at Redmine, but I don't know about the last item. Is there a plugin for that, perhaps?

    Read the article

  • SQL Developer Data Modeler v3.3 Early Adopter: Collaborative Design via Excel?

    - by thatjeffsmith
    As you may have heard last week, we have a new version of Oracle SQL Developer Data Modeler now available as an Early Adopter release. Version 3.3 has quite a few new features and I'll be previewing them here. Today's topic is our new Excel integration. It builds off of last week's lesson: Search, so you may want to go read that first.

    They say it takes a village to raise a child. I say it takes a team to build a data model. You have your techie folks, your business folks, your in-betweeners, and your database geeks. Who gets to define how customers are represented and stored in your database? That data lives forever, so you better get it right from the beginning, or you'll be living in a hacker's paradise for years to come. Lots of good rantings, ravings, and advice on this topic in general on Karen Lopez's (@datachick) blog.

    But let's say you are the primary modeler on a project. You dutifully interview the business folks for their requirements. You sit down and start to model and think you're pretty close. Now you need someone to confirm your assumptions and provide some feedback. Do you send your model over? Take a screenshot and blow it up on a whiteboard? Export to HTML and let them take a magic marker to their monitors? Or maybe you bite the bullet and install your modeling software on their desktops and take the hours or days required to train them up on how to use the tool. Wouldn't it be nice if they could just mark up their corrections in Excel and let you suck the updates back in? This is what we have started to build in Oracle SQL Developer Data Modeler.

    Let's say you have a new table called 'UT_STARTUPS.' It looks a little something like this: [screenshot: a table in Oracle SQL Developer Data Modeler]

    What I would like to do is have my team or co-worker review how I have defined those columns. Perhaps TIMESTAMP is overkill, or maybe the column names themselves aren't up to snuff. What I am going to do is search for all the columns in my table, then export that to Excel. So do a search for UT_STARTUPS. [screenshot: search, filter, then report] With the filter set to 'Columns,' if I do a report I'll only be getting the columns that resolve to my search term. So as long as my table name is unique in the model, I should get what I'm looking for. Here's what I see when I click on the Report button: [screenshot: XLS or XLSX, either format is just fine]

    I want to decide how the column data is exported to Excel though, so I'm going to create a report template that I can use going forward. Click the 'Manage' button and set up a new template. I'm going to call mine 'CollaborativeDevelopment.' The templates allow me to define what properties are included in the reports. Once this is set, I'll have the XLS file generated, and the Excel junkies can get to work.

    Note that not ALL of the report properties are update-able (yes, I made up a new word there) via Excel. We'll have the full list of properties documented going forward, but in my Excel sheet, note that I can't change the table name or the data types for the columns. I'm going to update some column names and supply 'nice' comments so the database users know what's what. Here's my input for the designer/architect/database dude: [screenshot: the marked-up spreadsheet; be kind, please reuse comments]

    Save the file and email it back to your modeler, who can then update the model from Excel. That's right, it's a right mouse click from your model in the tree. If everything goes right, you'll see a nice confirmation message: it's alive! Another to-do item on tap: making this dialog more informative. We'll be showing exactly what in your model was updated from Excel. Let's take another look at the model now. Voila!

    Why are we doing this again? The goal is to reduce the number of round-trips between the modeler and the business process owner. One is used to working with Excel, so why not allow them to mark up their changes in the tool they already know? This is an Early Adopter release and I anticipate this feature getting a good bit of tuning up before we release. Why don't you download 3.3, give it a whirl, and let us know what you think?

    Read the article

  • Partner Blog Series: PwC Perspectives - "Is It Time for an Upgrade?"

    - by Tanu Sood
    Is your organization debating its next step with regard to identity management? While all the stakeholders are well aware that one-size-fits-all doesn't apply to identity management, it is just as true that no two identity management implementations are alike. Oracle's recent release of Identity Governance Suite 11g Release 2 has innovative features such as a customizable user interface, a shopping-cart-style request catalog, and more. However, only a close look at the use cases can help you determine if and when an upgrade to the latest R2 release makes sense for your organization. This post will describe a few of the situations that PwC has helped our clients work through.

    "Should I be considering an upgrade?" If your organization has an existing identity management implementation, the questions below are a good start to assessing your current solution to see if you need to begin planning for an upgrade:
    - Does the current solution scale and meet your projected identity management needs?
    - Does the current solution have a customer-friendly user interface?
    - Are you completely meeting your compliance objectives?
    - Are you still using spreadsheets?
    - Does the current solution have the features you need?
    - Is your total cost of ownership in line with well-performing, similar-sized companies in your industry?
    - Can your organization support your existing identity solution?
    - Is your current product-based solution well positioned to support your organization's tactical and strategic direction?

    Existing Oracle IDM customers: Several existing Oracle clients are looking to move to R2 in 2013. If your organization is on Sun Identity Manager (SIM) or Oracle Identity Manager (OIM), and if your current assessment suggests that you need to upgrade, you should strongly consider OIM 11gR2. Oracle provides upgrade paths to Oracle Identity Manager 11gR2 from SIM 7.x/8.x as well as from Oracle Identity Manager 10g/11gR1. The following are some of the considerations for migration:
    - Check the end-of-support schedule for your product (Sun or legacy OIM).
    - There are several new features available in R2 (including common helpdesk scenarios, profiling of disconnected applications, increased scalability, custom connectors, browser-based UI configurations, portability of configurations during future upgrades, etc.).
    - Cost of ownership (for SIM customers).
    - Customizations that need to be maintained during the upgrade.
    - Time/cost to migrate now vs. waiting for the next version.
    If you are already on an older version of Oracle Identity Manager and actively maintaining your support contract with Oracle, you might be eligible for a free upgrade to OIM 11gR2. Check with your Oracle sales rep for more details.

    Existing IDM infrastructure in place: In the past year and a half, we have seen a surge in IDM upgrades from non-Oracle infrastructure to Oracle. If your organization is looking to improve the end-user experience around identity management functions, the shopping-cart-style access request model and browser-based personalization features may come in handy. Additionally, organizations that have a large number of applications, including e-commerce, LDAP stores, databases, UNIX systems, and mainframes, as well as a high frequency of user identity changes and access requests, will value the high scalability of the OIM reconciliation and provisioning engine. Furthermore, we have seen our clients like OIM's out-of-the-box (OOB) support for multiple authoritative sources.

    For organizations looking to integrate applications that do not have an exposed API, the Generic Technology Connector framework supported by OIM will be helpful in quickly generating a custom connector using the OOB wizard. Similarly, organizations in need not only of flexible on-boarding of disconnected applications but also of strict access management to these applications using approval flows will find the flexible disconnected-application profiling feature an extremely useful tool that provides a high degree of time savings. Organizations looking to develop custom connectors for home-grown or industry-specific applications will likewise find that the Identity Connector Framework support in OIM allows them to build and test a custom connector independently before integrating it with OIM. Lastly, most of our clients considering an upgrade to OIM 11gR2 have also expressed interest in the browser-based configuration feature that allows an administrator to quickly customize the user interface without adding any custom code. Better yet, any code customizations made to the product are portable across future upgrades, which most of our clients view as a big time and money saver.

    Below are some upgrade methodologies we adopt based on client priorities and the scale of the implementation. For illustration purposes, we have assumed that the client is currently on Oracle Waveset (formerly Sun Identity Manager).
    - Integrated deployment: typically used where a client wants to split the implementation, with the current IDM continuing to handle the front-end workflows while OIM takes over the back-office operations incrementally. Once all back-office operations have moved completely to OIM, the front-end workflows are migrated to OIM.
    - Parallel deployment: typically used where a distinct line can be drawn between which functionality each platform supports. For example, the current IDM implementation handles the password reset functionality while OIM takes over the access provisioning and RBAC functions.
    - Cutover deployment: typically recommended where a client has smaller, less complex implementations and it makes sense to leverage the migration tools to move them over immediately.

    What does this mean for YOU? There are many variables to consider when making upgrade decisions. For most customers, there is no 'easy' button. Organizations looking to upgrade or considering a new vendor should start by mapping their requirements to product features. The recommended approach is to take stock of both the short-term and long-term objectives; understand the product features, future roadmap, maturity, and level of commitment from R&D; and build the implementation plan accordingly. As we said in the beginning, there is no one-size-fits-all with identity management. So arm yourself with knowledge, engage in industry discussions, bring in business stakeholders, and start building your implementation roadmap. In the next post we will discuss best practices for R2 implementations. We will cover the dos and don'ts and share our thoughts on making implementations successful.

    Meet the writers:
    - Dharma Padala is a Director in the Advisory Security practice within PwC. He has been implementing medium to large-scale identity management solutions across multiple industries, including the utility, health care, entertainment, retail, and financial sectors. Dharma has 14 years of experience in delivering IT solutions, of which he has spent the past 8 years implementing identity management solutions.
    - Scott MacDonald is a Director in the Advisory Security practice within PwC. He has consulted for several clients across multiple industries, including financial services, health care, automotive, and retail. Scott has 10 years of experience in delivering identity management solutions.
    - John Misczak is a member of the Advisory Security practice within PwC. He has experience implementing multiple identity and access management solutions, specializing in Oracle Identity Manager and Business Process Execution Language (BPEL).
    - Praveen Krishna is a Manager in the Advisory Security practice within PwC. Over the last decade Praveen has helped clients plan, architect, and implement Oracle identity solutions across diverse industries. His experience includes delivering security across diverse topics like network, infrastructure, application, and data, where he brings a holistic point of view to problem solving.
    - Jenny (Xiao) Zhang is a member of the Advisory Security practice within PwC. She has consulted across multiple industries, including financial services, entertainment, and retail. Jenny has three years of experience in delivering IT solutions, of which she has spent the past year and a half implementing identity management solutions.

    Read the article

  • Webmin/Virtualmin running PHP as www-data, locked out of reading .htaccess and writing

    - by Kirill
    I've asked this on the Virtualmin forums, but haven't had any help from there. Recently, "something" happened and the Apache service has gone a bit weird. What it does: it runs all Apache traffic as www-data and sometimes spawns the php5-cgi process as www-data. This is a problem because the domain users own their own directories, and the default permissions don't let www-data write to those folders (file uploads are dead) or read .htaccess (permalinks are broken in WordPress). I've googled this for about a week straight now, tried pretty much everything I could find, and achieved nothing. The only thing that I think might actually be the cause of all this is this page: http:// - i.imgur.com/NYW3x.png (got shut down by the spam filter). So I figured that if I set it to "default", this might magically start working again, but all it does is "crash" Apache (all websites time out). I figure it's something to do with the "mpm" module or something, but I can't find anything relevant in the settings to modify to make it work. Can someone please point me in the right direction?
    System info:
    - Webmin version: 1.580
    - Kernel and CPU: Linux 2.6.35.4-rscloud on x86_64
    - Virtualmin version: 3.90.gpl GPL
    - OS: Ubuntu 10.04 LTS (Lucid)
    A couple of screenshots of top: http://i.imgur.com/U2DTK.png http://i.imgur.com/sNPKs.png

    Read the article

  • Hard drive failed, suspected filesystem corruption, still cannot salvage any data from hard drive

    - by Hippy-Head
    Firstly, I am terribly sorry if this is a duplicate, but I couldn't find a similar issue to mine, so here goes. I have a 1 TB HDD, bought around 8 months ago and used as a backup hard drive. I had not used the drive for some time, and when I tried to get back to some files on it, it was completely wiped, just like that. At first it would not boot; I tried everything from command-line chkdsk and filesystem recovery software to rebuild it. After a few attempts I managed to initialize it; at the time, that felt like an achievement. The problems started when I tried to recover the data inside. I have used A LOT of software, free and commercial, on both Mac and Windows, with the help of cmd or Terminal commands; however, no data of any kind was recovered, even after leaving it to scan thoroughly for around 9-10 hours all night, sometimes longer, with no results at all. I am somewhat desperate; I am usually good at retrieving data from corrupt hard drives, but not this time. Call me paranoid, but I do not want to give it to someone else to fix, as I have a lot of photos and personal stuff that I do not want anyone to see.

    Read the article

  • Cross-OS data recovery question, USB drive involved

    - by Moshe
    Here's the story: a MacBook had OS X 10.4 and Windows XP dual-booting using rEFIt. Then the Windows partition got corrupted and wouldn't boot, presumably due to a virus. There were sensitive files there; those were successfully copied to a USB drive, and then 10.5 was installed on the hard drive, formatting the drive in the process. The USB drive's contacts cracked, and the data is lost from there unless it can be resoldered. The issue is that there is too much solder there already. So, how can the data in question be recovered? The files were Microsoft Money (not the latest version) files for the Windows version of the program. Right now, only OS X is installed on the MacBook. Is there a Mac-based program that can recover the Windows data, or am I better off trying to resolder the drive? Does anyone know how best to resolder a USB drive more than once, where the first solder is there but detached from the silicon? Also, what format (extension) are Microsoft Money files? In need of help!

    Read the article

  • Recover NTFS data from a ZFS pool that was exposed as an iSCSI target

    - by David
    This was me being stupid, and the data is by no means critical; this is a learning experience first and a time saver second. I set up a 100 GB iSCSI target via the bare-bones instructions in napp-it. It's a volume LU. I then had my Windows 7 machine connect to the iSCSI target, formatted it to NTFS, and tested its performance with some large ISO file transfers. I then unmapped the drive, reconnected to the target, and was forced to format to NTFS again. It was then that I realized the files I had transferred only existed on the iSCSI target. I threw a little fit and then went about my business. When I was cleaning up my experiment I noticed this screen: http://imgur.com/1xlcu.jpg. That is my experimental target tank/iSCSI, and it still has a lot of data in it. Assuming my ISOs are still in this pool, how would I go about recovering them? While writing this I used GetDataBack for NTFS from www.runtime.org, and while it found two previous NTFS partitions, there was no data.
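
    For anyone in the same spot: a quick NTFS reformat only rewrites the filesystem metadata, so the file contents are likely still in the zvol's blocks. One hedged approach (the device path assumes a Solaris-style zvol under napp-it; adjust the pool/volume names) is to image the zvol and run a file carver such as PhotoRec over the image:

        # image the zvol's raw device to a file on a different filesystem
        dd if=/dev/zvol/rdsk/tank/iSCSI of=/backup/iscsi.img bs=1M

    PhotoRec carves by file signature rather than by filesystem structure, and ISO images are among the formats it recognizes, so it has a reasonable shot even where GetDataBack found no intact NTFS data.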

    Read the article

  • Share Firefox/Thunderbird data between W7 and Linux Mint 12 on a dual-boot computer

    - by Albert
    I've just set up my laptop (where I had been running only W7) with a dual boot so I can run Linux Mint 12 as well. I have a "Data" partition (apart from the partitions required by W7 and Linux) where I store pretty much everything that isn't a software installation (music, videos, project files, etc.). I seem to be able to access that NTFS partition totally fine from Mint (like I've always done from W7), which is cool because I can access all that stuff regardless of which OS I'm using. I would like to know if it's possible (and how) to go one step further and share program data between the two OSes. One example would be my Firefox and Thunderbird data. In Firefox, share my bookmarks (and if I could share history, autocomplete and all that stuff, that would be awesome). In Thunderbird, share my mail and configuration, seeing the same inbox, folders, message rules, etc., so that if I receive or send an email from W7 and later switch to Mint, I see that email as if it had been received or sent from Mint, and vice versa. Is this even possible? Or am I asking for too much convenience? If it's possible, any clues on how to set it all up?
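
    For the Firefox half at least, the usual setup is a single profile stored on the shared Data partition, with both installs pointed at it through profiles.ini (Thunderbird uses the same mechanism with its own profiles.ini). A sketch of the Linux-side ~/.mozilla/firefox/profiles.ini, with made-up paths assuming the partition mounts at /media/Data:

        [General]
        StartWithLastProfile=1

        [Profile0]
        Name=shared
        IsRelative=0
        Path=/media/Data/mozilla/firefox-profile

    On the Windows side, edit %APPDATA%\Mozilla\Firefox\profiles.ini to point at the same folder via its drive letter (e.g. D:\mozilla\firefox-profile). Two caveats: make sure Mint mounts the NTFS partition read-write, and never have both OSes' browsers open against the profile at once (not an issue when dual-booting).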

    Read the article

  • Recover data from Dynamic Disk (MBR) bigger than 2TB

    - by Helder
    Here is the situation: a Promise FastTrak TX4310 array with 3 disks (750 GB each) in RAID 5. This comes to around 1500 GB of data. Last week I had the idea of expanding the RAID with an additional 750 GB disk, which would bring the volume to around 2250 GB. I plugged in the disk and used the WebPAM software to do the RAID expansion. However, I didn't count on the MBR 2 TB limit, as I didn't remember that the disk was using MBR instead of GPT, and I didn't check it prior to the expansion. After a couple of days of expansion, today when I got home, the disk in Windows Disk Management showed the message "Invalid disk", and when I try to activate it, it says "The operation is not allowed on the Invalid pack". From what I figured, the logical volume on the RAID expanded and passed that info to the Windows layer, and I ended up with a larger-than-2TB MBR disk. I'm hoping that somehow I can still recover some data from this, and I was wondering if I can "rewrite" the MBR structure back to the 1500 GB partition size, so I can access the partition in Windows. Right now I'm running an "Analyse" with TestDisk, as I hope the program will pick up the old 1500 GB structure and allow me to somehow revert to it. I think that even though the logical drive in the RAID is bigger than 2 TB, I can somehow correct the MBR to show the 1500 GB partition again. I had a similar problem once, and I was able to recover the data using a similar method. What do you guys think? Is it a dead end? Am I totally screwed because of the extra RAID layer that I'm not accounting for? Or is there another way forward with this? Thanks all!
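
    For reference, the ceiling you hit follows from MBR itself: partition entries store the start and length as 32-bit sector counts, so with 512-byte sectors the largest size an entry can describe is

        2^32 sectors x 512 bytes/sector = 2,199,023,255,552 bytes (2 TiB)

    which is why the original ~1500 GB volume was fine, while the expanded ~2250 GB volume cannot be represented by an MBR partition entry at all.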

    Read the article

  • Subset a data.frame by list and apply a function to each part, by rows

    - by aL3xa
    This may seem like a typical plyr problem, but I have something different in mind. Here's the function that I want to optimize (skip the for loop).

        # dummy data
        set.seed(1985)
        lst <- list(a=1:10, b=11:15, c=16:20)
        m <- matrix(round(runif(200, 1, 7)), 10)
        m <- as.data.frame(m)

        dfsub <- function(dt, lst, fun) {
          # check whether dt is a data.frame
          stopifnot(is.data.frame(dt))
          # check if vectors in lst are "whole" / integer
          # vector elements should be column indexes
          is.wholenumber <- function(x, tol = .Machine$double.eps^0.5) abs(x - round(x)) < tol
          # fail if any non-integers in list
          idx <- rapply(lst, is.wholenumber)
          stopifnot(idx)
          # check for list length
          stopifnot(ncol(dt) == length(idx))
          # subset the data
          subs <- list()
          for (i in 1:length(lst)) {
            # apply function on each part, by row
            subs[[i]] <- apply(dt[, lst[[i]]], 1, fun)
          }
          # preserve names
          names(subs) <- names(lst)
          # convert to data.frame
          subs <- as.data.frame(subs)
          # guess what =)
          return(subs)
        }

    And now a short demonstration... actually, I'm about to explain what I primarily intended to do. I wanted to subset a data.frame by vectors gathered in a list object. Since this is part of the code of a function that accompanies data manipulation in psychological research, you can consider m as the results from a personality questionnaire (10 subjects, 20 vars). Vectors in the list hold column indexes that define questionnaire subscales (e.g. personality traits). Each subscale is defined by several items (columns in the data.frame). If we presuppose that the score on each subscale is nothing more than the sum (or some other function) of row values (results on that part of the questionnaire for each subject), you could run:

        > dfsub(m, lst, sum)
            a  b  c
        1  46 20 24
        2  41 24 21
        3  41 13 12
        4  37 14 18
        5  57 18 25
        6  27 18 18
        7  28 17 20
        8  31 18 23
        9  38 14 15
        10 41 14 22

    I took a glance at this function and I must admit that this little loop isn't spoiling the code at all... BUT, if there's an easier/more efficient way of doing this, please let me know!
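
    For the record, a sketch of one loop-free equivalent of the body (the drop = FALSE guards the case where a subscale has a single column, which would otherwise collapse to a vector):

        subs <- as.data.frame(lapply(lst, function(cols) apply(dt[, cols, drop = FALSE], 1, fun)))

    lapply over the named list keeps the subscale names, so the result matches the dfsub output above.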

    Read the article
