Search Results

Search found 59273 results on 2371 pages for 'data protection'.


  • Fast Data Executive Round Table FY14 event kit

    - by JuergenKress
    We are very interested in running joint marketing events with you, our partners! At our SOA Community Workspace (SOA Community membership required) you can find a new Fast Data Executive Round Table FY14 event kit. This event is designed for a senior IT and executive audience, for the purposes of education, awareness, and thought leadership around the subject of big data, and around a specific flavor of big data - Fast Data - that has begun to spark the imagination of many Oracle customers. Fast Data is not new. It is a term initially coined by Ovum's Tony Baer as a way to represent the collection of 'high velocity' big data solutions. For Oracle, the Fast Data campaign in FY13 began as a way to tie a broader set of solutions together (SOA/Business Process Management, Data Integration and Business Analytics) under a set of use cases focused on real-time, high-velocity data. It has helped to give Oracle a leap-frog advantage over many of the niche integration vendors (i.e. Informatica, Pega, Tibco, Software AG, Terracotta) who haven't been able to address these end-to-end use cases, which rely on the combination of filtering, in-memory data processing, correlation, real-time data movement and transformation, end-to-end analytics, and business process management. Only Oracle can address all the dimensions of fast data, and only Oracle can provide a set of engineered solutions to address this space. This event is designed to continue that thought-leadership momentum and raise awareness of what Oracle Fast Data solutions are designed to solve. It is designed to highlight real customer solutions and articulate the business benefits that fast data can address. This is not an event that gets into the esoteric technical standards of Hadoop, NoSQL, and in-memory data grids. This is an event that instead gets into the heart of business problems that big data has left unaddressed, and charts the path for next steps in fast data. Get the Fast Data Executive Round Table FY14 event kit here. We can support such events with: Oracle speakers - contact your partner manager; marketing budget - contact your A&C marketing manager; event location - free use of Oracle Customer Visitor Centers conference rooms; promotion of your event at events.oracle.com: http://tinyurl.com/eventspecialized; e-blast invitations to customers - contact your A&C marketing manager. For additional marketing kits, e.g. for Business Process Management, please visit our SOA Community Workspace. For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article

  • Reiserfs partition data recovery

    - by user991554
    I am running Windows XP as my OS, but I have ReiserFS partitions created by SUSE Linux on the same HDD. I now need to recover data from those Linux partitions. I created an openSUSE live USB and booted from it, but the disk manager shows free space instead of the Linux partitions. I can see the partitions from one Windows application (e.g. Ext2Read), but I am not able to recover data with it, as it is a demo application. Why does the openSUSE live USB show free space instead of the Linux partitions? Is there any other method to recover data from them?

    Read the article

  • WCF data services (OData), query with inheritance limitation?

    - by Mathieu Hétu
    Project: a WCF Data Service using the EF4 CTP5 Code-First approach internally. I configured entities with inheritance (TPH). See my previous question on this topic: Previous question about multiple entities - same table. The mapping works well, and unit tests over EF4 confirm that queries run smoothly. My entities look like this: ContactBase (abstract); Customer (inherits from ContactBase; this entity also has several navigation properties toward other entities); Resource (inherits from ContactBase). I have configured a discriminator, so both Customer and Resource map to the same table. Again, everything works fine from the EF4 point of view (unit tests all green!). However, when exposing this DbContext over WCF Data Services, I get: - only the ContactBases set exposed (the Customers and Resources sets seem hidden; is that by design?) - when I query Customers over OData, this error: Navigation Properties are not supported on derived entity types. Entity Set 'ContactBases' has a instance of type 'CodeFirstNamespace.Customer', which is an derived entity type and has navigation properties. Please remove all the navigation properties from type 'CodeFirstNamespace.Customer'. Stacktrace: at System.Data.Services.Serializers.SyndicationSerializer.WriteObjectProperties(IExpandedResult expanded, Object customObject, ResourceType resourceType, Uri absoluteUri, String relativeUri, SyndicationItem item, DictionaryContent content, EpmSourcePathSegment currentSourceRoot) at System.Data.Services.Serializers.SyndicationSerializer.WriteEntryElement(IExpandedResult expanded, Object element, ResourceType expectedType, Uri absoluteUri, String relativeUri, SyndicationItem target) at System.Data.Services.Serializers.SyndicationSerializer.<DeferredFeedItems>d__b.MoveNext() at System.ServiceModel.Syndication.Atom10FeedFormatter.WriteItems(XmlWriter writer, IEnumerable`1 items, Uri feedBaseUri) at System.ServiceModel.Syndication.Atom10FeedFormatter.WriteFeedTo(XmlWriter writer, SyndicationFeed feed, Boolean isSourceFeed) at System.ServiceModel.Syndication.Atom10FeedFormatter.WriteFeed(XmlWriter writer) at System.ServiceModel.Syndication.Atom10FeedFormatter.WriteTo(XmlWriter writer) at System.Data.Services.Serializers.SyndicationSerializer.WriteTopLevelElements(IExpandedResult expanded, IEnumerator elements, Boolean hasMoved) at System.Data.Services.Serializers.Serializer.WriteRequest(IEnumerator queryResults, Boolean hasMoved) at System.Data.Services.ResponseBodyWriter.Write(Stream stream) Seems like a limitation of WCF Data Services... is it? Not much documentation can be found on the web about WCF Data Services (OData) and inheritance. How can I get past this exception? I need these navigation properties on derived entities, and inheritance seems the only way to map two entities onto the same table with EF4 CTP5... Any thoughts?

    Read the article

  • Data transfer speed to USB storage connected to wifi router very slow

    - by RonakG
    Here is my setup: a Linksys Cisco E3200 wifi router; a MacBook Pro running OS X Lion 10.7.4; a Seagate GoFlex 1TB hard drive connected to the wifi router via the USB port. When I try to transfer data from my MBP to the HDD, the data transfer rate is very low; I'm getting around 3 MB/s write speed. This is very slow compared to the speed I get when the HDD is directly connected to the MBP. The HDD is NTFS formatted, and the router provides access to it via a Samba share, so I connect to the HDD using smb://. What is the limiting factor affecting the data transfer rate?
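    A quick way to narrow this down is to time an identical write against both targets; a rough probe is sketched below (the mount path is an assumption, and the fsync keeps the page cache from inflating the number).

    ```python
    # Rough write-speed probe; point PATH at the SMB mount, then at the
    # directly attached drive, and compare (the path below is an assumption).
    import os
    import time

    PATH = '/Volumes/GoFlex/speedtest.bin'
    CHUNK = os.urandom(1024 * 1024)      # 1 MiB of incompressible data

    start = time.time()
    with open(PATH, 'wb') as f:
        for _ in range(100):             # write 100 MiB total
            f.write(CHUNK)
        f.flush()
        os.fsync(f.fileno())             # force the data out of the page cache
    elapsed = time.time() - start
    os.remove(PATH)
    print(f'{100 / elapsed:.1f} MiB/s')
    ```

    Running the same probe over the SMB mount and over direct USB separates the drive's own speed from the router's overhead; on consumer routers, the embedded CPU running Samba (plus NTFS handling) is typically the bottleneck.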

    Read the article

  • Data recovery after user profile corruption on Windows XP

    - by m68z8mi
    I'm away from home for college, and my computer back home has been having some issues. My dad took it to a computer store, and apparently the user profiles somehow got corrupted, so they're locked out of the computer. This is a Windows XP box, but I changed the default administrator account password, so that backdoor isn't a possibility. Now, that computer's HDD has a whole bunch of data on it which my dad would hate to lose, so I suggested that they take the HDD out, plug it into some other computer, and just copy all the data off that way (keeping in mind that the data itself wasn't encrypted). However, the computer store people said that wouldn't be possible unless they had the administrator account password (which I can't remember for the life of me), and that they'd either have to reformat and reinstall Windows, or else use some complicated sounding recovery process costing a decent amount of money. That sounds like complete BS to me, but I'm not 100% sure about it, so I thought I'd get some more opinions. Could someone more knowledgeable about this stuff suggest a good course of action to take?

    Read the article

  • Serial number generation without user data

    - by Sphynx
    This is a follow-up to this question. The accepted answer is generally sufficient, but requires the user to supply personal information (e.g. a name) for generating the key. I'm wondering if it's possible to generate different keys based on a common seed, in such a way that the program can validate that a key belongs to a particular product, but without making this process obvious to the end user. I mean, it could be a hash of the product ID plus some random sequence of characters, but that would allow users to guess potential new keys. There should be some sort of algorithm that is difficult to guess.
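    One common scheme that fits this description, sketched under stated assumptions (the secret, the key layout, and the truncation length are all illustrative): derive each key from the product ID plus a random sequence using an HMAC, so keys validate offline but cannot be guessed without the secret.

    ```python
    # Sketch: HMAC-based serial keys from a common secret (all names and
    # formats here are illustrative assumptions, not a known product scheme).
    import hashlib
    import hmac
    import secrets

    SECRET = b'vendor-secret-kept-out-of-source-control'

    def make_key(product_id: str) -> str:
        seq = secrets.token_hex(4)                        # random per-key part
        mac = hmac.new(SECRET, f'{product_id}:{seq}'.encode(), hashlib.sha256)
        return f'{product_id}-{seq}-{mac.hexdigest()[:12]}'

    def validate(product_id: str, key: str) -> bool:
        try:
            pid, seq, tag = key.rsplit('-', 2)
        except ValueError:
            return False
        mac = hmac.new(SECRET, f'{pid}:{seq}'.encode(), hashlib.sha256)
        return pid == product_id and hmac.compare_digest(tag, mac.hexdigest()[:12])

    print(validate('PROD42', make_key('PROD42')))  # True
    ```

    The usual caveat: the secret ships inside the program, so a determined cracker can extract it and mint keys; signing keys with a private key and embedding only the public key in the program closes that hole, at the cost of longer serials.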

    Read the article

  • HAProxy overload protection

    - by user2050516
    Using HAProxy, would it be possible to configure overload protection that limits the rate of requests sent to the backing HTTP server(s) to a given rate (e.g. 100 requests per second)? If the threshold is exceeded, requests should be answered with a default response. I am interested in requests per second, not connections per second, as a connection can carry many requests. And yes, improving the servers is not an option here. If this is possible, a configuration example to achieve it would be excellent. Thank you in advance.
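    A sketch of the usual stick-table approach, assuming HAProxy 1.5+; note that this well-known variant tracks the request rate per source IP, so a truly global cap would need a table keyed on a constant instead of src, and the numbers below are illustrative.

    ```
    frontend fe_web
        bind *:80
        # 10s sliding window of HTTP request rates, keyed by client IP
        stick-table type ip size 100k expire 10s store http_req_rate(10s)
        http-request track-sc0 src
        use_backend be_overload if { sc_http_req_rate(0) gt 100 }
        default_backend be_servers

    backend be_servers
        server app1 10.0.0.10:8080

    backend be_overload
        # no servers here: requests get a 503, customized with a canned page
        errorfile 503 /etc/haproxy/errors/overload.http
    ```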

    Read the article

  • What do you call the concept of dynamic data definition?

    - by DJTripleThreat
    Maybe this is simpler and more straightforward than what I'm thinking, but I can't seem to find this concept on Google anywhere. The concept is this: you have a table in a database, and the table has a specified number of columns. However, previous clients have asked me to also support a set of dynamic, user-defined columns that can be added on the fly. What is this concept called, and is it considered a design pattern?
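    The structure described is usually called the Entity-Attribute-Value (EAV) model (sometimes "open schema" or "dynamic columns"), and it is a recognized, if contested, design pattern. A minimal sqlite3 sketch of the side-table flavor, with illustrative names:

    ```python
    # Minimal EAV sketch: fixed columns live in `contact`, user-defined
    # "columns" live as rows in `contact_attr` (all names are illustrative).
    import sqlite3

    db = sqlite3.connect(':memory:')
    db.executescript('''
        CREATE TABLE contact (
            id   INTEGER PRIMARY KEY,
            name TEXT NOT NULL
        );
        CREATE TABLE contact_attr (
            contact_id INTEGER REFERENCES contact(id),
            attr       TEXT NOT NULL,      -- the "dynamic column" name
            value      TEXT,
            PRIMARY KEY (contact_id, attr)
        );
    ''')
    db.execute("INSERT INTO contact VALUES (1, 'Acme')")
    db.execute("INSERT INTO contact_attr VALUES (1, 'fax', '555-0100')")  # added on the fly
    row = db.execute("""SELECT c.name, a.value FROM contact c
                        JOIN contact_attr a ON a.contact_id = c.id
                        WHERE a.attr = 'fax'""").fetchone()
    print(row)  # ('Acme', '555-0100')
    ```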

    Read the article

  • The W3C publishes the proposed "Do Not Track" standard, laying the groundwork for protection against advertising tracking

    The W3C publishes the proposed "Do Not Track" standard, and in two drafts lays the groundwork for protection against advertising tracking. Update of November 15, 2011, by Idelways. The W3C's "Tracking Protection" working group, dedicated to standardizing a solution for protection against advertising tracking, has reached its first objectives. Created on the initiative of Microsoft's and Mozilla's "Do Not Track" efforts, the consortium's group has just published two draft specifications that browser vendors and site creators will eventually have to implement in order to make the…
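    For context, the mechanism the drafts standardize is a single HTTP request header, DNT: 1, sent by the browser; a minimal sketch of a server honoring it (plain WSGI, everything here illustrative):

    ```python
    # Minimal sketch: check the Do Not Track request header (exposed to WSGI
    # apps as HTTP_DNT) before enabling any tracking code.
    def application(environ, start_response):
        do_not_track = environ.get('HTTP_DNT') == '1'
        body = b'tracking disabled\n' if do_not_track else b'tracking enabled\n'
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [body]

    if __name__ == '__main__':
        from wsgiref.simple_server import make_server
        make_server('', 8000, application).serve_forever()
    ```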

    Read the article

  • Recomposing data structure in Excel

    - by Velletti
    I've got a sheet of 35k rows of data that I want to reshape so that all the people within a specific GroupID end up in separate columns, one row per group. I suppose I should add a counter for each row within a specific GroupID? Also, I suppose these kinds of problems are a lot more comfortable to solve in a database? Since I often have this kind of data, I need to become much quicker at solving it than I am now.
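    For what it's worth, this reshape is only a few lines in pandas; a sketch assuming the sheet has columns named GroupID and Person (the real column names aren't shown in the question):

    ```python
    # Sketch: counter within each group, then pivot group members into columns
    # (file and column names are assumptions).
    import pandas as pd

    df = pd.read_excel('groups.xlsx')                  # the 35k-row sheet
    df['n'] = df.groupby('GroupID').cumcount() + 1     # counter within each GroupID
    wide = df.pivot(index='GroupID', columns='n', values='Person')
    wide.columns = [f'Person{i}' for i in wide.columns]
    wide.to_excel('groups_wide.xlsx')
    ```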

    Read the article

  • Overload Protection

    - by Tyron
    Is there a simple way to redirect a visitor (via .htaccess or a PHP script) to a static page when the server is overloaded by too many requests? It doesn't have to be protection against huge bursts of requests or DoS attacks. I think our server would be protected enough if we could prevent the standard website from being shown and instead show a single file, "overloaded.html". Also, how could I get a measure of server overload in a typical managed-server environment (i.e. non-root access to a Linux server)?
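    Without root access, the 1-minute load average is the usual overload signal; PHP exposes it as sys_getloadavg(). The same check sketched in Python (the threshold and filename are assumptions):

    ```python
    # Sketch of the check itself: compare the 1-minute load average against a
    # threshold and short-circuit to the static page (threshold is arbitrary).
    import os

    LOAD_THRESHOLD = 10.0                    # tune to the machine's core count

    def overloaded() -> bool:
        one_min, _, _ = os.getloadavg()      # readable without root on Linux
        return one_min > LOAD_THRESHOLD

    if overloaded():
        with open('overloaded.html') as f:
            page = f.read()
        # a real handler would emit a 503 status plus Retry-After here
        print(page)
    ```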

    Read the article

  • Can anybody recommend a good data recovery strategy?

    - by Jurassic_C
    So let's say you've failed at preventing a drive failure, and you've also failed to make a backup of said drive. Push has come to shove, and now you need a way to recover your precious data. Has anybody out there run into this situation? If so, could you please provide any suggestions on how to recover the data, based on your experiences? For example, have you used any data-recovery services that you could either recommend, or that you would definitely avoid if you had a do-over? Thanks in advance

    Read the article

  • Promote document data to meta-data

    - by antony.trupe
    Is there a way to "promote" information in a document (Word, Visio, etc.) to "metadata" that can be auto-magically represented in a SharePoint list? I want to be able to create metrics on information in documents without duplicating the document's data in a column of the list.

    Read the article

  • XY Diagram/Data Browser for mid-sized CSV files

    - by Johannes Rudolph
    I have a set of CSV files with about 100k records in them. The records need to be visualized in an x-y diagram. Because of the huge amount of data, Excel is not gonna cut it. Specifically, I'm looking for: seamless zooming in and out of the data; navigation on both axes; a "trace mode" where I can trace the line with the cursor and the value under the cursor is displayed as text. Does anyone know a tool capable of this?
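    If a short script counts as a "tool", matplotlib handles 100k points comfortably and covers the first two requirements out of the box; a sketch, assuming a file data.csv with columns x and y:

    ```python
    # Sketch: interactive x-y view of a ~100k-row CSV
    # (the file name and column names are assumptions).
    import pandas as pd
    import matplotlib.pyplot as plt

    df = pd.read_csv('data.csv')
    fig, ax = plt.subplots()
    ax.plot(df['x'], df['y'], linewidth=0.5)
    ax.set_xlabel('x')
    ax.set_ylabel('y')
    # The default toolbar gives pan/zoom on both axes, and the status bar
    # continuously shows the data coordinates under the cursor.
    plt.show()
    ```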

    Read the article

  • Data recovery on an Iomega portable drive.

    - by Kaji
    For Christmas, my little brother got an Iomega 500GB portable hard drive. It'd been working well, but last week it flat died, and the company's trying to shirk it, claiming it's not under warranty and saying it'll cost at least $900 to recover the data from the drive. He's still trying to fight the warranty thing, but wants to know, should it boil down to it, what other options exist for recovering the data from the drive. (in before "BACK UP!")

    Read the article

  • Data Mining Software

    - by Mark
    I want to harvest some data, like this: http://www.newcardealers.ca/en/Dealers/List-A.aspx, and insert the name, address, phone number, email, etc. into a database. Is there some software I can use that will take a webpage, let me specify some regexes or something, and then spit out all the matched data in CSV or some other format easily insertable into a DB?
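    Absent a packaged tool, the whole pipeline is a short script; a sketch of the fetch-regex-CSV idea (the URL is from the question, but the pattern is a placeholder assumption that must be adapted to the page's actual markup):

    ```python
    # Sketch: fetch a page, pull fields out with a regex, write CSV.
    # The regex below is a placeholder, not the real page structure.
    import csv
    import re
    import urllib.request

    URL = 'http://www.newcardealers.ca/en/Dealers/List-A.aspx'
    html = urllib.request.urlopen(URL).read().decode('utf-8', errors='replace')

    # placeholder pattern: capture a dealer name and address pair
    pattern = re.compile(r'<h3>(?P<name>.*?)</h3>\s*<p>(?P<address>.*?)</p>', re.S)

    with open('dealers.csv', 'w', newline='') as f:
        out = csv.writer(f)
        out.writerow(['name', 'address'])
        for m in pattern.finditer(html):
            out.writerow([m.group('name').strip(), m.group('address').strip()])
    ```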

    Read the article

  • DPM - Monitoring shows Failed, Protection shows OK and the latest recovery point is old. How do I interpret that?

    - by LosManos
    How do I read the DPM info in this case? Monitoring says Failed, but Protection shows OK while having a latest recovery point from last year. Under the Monitoring tab I have a Failed entry, with columns Source | Computer | Protection group | Start time reading Computer\System Protection | MyServerName | Recovery point | 2014-06-09 19:00:00, which shows me that something happened last night. But under the Protection tab everything is green. There I have Protection group member | Protection status: for protection group ..name.., Computer: MyServerName, Computer\System protection, Bare metal recovery, OK ... Latest recovery point: 2013-12-12 06:32:54. My guess is that the backup failed once last night but succeeded later; DPM then found that there hasn't been any change since sometime last year, leaves it be, and flags OK.

    Read the article

  • Granting root permission to www-data

    - by user2478348
    I have a Perl script, dhcpmanip.pl, which contains this line: system "hostapd /etc/hostapd-1.0/hostapd/hostapd.conf" It's a command to start hostapd, and I get this error: Insecure $ENV{PATH} while running setuid at /var/www/cgi-bin/dhcpmanip.pl line 46 After searching on the net, I realised that I should grant root permission to the www-data user (the Apache user), so I tried to modify the file /etc/sudoers by inserting this line: www-data ALL=NOPASSWD: /var/www/cgi-bin/dhcpmanip.pl but it still isn't working... does anyone have any idea how to solve this problem? Thanks a lot

    Read the article

  • Recovered folders from camera show as JPEG files and can't be viewed

    - by user642111
    I have recovered a CCTV camera hard disk after a crash and have managed to get most of the data using EasyRecovery Pro. The problem is that all the data I have recovered appears as files like File09.JPG with an image icon in Windows XP, but the files can't be viewed in any JPEG viewer software. I suspect that the .JPG files are actually folders, but I can't force Windows XP to change the file type. Very odd. Any help is appreciated. Thanks, Hoo
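    One quick way to test that suspicion: real JPEG files begin with the signature bytes FF D8 FF, so checking the first bytes of each recovered file shows whether the recovery produced actual images at all; a small sketch (the directory name is an assumption):

    ```python
    # Check whether recovered .JPG files carry a real JPEG signature (FF D8 FF).
    import pathlib

    for path in pathlib.Path('recovered').glob('*.JPG'):
        with open(path, 'rb') as f:
            magic = f.read(3)
        if magic == b'\xff\xd8\xff':
            print(path.name, 'looks like a real JPEG')
        else:
            print(path.name, f'is not a JPEG (starts with {magic.hex()})')
    ```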

    Read the article

  • Recover RAID 5 data after creating a new array instead of re-using the old one

    - by Brigadieren
    Folks please help - I am a newb with a major headache at hand (perfect storm situation). I have three 1 TB HDDs on my Ubuntu 11.04 box, configured as software RAID 5. The data had been copied weekly onto another, separate hard drive, until that drive completely failed and was thrown away. A few days back we had a power outage, and after rebooting, my box wouldn't mount the RAID. In my infinite wisdom I entered the mdadm --create -f... command instead of mdadm --assemble, and didn't notice the travesty that I had done until after. It started the array degraded and proceeded with building and syncing it, which took ~10 hours. After I was back I saw that the array is successfully up and running, but the RAID is not; I mean the individual drives are partitioned (partition type fd), but the md0 device is not. Realizing in horror what I had done, I am trying to find some solutions. I just pray that --create didn't overwrite the entire content of the hard drives. Could someone PLEASE help me out with this - the data on the drives is very important and unique: ~10 years of photos, docs, etc. Is it possible that specifying the participating hard drives in the wrong order made mdadm overwrite them? When I do mdadm --examine --scan I get something like ARRAY /dev/md/0 metadata=1.2 UUID=f1b4084a:720b5712:6d03b9e9:43afe51b name=<hostname>:0 Interestingly enough, the name used to be 'raid' and not the host name with :0 appended. Here are the 'sanitized' config entries: DEVICE /dev/sdf1 /dev/sde1 /dev/sdd1 CREATE owner=root group=disk mode=0660 auto=yes HOMEHOST <system> MAILADDR root ARRAY /dev/md0 metadata=1.2 name=tanserv:0 UUID=f1b4084a:720b5712:6d03b9e9:43afe51b Here is the output from mdstat: cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md0 : active raid5 sdd1[0] sdf1[3] sde1[1] 1953517568 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU] unused devices: <none> fdisk shows the following: fdisk -l Disk /dev/sda: 80.0 GB, 80026361856 bytes 255 heads, 63 sectors/track, 9729 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000bf62e Device Boot Start End Blocks Id System /dev/sda1 * 1 9443 75846656 83 Linux /dev/sda2 9443 9730 2301953 5 Extended /dev/sda5 9443 9730 2301952 82 Linux swap / Solaris Disk /dev/sdb: 750.2 GB, 750156374016 bytes 255 heads, 63 sectors/track, 91201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000de8dd Device Boot Start End Blocks Id System /dev/sdb1 1 91201 732572001 8e Linux LVM Disk /dev/sdc: 500.1 GB, 500107862016 bytes 255 heads, 63 sectors/track, 60801 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00056a17 Device Boot Start End Blocks Id System /dev/sdc1 1 60801 488384001 8e Linux LVM Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000ca948 Device Boot Start End Blocks Id System /dev/sdd1 1 121601 976760001 fd Linux raid autodetect Disk /dev/dm-0: 1250.3 GB, 1250254913536 bytes 255 heads, 63 sectors/track, 152001 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/dm-0 doesn't contain a valid partition table Disk /dev/sde: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x93a66687 Device Boot Start End Blocks Id System /dev/sde1 1 121601 976760001 fd Linux raid autodetect Disk /dev/sdf: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xe6edc059 Device Boot Start End Blocks Id System /dev/sdf1 1 121601 976760001 fd Linux raid autodetect Disk /dev/md0: 2000.4 GB, 2000401989632 bytes 2 heads, 4 sectors/track, 488379392 cylinders Units = cylinders of 8 * 512 = 4096 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 524288 bytes / 1048576 bytes Disk identifier: 0x00000000 Disk /dev/md0 doesn't contain a valid partition table Per suggestions I did clean up the superblocks and re-created the array with the --assume-clean option, but with no luck at all. Is there any tool that will help me revive at least some of the data? Can someone tell me what mdadm --create does when it syncs, and how it destroys the data, so I can write a tool to undo whatever was done? After re-creating the RAID I ran fsck.ext4 /dev/md0, and here is the output: root@tanserv:/etc/mdadm# fsck.ext4 /dev/md0 e2fsck 1.41.14 (22-Dec-2010) fsck.ext4: Superblock invalid, trying backup blocks... fsck.ext4: Bad magic number in super-block while trying to open /dev/md0 The superblock could not be read or does not describe a correct ext2 filesystem. If the device is valid and it really contains an ext2 filesystem (and not swap or ufs or something else), then the superblock is corrupt, and you might try running e2fsck with an alternate superblock: e2fsck -b 8193 Per Shane's suggestion I tried root@tanserv:/home/mushegh# mkfs.ext4 -n /dev/md0 mke2fs 1.41.14 (22-Dec-2010) Filesystem label= OS type: Linux Block size=4096 (log=2) Fragment size=4096 (log=2) Stride=128 blocks, Stripe width=256 blocks 122101760 inodes, 488379392 blocks 24418969 blocks (5.00%) reserved for the super user First data block=0 Maximum filesystem blocks=0 14905 block groups 32768 blocks per group, 32768 fragments per group 8192 inodes per group Superblock backups stored on blocks: 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 102400000, 214990848 and ran fsck.ext4 with every backup block, but all returned the following: root@tanserv:/home/mushegh# fsck.ext4 -b 214990848 /dev/md0 e2fsck 1.41.14 (22-Dec-2010) fsck.ext4: Invalid argument while trying to open /dev/md0 The superblock could not be read or does not describe a correct ext2 filesystem. If the device is valid and it really contains an ext2 filesystem (and not swap or ufs or something else), then the superblock is corrupt, and you might try running e2fsck with an alternate superblock: e2fsck -b 8193 <device> Any suggestions? Regards!

    Read the article

  • SQL Developer Data Modeler v3.3 Early Adopter: Collaborative Design via Excel?

    - by thatjeffsmith
    As you may have heard last week, we have a new version of Oracle SQL Developer Data Modeler now available as an Early Adopter release. Version 3.3 has quite a few new features and I’ll be previewing them here. Today’s topic is our new Excel integration. It builds off of last week’s lesson: Search, so you may want to go read that first. They say it takes a village to raise a child. I say it takes a team to build a data model. You have your techie folks, your business folks, your in-betweeners, and your database geeks. Who gets to define how customers are represented and stored in your database? That data lives forever, so you better get it right from the beginning, or you’ll be living in a hacker’s paradise for years to come. Lots of good rantings, ravings, and advice on this topic in general on Karen Lopez’s (@datachick) blog. But let’s say you are the primary modeler on a project. You dutifully interview the business folks for their requirements. You sit down and start to model and think you’re pretty close. Now you need someone to confirm your assumptions and provide some feedback. Do you send your model over? Take a screenshot and blow it up on a whiteboard? Export to HTML and let them take a magic marker to their monitors? Or maybe you bite the bullet and install your modeling software on their desktops and take the hours or days required to train them up on how to use the the tool. Wouldn’t it be nice if they could just mark up their corrections in Excel and let you suck the updates back in? This is what we have started to build in Oracle SQL Developer Data Modeler. Let’s say you have a new table called ‘UT_STARTUPS.’ It looks a little something like this: A table in Oracle SQL Developer Data Modeler What I would like to do is have my team or co-worker review how I have defined those columns. Perhaps TIMESTAMP is overkill or maybe the column names themselves aren’t up to snuff. What I am going to do is now search for all the columns in my table, then export that to Excel. So do a search for UT_STARTUPS. Search, filter, then Report With the filter set to ‘Columns,’ if I do a report I’ll be only getting the columns that are resolving to my search term. So as long as my table name is unique in the model, I should get what I’m looking for. Here’s what I see when I click on the Report button: XLS or XLSX, either format is just fine I want to decide how the Column data is exported to Excel though, so I’m going to create a report template that I can use going forward. So click the ‘Manage’ button and setup a new template. I’m going to call mine ‘CollaborativeDevelopment.’ The templates allow me to define what properties are included in the reports. Once this is set, I’ll have the XLS file generated, and get to work Now let the Excel junkies do their stuff Note that not ALL of the report properties are update-able (yes, I made up a new word there) via Excel. We’ll have the full list of properties documented going forward, but in my Excel sheet, note that I can’t change the table name or the data types for the columns. I’m going to update some column names and supply ‘nice’ comments so the database users know what’s what. Here’s my input for the designer/architect/database dude: Be kind, please rew…use comments. Save the file, email it back to your modeler. Update the model from Excel That’s right, it’s a right mouse click from your model in the tree If everything goes right, you’ll see a nice confirmation message: It’s alive! 
Another to-do item on tap – making this dialog more informative. We’ll be showing exactly what in your model was updated from Excel. Let’s take another look at the model now Voila! Why are we doing this again? The goal is to reduce the number of round-trips from the modeler and the business process owner. One is used to working with Excel – why not allow them to mark up their changes in the tool they already know? This is an early adopter release and I anticipate this feature getting a good bit of tuning up before we release. Why don’t you download 3.3, give it a whirl, and let us know what you think?
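    As an aside, the exported report is an ordinary workbook, so the reviewer's half of the round-trip can itself be scripted; a sketch with openpyxl, assuming the export has 'Name' and 'Comments' columns (the real layout depends on the report template defined above, and the file name is an assumption):

    ```python
    # Sketch: bulk-edit the exported column report before sending it back
    # ('Name'/'Comments' headers and the file name are assumptions).
    from openpyxl import load_workbook

    wb = load_workbook('UT_STARTUPS_columns.xlsx')
    ws = wb.active
    headers = {cell.value: idx for idx, cell in enumerate(ws[1], start=1)}

    for row in ws.iter_rows(min_row=2):
        name = row[headers['Name'] - 1].value
        comment_cell = row[headers['Comments'] - 1]
        if not comment_cell.value:
            comment_cell.value = f'TODO: document column {name}'

    wb.save('UT_STARTUPS_columns_reviewed.xlsx')
    ```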

    Read the article

  • Webmin / Virtualmin running PHP as www-data, which is locked out of reading .htaccess and writing

    - by Kirill
    I've asked this on the Virtualmin forums, but haven't had any help from there. Recently, "something" happened and it seems that the Apache service has gone a bit weird. What it does: it runs all Apache traffic as www-data and sometimes spawns the php5-cgi process as www-data. This is a problem because all the domain users own their directories, and the default permissions don't let www-data write to these folders (file uploads are dead) or read .htaccess (permalinks are broken in WordPress). I've googled this for about a week straight now, tried pretty much everything I could find, and achieved nothing. The only thing that I think might actually be the cause of all this is this page: http:// - i.imgur.com/NYW3x.png (got shut down by the spam filter) So I figured if I set it to "default", this might magically start working again, but all it does is "crash" Apache (all websites time out). I figure it's something to do with the "mpm" module or something, but I can't find anything relevant in the settings to modify for it to work. Can someone please point me in the right direction? System info: Webmin version 1.580 Kernel and CPU: Linux 2.6.35.4-rscloud on x86_64 Virtualmin version 3.90.gpl GPL Ubuntu 10.04 LTS (Lucid) A couple of screenshots of top: http://i.imgur.com/U2DTK.png http://i.imgur.com/sNPKs.png

    Read the article

  • Should I upgrade to Symantec Endpoint Protection? [closed]

    - by Alex C.
    I'm the IT manager at an animal shelter in Upstate New York. We have a Windows network with about 50 desktops running Windows XP Pro. We used to use CA eTrust Antivirus, but that product didn't work too well (too many infections got through). About six months ago, we switched to using Symantec Antivirus Corporate Edition ver. 10.1.8.8000. If anything, the Symantec product is even worse. The last six weeks in particular have been very bad -- we've had about seven or eight PCs get hit with those malware infections that masquerade as antivirus software. In most of those cases, Symantec didn't even flag the malware at all. So... what gives with the Symantec Antivirus? As far as I can tell, it's installed correctly and downloading updated definitions nightly. I can upgrade to Symantec Endpoint Protection for $220 (we get non-profit pricing), but I don't want to do it if it's not going to be significantly better. Any advice? Should I switch to something else entirely? Thanks!

    Read the article

  • Hard drive failed, suspected filesystem corruption, still cannot salvage any data from harddrive

    - by Hippy-Head
    Firstly, I am terribly sorry if this is a duplicate, but I couldn't find a similar issue to mine, so here goes. I have a 1 TB HDD, bought around 8 months ago, used as a backup hard drive. I had not used the drive for a period of time, and when I tried to get back to some files on it, it was completely wiped, just like that. At first it would not boot; I tried everything from command-line chkdsk to filesystem-recovery software to rebuild it. After a few attempts I managed to initialize it, which at the time felt like an achievement. The problems started when I tried to recover the data inside. I have used A LOT of free and commercial software on both Mac and Windows, with the help of cmd or Terminal commands, but no data of any kind was recovered, even after leaving it to scan thoroughly for around 9-10 hours all night, sometimes longer, with no results at all. I am somewhat desperate; I am usually good at retrieving data from corrupt hard drives, but not in this case. Call me paranoid, but I do not want to give it to someone to fix for me, as I have a lot of photos and personal stuff on it that I do not want anyone to see.

    Read the article

  • Disable write-protection on Micro SD

    - by Tim
    My task today is to open up 700 brand-new micro SD cards and copy some files to them. As I get going on this task, I am finding that some of the micro SD cards tell me "sorry, this drive is write protected". To copy the files I am using a standard SD to micro SD adapter and a USB SD card reader/writer. I have ensured that the lock switch is set to OFF on all of my adapters. As soon as I get a micro SD that tells me it is write protected, I can use the same adapter with another micro SD and it works fine, so I know the problem is not with my adapters. My question is: how can I disable the write protection on a micro SD card? This eHow article seems to indicate that there is also a physical switch on micro SD cards. However, I have personally never seen a micro SD with a physical switch, and none of the ones I am using today have said switch. Since these cards are brand new and thus empty, are the ones telling me they are write protected simply useless? Could this be caused by some sort of defect in the cards?

    Read the article
