Search Results

Search found 2844 results on 114 pages for 'iterative conversion'.


  • Webcast Q&A: ING on How to Scale Role Management and Compliance

    - by Tanu Sood
    Thanks to all who attended the live webcast we hosted on ING: Scaling Role Management and Access Certifications to Thousands of Applications on Wednesday, April 11th. For those of you who couldn't join us, the webcast replay is now available. Many thanks to our guest speaker, Mark Robison, Enterprise Architect at ING, for walking us through ING's drivers and rationale for the platform approach, the phased implementation strategy, results and metrics, roadmap, and recommendations. We greatly appreciate the insight he shared on the deployment synergies between Oracle Identity Manager (OIM) and Oracle Identity Analytics (OIA) to enforce streamlined user and role management and scalable compliance. Mark was also kind enough to walk us through the specific solution features that helped ING manage the problem of role explosion and implement closed-loop remediation. Our host speaker, Neil Gandhi, Principal Product Manager, Oracle, rounded off the presentation by discussing common use cases and deployment scenarios we see organizations implement to automate user/identity administration and enforce closed-loop, scalable compliance. Neil also called out the specific features in Oracle Identity Analytics 11gR1 that expedite and streamline compliance processes such as access certifications. While we tackled a few questions during the webcast, we have captured the responses to those we weren't able to get to here; our sincere thanks to Mark Robison for taking the time to respond to questions specific to ING's implementation and strategy.

    Q. Did you include business-friendly entitlement descriptions, or is the business seeing application descriptors?
    A. We include very business-friendly descriptions. The OIA tool has the facility to allow this.

    Q. When doing attestation on job change, who is in the workflow to review and confirm that the employee should continue to have access? Is that a best practice?
    A. The new and old managers are in the workflow. The tool can check for any Separation of Duties (SOD) violations with both having similar access. It may not be a best practice, but it is a reality of doing your old and new job for a transition period on a transfer.

    Q. What versions of OIM and OIA are being used at ING?
    A. OIM 11gR1 and OIA 11gR1; the very latest versions available.

    Q. Are you using an entitlements/role catalog?
    A. Yes. We use both roles and entitlements.

    Q. What specific unexpected benefits did the Identity Warehouse provide ING?
    A. The most unanticipated was helping Legal Hold identify user IDs in the various applications. Other benefits included providing a one-stop shop for all aggregated ID information.

    Q. How fine-grained are your applications and entitlements? Did OIA and OIM support that level of granularity?
    A. We have some very fine-grained entitlements, but we roll these up into approved roles to allow for easier management. For managing very fine-grained entitlements, Oracle offers Oracle Entitlement Server. We currently do not own this software but are considering it.

    Q. Do you allow any individual access, or is everything truly role-based?
    A. We are a hybrid environment with roles and individual positive and negative entitlements.

    Q. Did you use an Agile methodology like Scrum to deliver functionality during your project?
    A. We started with waterfall, but used an agile approach to provide benefits after the initial implementation.

    Q. How did you handle rolling out the standard ID format to existing users?
    A. We just used the standard IDs for new users. We have not taken on a project to address the existing nonstandard IDs.

    Q. To avoid role explosion, how do you deal with apps that require more than a couple of entitlement types? For example, an app may have different levels of access, and it may need to know the user's country/state to associate them with particular customers.
    A. We focus on the functional user and craft the role around their daily job requirements. The role captures the required application entitlements. To keep role explosion down, we use role mining in OIA and also meet with and interview the business. It is an iterative process to reach role consensus.

    Q. Great presentation! How many rounds of certifications has ING performed so far?
    A. Around 7 quarters, plus constant certifications on transfer.

    Q. Did you have executive support from the top down?
    A. Yes. The executive support was key to our success.

    Q. For your cloud instance, are you using OIA or OIM as SaaS?
    A. No. We are just provisioning and deprovisioning to various cloud providers (ServiceNow is an example).

    Q. How do you ensure a role owner does not get more privileges than are intended and thus violate another role? E.g., a DBA role should not get the right to run something as root, as this would affect the root role.
    A. We have SOD checks. Also, all roles are initially approved by external audit, and the role owners have to certify the roles and any changes.

    Q. What is your ratio of employees to roles?
    A. We are still in the process of going through our various lines of business, so I do not have a final ratio. From what we have seen, the ratio varies greatly depending on the line of business and the diversity of job functions. For standardized lines of business such as call centers, the ratio is very good: we can have a single role that covers many employees. For specialized lines of business like treasury, it can be one or two people per role.

    Q. Is ING using the Oracle On Demand service?
    A. No.

    Q. Do you have to implement or migrate to OIM in order to get the Identity Warehouse, or can OIA provide the Identity Warehouse as well if you haven't reached OIM yet?
    A. No, OIM deployment is not required to implement OIA's Identity Warehouse, but as you heard during the webcast, there are tremendous deployment synergies in deploying both OIA and OIM together.

    Q. When is the Security Governor product coming out?
    A. Oracle Security Governor for Healthcare is available today.

    Hope you enjoyed the webcast, and we look forward to having you join us for the next webcast in the Customers Talk: Identity as a Platform series: Toyota: Putting Customers First – Identity Platform as a Business Enabler, Wednesday, May 16th at 10 am PST / 1 pm EST. Register today. You can also register for a live event in a city near you, where Aberdeen's Derek Brink will discuss the survey results from the recently published report "Analyzing Platform vs. Point Solution Approach in Identity". And you can do a quick (and free) online assessment of your identity programs by benchmarking them against the 160 organizations surveyed in the Aberdeen report, compliments of Oracle. Here's the slide deck from our ING webcast.

    Read the article

  • How do I install and use the cli53 tools on Windows?

    - by pavlos
    I'm trying to find the simplest way to import a large number of BIND zone files into Route 53. I've had a quick look at the AWS CLI and AWS Tools for Windows PowerShell, but they don't seem to include a zone file import option the way the AWS Route 53 GUI does. The cli53 utility, on the other hand, does, but it is written in Python and appears to have a series of prerequisites which I'm having trouble working out on Windows. I can find plenty of examples of setting it up under Linux, but only one reference to a PowerShell example here, and it doesn't explain how to install cli53 in the first place. The other option I'm exploring is to use the BIND to Amazon Route 53 Conversion Tool perl script to first convert the zone files to the Route 53 CreateHostedZoneRequest XML format, and then use the AWS New-R53HostedZone PowerShell cmdlet to import the zones. After the zones have been imported, I'll be looking at running a script to validate that what has been created in Route 53 matches the existing nameserver, prior to updating each domain's nameserver records - I was planning on whipping something up using the new PS 4.0 Resolve-DnsName cmdlet, but let me know if you have any better suggestions. Any assistance would be greatly appreciated - thanks. (By the way, I had more reference links in my post, but ServerFault won't allow me to post more than 2 links as a new member; for the same reason I also can't comment on Vasili's example in the other linked thread.)
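
    For reference, a minimal sketch of the cli53 route on Windows - a sketch under assumptions, not a tested recipe: it assumes Python 2.7 and pip are installed and on PATH, that the Python cli53's create/import subcommands behave as its README describes (double-check with cli53 import --help), and that the hostnames and domain below are placeholders. The Resolve-DnsName lines at the end are the PowerShell 4.0 validation idea from the question.

        # from a Windows prompt, install the Python-based cli53 (pulls in boto)
        pip install cli53

        # boto-based tools read credentials from the environment variables
        # AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY; set those first

        # create the hosted zone, then import a BIND zone file into it
        cli53 create example.com
        cli53 import example.com --file example.com.zone

        # spot-check records against the old NS and Route 53 (PowerShell 4.0)
        Resolve-DnsName www.example.com -Server old-ns.example.com
        Resolve-DnsName www.example.com -Server ns-123.awsdns-45.com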

    Read the article

  • alias gcc='gcc -fpermissive' or modifying ./configure script

    - by robo
    I am compiling quite a big project from source. The compilation always ends with:

        error: invalid conversion from ‘const char*’ to ‘char*’ [-fpermissive]

    I already compiled this project a year ago, so I know a solution. Actually I found several:

    1. Adding a typecast to the appropriate line of C++ code (this led to an endless number of changes in each file, so I found the next solution).
    2. Modifying the makefile to compile that file with the -fpermissive option (I had to modify a lot of lines in each makefile, so I found an even better solution).
    3. "g++" or "gcc" was stored in a variable, so I added -fpermissive to these variables. This is the best solution I have: it is sufficient to add the option to each makefile once.

    Unfortunately this software has a large number of subdirectories, so I would need to modify more than 100 makefiles; it took me a whole day a year ago. Is there a faster way to do this? What about this:

        alias gcc='gcc -fpermissive'

    I am not familiar with aliases, but it should be easy to try. Is the syntax correct? And is this one correct: alias g++='g++ -fpermissive'? Do I need to export the alias somehow? Will the make program respect the alias? Should I maybe change the ./configure script? Or configure.in? Or some other file?
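
    A note on the alias idea, hedged since build systems vary: make runs its recipe lines in a non-interactive shell, which does not expand aliases, so an exported alias will generally be ignored during the build. Assuming the project is autoconf-based and its makefiles invoke the compiler through the usual CC/CXX variables (as yours appear to), the flag can be injected once from the top instead:

        # bake the flag in when configure generates all 100+ makefiles
        # (with very old configure scripts, set it as an environment
        # variable instead:  CXXFLAGS="-fpermissive" ./configure)
        ./configure CXXFLAGS="-fpermissive"

        # or override the compiler variable for a single build, no edits at all
        make CXX="g++ -fpermissive"

    Since the diagnostic quoted above is a C++ error, CXXFLAGS/CXX are the relevant knobs; add CFLAGS/CC as well only if C sources need the flag too.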

    Read the article

  • Where should CentOS users get /usr/share/virtio-win/drivers for virt-v2v?

    - by Philip Durbin
    I need to migrate a number of virtual machines from VMware ESX to CentOS 6 KVM hypervisors. Ultimately, I wrote an RPM spec file that solved my problem at https://github.com/fasrc/virtio-win/blob/master/virtio-win.spec but I'm not sure if there's another RPM in base CentOS or EPEL (something standard) I should be using instead. Originally, I was getting this "No root device found in this operating system image" error when attempting to migrate a Windows 2008 VM:

        [root@kvm01b ~]# virt-v2v -ic 'esx://my-vmware-hypervisor.example.com/' \
          -os transferimages --network default my-vm
        virt-v2v: No root device found in this operating system image.

    I solved this with a simple yum install libguestfs-winsupport, since the docs say: "If you attempt to convert a virtual machine using NTFS without the libguestfs-winsupport package installed, the conversion will fail." Next I got an error about missing drivers for Windows 2008:

        [root@kvm01b ~]# virt-v2v -ic 'esx://my-vmware-hypervisor.example.com/' \
          -os transferimages --network default my-vm
        my-vm_my-vm: 100% [====================================]D
        virt-v2v: Installation failed because the following files referenced in the configuration file are required, but missing: /usr/share/virtio-win/drivers/amd64/Win2008

    I resolved this by grabbing an iso from Fedora at http://alt.fedoraproject.org/pub/alt/virtio-win/latest/ as recommended by http://www.linux-kvm.org/page/WindowsGuestDrivers/Download_Drivers and building an RPM from it with this spec file: https://github.com/fasrc/virtio-win/blob/master/virtio-win.spec Now virt-v2v exits without error:

        [root@kvm01b ~]# virt-v2v -ic 'esx://my-vmware-hypervisor.example.com/' \
          -os transferimages --network default my-vm
        my-vm_my-vm: 100% [====================================]D
        virt-v2v: my-vm configured with virtio drivers.
        [root@kvm01b ~]#

    Now, my question is: rather than the virtio-win RPM from the spec file I wrote, is there some other, more standard RPM in base CentOS or EPEL that will resolve the error above? Here's a bit more detail about my setup:

        [root@kvm01b ~]# cat /etc/redhat-release
        CentOS release 6.2 (Final)
        [root@kvm01b ~]# rpm -q virt-v2v
        virt-v2v-0.8.3-5.el6.x86_64

    See also Bug 605334 – VirtIO driver for windows does not show specific OS: Windows 7, Windows 2003

    Read the article

  • iconv supports too few encodings

    - by schemacs
    iconv -l outputs too few encodings on CentOS 6.5:

        $ iconv -l
        10646-1:1993, 10646-1:1993/UCS4, ANSI_X3.4-1968, ANSI_X3.4-1986, ANSI_X3.4, ASCII, CP367, CSASCII, CSUCS4, IBM367, ISO-10646, ISO-10646/UCS2, ISO-10646/UCS4, ISO-10646/UTF-8, ISO-10646/UTF8, ISO-IR-6, ISO-IR-193, ISO646-US, ISO_646.IRV:1991, OSF00010020, OSF00010100, OSF00010101, OSF00010102, OSF00010104, OSF00010105, OSF00010106, OSF05010001, UCS-2, UCS-2BE, UCS-2LE, UCS-4, UCS-4BE, UCS-4LE, UCS2, UCS4, UNICODEBIG, UNICODELITTLE, US-ASCII, US, UTF-8, UTF8, WCHAR_T

    But on my Ubuntu machine the list is much longer. Here is the difference in behaviour:

    CentOS 6.5:

        $ php -a
        php > echo iconv('utf8', 'gbk', 'abc');
        PHP Notice: iconv(): Wrong charset, conversion from `utf8' to `gbk' is not allowed in php shell code on line 1
        php > quit
        $ php -i|grep iconv
        iconv
        iconv support => enabled
        iconv implementation => glibc
        iconv library version => 2.12
        iconv.input_encoding => ISO-8859-1 => ISO-8859-1
        iconv.internal_encoding => ISO-8859-1 => ISO-8859-1
        iconv.output_encoding => ISO-8859-1 => ISO-8859-1

    Ubuntu 14.04:

        $ php -a
        Interactive mode enabled
        php > echo iconv('utf8', 'gbk', "abc\n");
        abc
        php > quit
        $ php -i|grep iconv
        iconv
        iconv support => enabled
        iconv implementation => glibc
        iconv library version => 2.19
        iconv.input_encoding => ISO-8859-1 => ISO-8859-1
        iconv.internal_encoding => ISO-8859-1 => ISO-8859-1
        iconv.output_encoding => ISO-8859-1 => ISO-8859-1

    But I don't want to recompile glibc (that would be a huge amount of work); any idea how to add new encoding support?
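
    A hedged guess at the cause, untested on this exact system: glibc's iconv discovers encodings from the gconv shared modules under /usr/lib64/gconv, which belong to the glibc-common package, and stripped-down CentOS images sometimes ship without most of them. If that directory is nearly empty, reinstalling the package should restore the full encoding list without rebuilding glibc:

        # count the installed gconv conversion modules
        ls /usr/lib64/gconv | wc -l

        # if only a handful are present, pull them back in
        yum reinstall -y glibc-common

        # iconv should now list several hundred encodings
        iconv -l | wc -l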

    Read the article

  • Working with barcode fonts in Word

    - by Bob Rivers
    I need to create labels in Microsoft Word 2010 with numbers encoded as barcodes. The barcode format (EAN, Code 39, UPC, etc.) does not matter. I have downloaded a barcode conversion font that I found at this site. When I type the number that I want and then format it with my new font, it produces a barcode. I then print it on an OKI laser printer (1200 dpi). The result looks fine, at least to the eye. But when I try to scan it, nothing happens. I tried both a barcode scanner and a data collector, but neither of them reads the barcode. My barcode scanner is working fine, because I can read commercial barcodes printed on products. Does anybody have any advice? How do I do this kind of thing? I want to do it in Word because I will generate the labels using Mail Merge, so external programs aren't an option for me.
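
    One likely culprit, offered as a guess rather than a confirmed diagnosis: if the downloaded font is a Code 39 font, that symbology requires start/stop characters, which most Code 39 fonts map to the asterisk, so the data must be typed wrapped in asterisks before the font is applied:

        *123456*

    Without the framing characters the printed bars look plausible but no reader will decode them. It is also worth trying a larger point size, since very small laser-printed bars can fall below the minimum bar width a scanner can resolve.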

    Read the article

  • iOrgSoft Video Converter for Mac

    - by terryhao
    IOrgSoft Video Converter for Mac (http://www.iorgsoft.com/Video-Converter-for-Mac/) is an excellent video converting and editing software for Macintosh users. A built-in powerful video player and trimming, splitter/joiner/merger tools give you everything you need to manage your videos on Mac. This Mac converter supports many video formats like AVI, MP4, WMV, MPEG-1,2, YouTube (FLV), Limewire, RealPlayer (RM, RMVB), QuickTime (MOV), MKV, MOD, TOD, ASF, 3GP, 3G2, AVCHD/M2TS/MTS/TS/TRP/TS, MXF, etc. Video Converter for Mac features a very clean user interface which makes this task a breeze. You can trim/clip any segments and optionally merge/join and sort them to create your personal movie, crop the frame size to remove any unwanted area in the frame just like a pair of smart scissors, and set the output video parameters such as video resolution, video frame rate, audio codec, video codec and video quality. Converted videos can be imported into iMovie/iTunes/FCE/FCP/QuickTime Pro or played on iPad, iPod touch, iPod classic, iPod nano, iPhone, iPhone 3GS, Apple TV, PSP, PS3, Creative Zen, iRiver PMP, Archos, mobile phones and other MP4/MP3 players. Video Converter for Mac makes video conversion easy. Free download now and have a try for yourself! See also: Video Editor for Mac (http://www.iorgsoft.com/Video-Editor-for-Mac/), Mod Converter (http://www.iorgsoft.com/Mod-Converter/), Mod Converter for Mac (http://www.iorgsoft.com/Mod-Converter-for-Mac/)

    Read the article

  • Physical-to-virtual for personal use

    - by Journeyman Geek
    One of my current projects involves reducing the number of old computers lying around. So far, the office systems have been downsized from 5 to 2. One system which had no data on it was dumped entirely, while three other identical boxes were consolidated into one (I swap hard drives as needed and they just work(tm)). I want to make further cuts. The physical systems in question all run Windows XP SP3 and were old Pentium 4s with 256 MB of RAM. Currently, I have VirtualBox running on another system, and my intention is to migrate all services to that. We can assume that disk space and RAM on the host are not an issue. I'd rather not go with a client-server physical-to-virtual (P2V) method like the one VirtualBox uses now. I have a few specific concerns as well - specifically, whether I'd need to reactivate or sysprep for dissimilar hardware first. An offline tool for imaging (Windows or Linux) would not be an issue, since the drives could be mounted on another system I have anyway, or I could use a LiveCD. What would be my options for P2V conversion, and what would I need to keep in mind when doing this?
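
    A minimal sketch of the offline route, assuming the old disk is imaged from a LiveCD and that device names and paths are placeholders; VBoxManage's convertfromraw subcommand turns a raw image into a VirtualBox disk:

        # from a LiveCD on the old machine: image the whole disk to a network share
        dd if=/dev/sda of=/mnt/share/oldbox.img bs=1M

        # on the VirtualBox host: wrap the raw image in a VDI container
        VBoxManage convertfromraw /mnt/share/oldbox.img oldbox.vdi --format VDI

    On the sysprep question: since the guest wakes up on very different virtual hardware, running sysprep first (or at least being prepared for an XP reactivation prompt) is the usual precaution, and a plain image moved to dissimilar hardware can also blue-screen on the storage driver, in which case a repair install is the common fix.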

    Read the article

  • Database types for customer analytics

    - by Drewdavid
    I am exploring a paid solution to start providing better embedded, dashboard-style analytics information to our website customers/account holders, but would like to also offer an in-house development option to our team. The more equipped I am with specifics (such as the subject of this question), the better the adoption rate from the team (or so I have found), regardless of the path we choose. Would anyone care to summarize a couple of options for a fast and scalable database type through which we would provide the following:

    • Daily pageviews of a user's account pages (users have between 1 and 1000 pages)
    • Some calculated/compounded metrics (such as conversion rate, i.e. the ratio of views of a certain page type to contact-form thank-you pages)
    • Room to grow beyond our current 1,500 members; for the question's sake, assume 50 concurrently logged-in users

    I ask because our developer has balked at providing this level of "over time" granularity (i.e. daily) due to the amount of space it would take up in a MySQL database. To avoid a downvote, I have asked specifically for more than one option, realizing that different people will have different solutions. I will amend my question if so guided by answering parties. Thank you for sharing your valued answers :)

    Read the article

  • Interrupted system call during "hg convert"

    - by Aaron Digulla
    When I run "hg convert" to convert a Subversion repository to Mercurial, I get this error: fetching revision log for "/trunk" from 1538 to 0 run hg sink post-conversion action Traceback (most recent call last): File "/usr/lib/pymodules/python2.6/mercurial/dispatch.py", line 46, in _runcatch return _dispatch(ui, args) File "/usr/lib/pymodules/python2.6/mercurial/dispatch.py", line 454, in _dispatch return runcommand(lui, repo, cmd, fullargs, ui, options, d) File "/usr/lib/pymodules/python2.6/mercurial/dispatch.py", line 324, in runcommand ret = _runcommand(ui, options, cmd, d) File "/usr/lib/pymodules/python2.6/mercurial/dispatch.py", line 505, in _runcommand return checkargs() File "/usr/lib/pymodules/python2.6/mercurial/dispatch.py", line 459, in checkargs return cmdfunc() File "/usr/lib/pymodules/python2.6/mercurial/dispatch.py", line 453, in <lambda> d = lambda: util.checksignature(func)(ui, *args, **cmdoptions) File "/usr/lib/pymodules/python2.6/mercurial/util.py", line 386, in check return func(*args, **kwargs) File "/usr/lib/pymodules/python2.6/hgext/convert/__init__.py", line 229, in convert return convcmd.convert(ui, src, dest, revmapfile, **opts) File "/usr/lib/pymodules/python2.6/hgext/convert/convcmd.py", line 398, in convert c.convert(sortmode) File "/usr/lib/pymodules/python2.6/hgext/convert/convcmd.py", line 312, in convert parents = self.walktree(heads) File "/usr/lib/pymodules/python2.6/hgext/convert/convcmd.py", line 109, in walktree commit = self.cachecommit(n) File "/usr/lib/pymodules/python2.6/hgext/convert/convcmd.py", line 267, in cachecommit commit = self.source.getcommit(rev) File "/usr/lib/pymodules/python2.6/hgext/convert/subversion.py", line 433, in getcommit self._fetch_revisions(revnum, stop) File "/usr/lib/pymodules/python2.6/hgext/convert/subversion.py", line 814, in _fetch_revisions for entry in stream: File "/usr/lib/pymodules/python2.6/hgext/convert/subversion.py", line 122, in __iter__ entry = pickle.load(self._stdout) IOError: [Errno 4] Interrupted system call abort: Interrupted system call Apparently, it is possible to restart a read on EINTR but how would I do that with pickle.load()? Also I wonder where that signal comes from? I suspect it's SIGCHILD but shouldn't popen() handle that?

    Read the article

  • changing filesystem format from xfs to ext4 without losing data

    - by A.Rashad
    I have a fresh Lucid Lynx (Ubuntu 10.04) installation on a laptop, where I defined the filesystems as:

    • mount point / on ext4 (46 GB)
    • mount point /home on jfs (63 GB)
    • swap of 3 GB

    I left the machine overnight to do some task, without AC power. The next morning I found it on standby, the task completed, but the filesystem unreachable: it gave me an I/O error. It seems there is a problem with jfs and standby. Anyway, to avoid any hassle, I want to move this mount point from jfs to ext4. Can I do this without losing data and without having to place the data in a temporary location until the transformation is done? Sorry to mention it, but I recall back in the Windows days we could change FAT16 to FAT32, or FAT32 to NTFS, without losing the data. I hope something similar is available on Linux.

    Update: The /home filesystem was xfs, not jfs, and there seems to be a bug with this filesystem for some reason; I had to re-install the OS twice until I ended up with ext4 for the entire /. As a conclusion, it seems that there is no way to make such a conversion.
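
    For the record, the update's conclusion matches my understanding: there is no supported in-place conversion between XFS/JFS and ext4, so the data has to be staged somewhere. A sketch of the usual copy-out/copy-back approach, assuming a spare disk is mounted at /mnt/spare and that /dev/sda3 stands in for the real /home partition:

        # stage /home elsewhere, preserving permissions, times, ACLs and xattrs
        rsync -aHAX /home/ /mnt/spare/home-backup/

        # reformat the old partition as ext4 and copy everything back
        umount /home
        mkfs.ext4 /dev/sda3
        mount /dev/sda3 /home
        rsync -aHAX /mnt/spare/home-backup/ /home/

        # finally, change the /home entry in /etc/fstab from jfs/xfs to ext4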

    Read the article

  • How to copy mailboxes from Exchange 2003 to Exchange 2007 across forests?

    - by Tor Ivar Larsen
    Hi. We're going through a quite difficult conversion from an old ASP solution to an entirely new one. This includes moving mailboxes from Exchange 2003 to Exchange 2007. We want to do this without deleting the old mailboxes on the Exchange 2003 server, to have a fallback in case something goes wrong. I have investigated the Move-Mailbox cmdlet in the Exchange 2007 shell, and it seems to fit our needs quite well. The only problem is that we want to keep the old mailboxes. This could easily be accomplished with -SourceMailboxCleanupOptions, but we can't use that together with the -AllowMerge switch. The reason we need -AllowMerge is that all the user accounts with connected mailboxes have already been created on Exchange 2007 (some automatic user-creation tool; no real relevance to the case in question). The twist is that the Exchange servers are in two different forests: Windows 2003 SP1 on DC1 and Windows 2003 SP2 on DC2 in forest 1; Windows 2003 R2 SP2 on DC1 in forest 2. Can we use Move-Mailbox safely for this purpose? And if yes, how?
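
    For what it's worth, a hedged sketch of the cross-forest Move-Mailbox syntax from the Exchange 2007 shell - server names, database path and credentials are placeholders, and whether -AllowMerge can be combined with the cleanup behaviour you want is exactly the open question, so treat this as a starting point to test against a lab mailbox, not a recipe:

        $src = Get-Credential   # admin in the Exchange 2003 forest
        $tgt = Get-Credential   # admin in the Exchange 2007 forest

        Move-Mailbox -Identity "jdoe" `
            -TargetDatabase "MBXSRV\First Storage Group\Mailbox Database" `
            -SourceForestGlobalCatalog dc1.oldforest.local `
            -SourceForestCredential $src `
            -TargetForestCredential $tgt `
            -AllowMerge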

    Read the article

  • How do I format a text file for IIS Mailroot Pickup so that it sends an e-mail with attachments?

    - by Ben McCormack
    How do I need to format a text file so that it can be read by an SMTP service to send an e-mail that has an attachment? We have a server where we are using IIS 6 SMTP to send mail from a Pickup folder. The goal is to drop a properly formatted text file into Mailroot\Pickup, and the file is then automatically processed and sent to the correct SMTP recipient. For simple files, this works correctly. Here's an example of a simple file that works (domain names changed):

        From:[email protected]
        To:[email protected]
        Subject:Hello World!

        Test Body Of The E-mail

    When I drop a text file containing the above contents into the Mailroot\Pickup folder, it sends correctly. However, I haven't been able to figure out how to get an attachment to work. I found some material that explained how to encode an SMTP attachment, and another tool for simple base64 encoding conversion. Using the information on those pages, I came up with the following text:

        From:[email protected]
        To:[email protected]
        Subject:Hello World!
        MIME-Version: 1.0
        Content-Type: text/plain; boundary="Attached"
        Content-Disposition: inline;

        --Attached
        Content-Transfer-Encoding: base64
        Content-Type: text/plain; name="attachment.txt"
        Content-Disposition: attachment; filenamename="attachment.txt"

        VGhpcyBpcyBhIHRlc3Qgb2Ygc29tZXRoaW5nIHRvIGVuY29kZS4NCk5ldyBsaW5lDQpOZXcgTGlu
        ZQ0KIkhlbGxvdyIgISEhDQo9PT09ICcgZnNkZnNkZiAxMjM1NDU2MzQzNA==

        --Attached--

    However, when I place the above text in a file and drop it into Mailroot\Pickup, it doesn't send an attachment correctly. Instead, an e-mail shows up with everything from the MIME-Version line down - boundary markers, part headers and base64 text - as plain text in the body. I can't figure out what I need to do to format the text file so that the SMTP service correctly sends attachments.
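
    A hedged diagnosis, based on the MIME structure rather than anything IIS-specific: a message with attachments needs a multipart/mixed top-level Content-Type (text/plain ignores the boundary parameter, which is why everything lands in the body), a blank line between every header block and what follows, and each part introduced by the boundary line; note also the doubled "filenamename" in the part header above. A sketch of a corrected file, untested against IIS 6 pickup but standard MIME:

        From:[email protected]
        To:[email protected]
        Subject:Hello World!
        MIME-Version: 1.0
        Content-Type: multipart/mixed; boundary="Attached"

        --Attached
        Content-Type: text/plain

        Body text of the e-mail goes here.

        --Attached
        Content-Type: text/plain; name="attachment.txt"
        Content-Transfer-Encoding: base64
        Content-Disposition: attachment; filename="attachment.txt"

        VGhpcyBpcyBhIHRlc3Qgb2Ygc29tZXRoaW5nIHRvIGVuY29kZS4NCk5ldyBsaW5lDQpOZXcgTGlu
        ZQ0KIkhlbGxvdyIgISEhDQo9PT09ICcgZnNkZnNkZiAxMjM1NDU2MzQzNA==

        --Attached--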

    Read the article

  • FFmpeg overlay two videos, one input with transparency

    - by Gian B.
    I am trying to create a karaoke video from a CD+G file (converted to AVI using FFmpeg) and add a video as a background for the lyrics. Here's a screenshot of the output from the CD+G conversion; for simplicity, let's call this lyrics.avi: http://imgur.com/wUwHUhV Now I have this video.mp4 file that I'd like to put behind lyrics.avi. Here's a sample of what I'm trying to achieve: http://imgur.com/8GuWXtQ I'm sure most of you are familiar with karaoke. I haven't used FFmpeg much, and I'm not really sure whether what I want is possible with it. Is it possible to overlay two videos and add transparency to one of them - in this case, for the colour black? And how can I set the offset time of lyrics.avi? Here's the command that I've tried so far:

        ffmpeg -i video.mp4 -i lyrics.avi -filter_complex \
          "nullsrc=size=854x480 [base]; \
           [0:v] setpts=PTS-STARTPTS, scale=854x480 [upperleft]; \
           [1:v] setpts=PTS-STARTPTS, scale=854x200 [bottomleft]; \
           [base][upperleft] overlay=shortest=1 [tmp1]; \
           [tmp1][bottomleft] overlay=shortest=1:y=280" \
          -c:v libx264 -y karaoke.mp4
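
    On the transparency half of the question: newer FFmpeg builds include a colorkey filter that knocks a chosen colour out of one input before overlaying, and -itsoffset delays an input to set when the lyrics start. A hedged sketch, assuming a build with colorkey available; the similarity/blend values 0.1/0.2 are starting points to tune against CD+G's flat black, not known-good numbers:

        ffmpeg -i video.mp4 -itsoffset 5 -i lyrics.avi -filter_complex \
          "[1:v]colorkey=0x000000:0.1:0.2[lyr]; \
           [0:v][lyr]overlay=shortest=1" \
          -c:v libx264 -y karaoke.mp4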

    Read the article

  • Tools to manage sql 2008 database mirroring?

    - by lemkepf
    We are going to be moving about 20 databases that live on a single instance of SQL 2000 to a SQL 2008 R2 environment with database mirroring. What I'm looking for is a tool or scripts that will help me manage the conversion and ongoing management of those 20 DBs in the new mirrored environment. There are many steps in setting each DB up, and I want to automate as much as possible. Edit: here are the steps I've been doing manually:

    1. Create the same usernames/passwords from the old SQL 2000 server on the new SQL 2008 server, then sync those users/passwords onto the other SQL 2008 server with the same SIDs, so that the logins match up after the backup and restore.
    2. Take a backup of each SQL 2000 DB and copy it to server A.
    3. Restore the backup on server A.
    4. Back up from server A, copy to server B, and restore there.
    5. Run the mirroring "Configure Security" wizard.
    6. Start mirroring.

    I'd love to be able to script this out or have a tool that does it for me. Thanks! Paul
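
    Most of those steps script cleanly with sqlcmd and T-SQL, which makes looping over the 20 databases straightforward; a skeletal sketch for one database, with paths, server and database names as placeholders (file copies between servers are omitted, 5022 is just the conventional mirroring endpoint port, the endpoints themselves must already exist, and logins still need syncing separately, e.g. with Microsoft's sp_help_revlogin script):

        :: principal (server A): restore the SQL 2000 backup, then take fresh full + log backups
        sqlcmd -S serverA -Q "RESTORE DATABASE [MyDb] FROM DISK='C:\bak\MyDb.bak' WITH RECOVERY"
        sqlcmd -S serverA -Q "ALTER DATABASE [MyDb] SET RECOVERY FULL"
        sqlcmd -S serverA -Q "BACKUP DATABASE [MyDb] TO DISK='C:\bak\MyDb_full.bak'"
        sqlcmd -S serverA -Q "BACKUP LOG [MyDb] TO DISK='C:\bak\MyDb_log.trn'"

        :: mirror (server B): restore both WITH NORECOVERY so it can receive log records
        sqlcmd -S serverB -Q "RESTORE DATABASE [MyDb] FROM DISK='C:\bak\MyDb_full.bak' WITH NORECOVERY"
        sqlcmd -S serverB -Q "RESTORE LOG [MyDb] FROM DISK='C:\bak\MyDb_log.trn' WITH NORECOVERY"

        :: partner up: mirror side first, then principal
        sqlcmd -S serverB -Q "ALTER DATABASE [MyDb] SET PARTNER = 'TCP://serverA.domain.local:5022'"
        sqlcmd -S serverA -Q "ALTER DATABASE [MyDb] SET PARTNER = 'TCP://serverB.domain.local:5022'"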

    Read the article

  • How do I log back into a Windows Server 2003 guest OS after Hyper-V integration services installs and breaks my domain logins?

    - by Warren P
    After installing Hyper-V integration services, I appear to have a problem logging in to my Windows Server 2003 virtual machine. Incorrect passwords and logins give the usual error message, but a correct login/password gives me this:

        Windows cannot connect to the domain, either because the domain controller is down or otherwise unavailable, or because your computer account was not found. Please try again later. If this message continues to appear, contact your system administrator for assistance.

    Nothing pleases me more than Microsoft telling me (the ersatz system administrator) to contact my system administrator for help, when I suspect that I'm hooped. The virtual machine has a valid network connection, and has decided to invalidate all my previous logins on this account, so I can't log in and fix anything remotely, and I can't connect to it from outside either. This appears to be a catch-22. Unfortunately I don't know any non-domain local logins for this virtual machine, so I suspect I am basically hooped, or that I need ophcrack. Is there any alternative to ophcrack? A second, related question: I used Disk2VHD to do the conversion, and I could log in fine several times - until the Hyper-V integration services were installed; suddenly this happened and I can't log in now. Was there something I did wrong? I can't get networking working inside the VM before I install integration services, and at the very moment integration services is being installed, I'm getting locked out like this. I probably should always know the local login of any machine I'm upgrading so I don't get stuck like this in the future... great. Now I am reminded again of this.

    Read the article

  • Windows-to-linux: Putty with SSH and private/public key pair

    - by Johnny Kauffman
    I spent about 3 hours trying to figure out how to connect to a Linux box from my Windows machine using PuTTY without having to send the password. This is connecting to an Ubuntu server that is running OpenSSH. The private key is SSH-2 RSA, 1024 bits, and I am connecting using SSH-2. I have run into the more common problems already:

    • PuTTY generated the public key in the "wrong format". I have corrected this (as seen on this blog post). However, since I am not yet connected, I cannot absolutely confirm that the file is in the correct format. The key is all on a single line now, and I have tried adding/removing line breaks at the end of the file. I've also tried the public-file doctoring process a few times to ensure that I haven't flubbed the manual conversion. Even so, I have no way to verify accuracy here.
    • The permissions were at one point wrong as well - specifically, the file had too many permissions. I had to solve this too, and I know it got past this because I no longer see a related error in /var/log/auth.log.
    • I've tried both authorized_keys and authorized_keys2 in case the server has an old version of OpenSSH, but this changed nothing.

    I do have access as a user: after the keyfile stuff fails, I can enter my password instead. The only remaining nibble of information I have is that it claims I have the alleged password wrong:

        sshd[22288]: Failed password for zzzzzzz from zz.zz.zz.zz port 53620 ssh2

    Even so, as far as I can tell this is just a lazy try/catch somewhere, since I don't think there's a password involved at all. I see nothing else of use in any of the /var/log files. What else could be wrong?
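
    Two server-side checks worth a minute, stated as general OpenSSH facts rather than a diagnosis of this particular box: each authorized_keys entry must be a single line of the form type, base64 blob, optional comment, and sshd silently ignores the file if ~/.ssh or the file itself is group- or world-writable. There is also a converter that removes the hand-editing step entirely (this assumes the PuTTYgen-saved public key has been copied to the server):

        # expected shape: one key per line - <type> <blob> [comment]
        # e.g.  ssh-rsa AAAAB3NzaC1yc2EAAA...  user@laptop
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys

        # convert PuTTYgen's RFC 4716 ("SSH2") public key format into an
        # OpenSSH authorized_keys line instead of reformatting it by hand
        ssh-keygen -i -f putty_key.pub >> ~/.ssh/authorized_keys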

    Read the article

  • Spreadsheet application that can handle big data OS X

    - by Peter
    I've been working with Excel for quite a while for some statistical analysis that I do regularly. The size of the data that I'm working with has gotten much larger as of late, however. The layout of the spreadsheets in question is quite simple: usually just a few columns, comprising a UNIX timestamp, an EST value, a proprietary numeric value, and finally an average over the rows whose timestamp is within ±1000 of that row's timestamp (a little AVERAGEIFS() formula). That formula and the EST conversion are the only formulas in the sheet. I'm beginning to work with files of 500,000+ rows, and running the average formula down the entire column takes forever. The end result is the production of print-worthy graphs. I'm looking for either a UNIX command-line utility or a separate spreadsheet/database application that can handle this amount of data without melting my CPU or making me wait an hour. Is there anything out there? TL;DR: a simple Excel sheet with over half a million rows is getting too slow to work with. OS X alternatives?
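
    For the UNIX command-line route: because the rows are already sorted by timestamp, the ±1000 average is a classic two-pointer sliding window and needs only one pass. A small sketch in Python, assuming a headerless CSV with the timestamp in column 1 and the value to be averaged in column 3 (the column positions are guesses; adjust to the real layout):

        import csv
        import sys

        # rows: (timestamp, est, value), assumed sorted by timestamp
        rows = [(int(r[0]), r[1], float(r[2]))
                for r in csv.reader(open(sys.argv[1]))]

        out = csv.writer(sys.stdout)
        lo = hi = 0
        window_sum = 0.0
        for ts, est, val in rows:
            # grow the window to take in timestamps <= ts + 1000
            while hi < len(rows) and rows[hi][0] <= ts + 1000:
                window_sum += rows[hi][2]
                hi += 1
            # shrink it to drop timestamps < ts - 1000
            while rows[lo][0] < ts - 1000:
                window_sum -= rows[lo][2]
                lo += 1
            out.writerow([ts, est, val, window_sum / (hi - lo)])

    Even on half a million rows this runs in seconds, which makes it easy to regenerate the average column before importing the file into whatever tool produces the final graphs.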

    Read the article

  • how to remove background layer of djvu file

    - by Jon
    Hello, I've downloaded some files from the Internet Archive. They come in different file formats, and most of the time I use PDF. However, sometimes the scans are saved in colour instead of b/w, which makes them difficult or impossible to read on a dedicated ebook reader. In those cases I downloaded the DjVu files, since on the PC you can select which layer (colour, b/w, foreground, background) you would like to see; selecting b/w gives excellent results. However, the ebook reader does not have this option. The question is: how can I remove/extract a layer from a DjVu file and save only that layer? So far I've tried the following two approaches:

    1. Select b/w in the DjVu viewer on the PC and print to a PostScript file, followed by a ps2pdf conversion. This works, but generates a fairly large PDF file. Sure, I can upload it to any2djvu again, but it just seems too much manual work for each file.
    2. I tried the shared annotation feature and set (mode bw). This works on the PC as desired, but is ignored on the ebook reader, as the other layers are still present.

    Any help or suggestions would be greatly appreciated.
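
    One avenue worth trying, hedged because I haven't run it against Internet Archive files: djvulibre's ddjvu renderer has a -mode switch (color, black, foreground, background) that rasterizes a single layer, so the b/w stencil can be exported directly, skipping the print-to-PostScript step:

        # render only the b/w (stencil) layer of every page to a multipage TIFF
        ddjvu -format=tiff -mode=black book.djvu book.tif

        # wrap the pages into a PDF for the ebook reader (libtiff's tiff2pdf)
        tiff2pdf -o book.pdf book.tif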

    Read the article

  • Imagemagick convert creates monochrome output only

    - by rumtscho
    I have a book scan as a PDF. When I open it with Adobe Reader, it looks like grayscale. When I open it with IrfanView, it also looks like grayscale, and the Information option tells me that the image is actually 24-bit (I don't know if this is the real bit depth of the image embedded in the PDF, or if IrfanView assigns the maximal depth when opening a PDF as an image). I want to OCR the scan with OmniPage SE. It doesn't read PDF, so I decided to use ImageMagick to convert the file to PNG first. But no matter what I try, the output is always monochrome and practically unreadable. I tried different conversion lines, with different depth, density and resize values, but it didn't help. The result shown was made with the options convert testfile.pdf -density 600x600 -depth 8 PNG:testfile.png. Any idea what causes the problem? Edit: to make it clear, the output looks like this for any value of -density, -depth and -resize I have tried. It also looks like that when I use no options at all, as in convert testfile.pdf PNG:testfile.png.
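
    One likely cause, offered as a strong suspicion rather than a certainty: ImageMagick applies settings only to files read after them on the command line, so in convert testfile.pdf -density 600x600 ... the PDF has already been rasterized (via Ghostscript, at the 72 dpi default) before -density is seen. Putting the setting first, and asking for an explicit colorspace, is the usual fix:

        convert -density 300 -colorspace Gray testfile.pdf -depth 8 PNG:testfile.png

    If the output is still bilevel after that, the scan may be stored as a 1-bit image inside the PDF; poppler's pdfimages can pull the embedded images out losslessly (pdfimages -j testfile.pdf page) and sidestep rasterization altogether.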

    Read the article

  • Google analytics ignoring "required step" in goals

    - by Matt Huggins
    I am A/B testing a landing page to see which converts better. The funnels are set up exactly the same in terms of the goal completion URL and funnel steps, with one exception: the first step (which is a required step) has a different URL to represent each of the two landing pages. Unfortunately, Google records a conversion for both of these goals regardless of which landing page a user reaches! It looks like the "required step" is broken - or perhaps it is a deeper issue that others haven't noticed, such as it failing only when the goal URL is the same between multiple goals. Here is an example of what I mean.

    Goal 1:
    Goal URL: /users/dashboard (head match)
    Funnel: 1. /homepages/index1 (required step) 2. /users/register 3. /users/edit

    Goal 2:
    Goal URL: /users/dashboard (head match)
    Funnel: 1. /homepages/index2 (required step) 2. /users/register 3. /users/edit

    As you can see, the only difference is step #1 of the funnel. Since I am A/B testing the landing page of the site, this should be the only difference. However, when I look at the goal page of Google Analytics, I see that the goal is being recorded for both, regardless of the landing page reached. The only tinkering I've done is wrapping each funnel step's URL in ^ and $ characters for an exact regular-expression match, but this didn't make a difference. Thoughts?

    Read the article

  • nginx codeigniter rewrite: Controller name conflicts with directory

    - by palerdot
    I'm trying out nginx and porting my existing Apache configuration to it. I have managed to re-route the CodeIgniter URLs successfully, but I'm having a problem with one particular controller whose name coincides with a directory in the site root: the URLs work as they did under Apache, except for one, say http://localhost/hello, which coincides with a hello directory in the site root. Apache had no problem with this, but nginx routes to the directory instead of the controller. My rewrite structure is as follows:

        http://host_name/incoming_url  =>  http://host_name/index.php/incoming_url

    All the CodeIgniter files are in the site root. My nginx configuration (relevant parts):

        location / {
            # First attempt to serve request as file, then
            # as directory, then fall back to index.html
            index index.php index.html index.htm;
            try_files $uri $uri/ /index.php/$request_uri;

            # apache rewrite rule conversion
            if (!-e $request_filename){
                rewrite ^(.*)/?$ /index.php?/$1 last;
            }

            # Uncomment to enable naxsi on this location
            # include /etc/nginx/naxsi.rules
        }

        location ~ \.php.*$ {
            fastcgi_split_path_info ^(.+\.php)(/.+)$;

            # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini

            # With php5-cgi alone:
            fastcgi_pass 127.0.0.1:9000;
            # With php5-fpm:
            #fastcgi_pass unix:/var/run/php5-fpm.sock;

            fastcgi_index index.php;
            include fastcgi_params;
        }

    I'm new to nginx and need help figuring out this directory conflict with the controller name. I put this configuration together from various sources on the web, and any better way of writing it is greatly appreciated.
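
    A hedged guess at the conflict, untested against this exact setup: the $uri/ term in try_files matches the existing hello/ directory before the /index.php fallback is ever consulted, which is why the directory wins over the controller. If directory auto-indexes aren't needed, dropping that term lets the controller take precedence:

        location / {
            index index.php index.html index.htm;
            # no "$uri/" term, so an existing directory no longer
            # shadows a CodeIgniter controller of the same name
            try_files $uri /index.php?/$request_uri;
        }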

    Read the article

  • Efficient mirroring of directories using hardlinks

    - by zoqaeski
    I'm backing up my music collection on to a number of NTFS-formatted external hard-drives; however, as I store my main collection in FLAC and have my library on my laptop as MP3s to save space, I want to be able to back up both sets, because mass conversion between formats is time-consuming. The "music" directory can contain any format; the "mp3s" directory contains only MP3s converted from files in the "music" directory. The music collection on the laptop contains only MP3s, but they come from both sources. When I backup my laptop's library to the "mp3s" directory, I want to only copy across MP3 files that don't exist in the "music" directory; those that do should be hard-linked to the "music" directory. All directories have an identical hierarchy, sorted by artist, album, date, discnumber if applicable, etc, and I use a tagging editor to ensure consistency across all these locations. I'm also using a Linux computer, but keeping the music collections on NTFS-formatted partitions so that they are readable by both Linux and Windows. At the moment, I use the following command to perform the backups, but this is time-consuming due to the expensive nature of finding hard links. rsync -avu --progress --relative --ignore-existing --link-dest=../music/ **/*.mp3 /media/ntfspocket/mp3s Is there a way to perform this backup more efficiently, taking advantage of the directory hierarchy?
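
    An alternative sketch that sidesteps rsync's expensive --link-dest scan entirely, possible here because the hierarchies are guaranteed identical: walk the laptop library once and decide per file, hard-linking when the track already exists in "music" and copying otherwise. It assumes both trees sit on the same NTFS volume mounted via ntfs-3g (hard links cannot cross filesystems) and that the paths below are placeholders:

        #!/bin/sh
        src=/media/laptop-library        # MP3s on the laptop (placeholder)
        music=/media/ntfspocket/music    # master collection
        mp3s=/media/ntfspocket/mp3s      # mirror being built

        cd "$src" || exit 1
        find . -name '*.mp3' | while IFS= read -r f; do
            mkdir -p "$mp3s/$(dirname "$f")"
            if [ -e "$music/$f" ]; then
                # track exists in the master tree: link it, no data copied
                ln -f "$music/$f" "$mp3s/$f"
            else
                # laptop-only MP3: copy it across, keeping timestamps
                cp -p "$f" "$mp3s/$f"
            fi
        done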

    Read the article

  • Need software to convert RAR to ZIP ("store" mode - no compression) * Extract->re-archive changes date/time attributes

    - by Larry78
    I have tried all of the following applications (the free ones from download.cnet.com), and none of them will convert a RAR archive to a ZIP without compressing the files ("store" mode); 7z output would be fine, too. The RAR is a "solid" archive with a password (which I know), files stored with no compression, created with WinRAR 3.5.1. The applications: PeaZip, 7-Zip, FilZip, TugZip, SimplyZip SE, QuickZip, WinShrink. A couple of the apps let you try, but then fail with an error like "unknown header # #" - indicating how bad the software is. None of these apps will do the conversion at all. IZArc 4.1 comes the closest: it will convert a RAR to a ZIP, but it compresses the ZIP. There is a general preference setting for "store", but it doesn't affect conversions. I don't want to extract the RAR files and re-archive them, because I need to preserve the modified/created file attributes. IZArc preserves them, but it compresses the files. WinRAR has the option to convert archives, but I get the error "skipping encrypted archive" when I try to convert it. It asks for the password first, and I know the password is right because it opens the archive and I can read/view all the files in it.

    Read the article

