Search Results

Search found 1596 results on 64 pages for 'tim fraud'.


  • Performance associated with storing millions of files on NTFS

    - by Tim Brigham
    Does anyone have a method/formula, etc. that I could use - hopefully based on both current and projected numbers of files - to project the 'right' length of the split and the number of nested folders? Please note that although similar, it isn't quite the same as Storing a million images in the filesystem. I'm looking for a way to help make the theories outlined there more generic.

    Assumptions: I have 'some' initial number of files. This number would be arbitrary but large - say 500k to 10m+. I have considered the underlying physical hardware disk IO requirements that would be necessary to support such an endeavor.

    Put another way: as time progresses this store will grow. I want the best balance between current performance and performance as my needs increase - say I double or triple my storage. I need to be able to address both current needs and projected future growth, planning ahead without sacrificing too much current performance.

    What I've come up with: I'm already thinking about using a hash split every so many characters to spread things out across multiple directories while keeping the trees even, very similar to what is outlined in the comments on the question above. It also avoids duplicate files, which would be critical over time. I'm sure the initial folder structure would differ based on what I've outlined and on the initial scale. As far as I can figure there isn't a one-size-fits-all solution here, and it would be horrendously time-intensive to work something out experimentally.
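
    A minimal shell sketch of the hash-split idea described above (the store path, the two-character split depth and the choice of SHA-1 are illustrative assumptions, not a recommendation for this particular workload):

        #!/bin/bash
        # Place a file under a directory tree derived from a hash of its contents.
        # Hashing the contents also gives free de-duplication: identical files land
        # on the same path.
        store=/data/store                         # hypothetical root of the file store
        file="$1"
        hash=$(sha1sum "$file" | cut -c1-40)
        dir="$store/${hash:0:2}/${hash:2:2}"      # e.g. /data/store/ab/cd
        mkdir -p "$dir"
        [ -e "$dir/$hash" ] || cp "$file" "$dir/$hash"

    With two hex characters per level each directory holds at most 256 subdirectories, so two levels give 65,536 leaf directories - roughly 150 files per leaf at the 10m mark; deeper splits can be added as the projected count grows.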

    Read the article

  • How can I disable Control-W in Windows XP?

    - by Tim Visher
    I'm an Emacs/Mac user trapped on Windows at work, and I often type Control-W from muscle memory to delete a word, thereby killing the entire window by mistake - including everything I was working on. This is a particularly egregious problem for Console2, as I run GNU Screen and am often doing many things at once. Is there any way to completely disable Control-W or remap it to something that is far harder to type? Thanks!

    Read the article

  • Why is my filesystem being mounted read-only in linux?

    - by Tim
    I am trying to set up a small Linux system based on Gentoo on a VirtualBox machine, as a step towards deploying the same system onto a low-spec single-board computer. For some reason, my filesystem is being mounted read-only. In my /etc/fstab, I have:

        /dev/sda1   /          ext3     defaults   0 0
        none        /proc      proc     defaults   0 0
        none        /sys       sysfs    defaults   0 0
        none        /dev/shm   tmpfs    defaults   0 0

    However, once booted, /proc/mounts shows:

        rootfs / rootfs rw 0 0
        /dev/root / ext3 ro,relatime,errors=continue,barrier=0,data=writeback 0 0
        proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
        sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
        udev /dev tmpfs rw,nosuid,relatime,size=10240k,mode=755 0 0
        devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620 0 0
        none /dev/shm tmpfs rw,relatime 0 0
        usbfs /proc/bus/usb usbfs rw,nosuid,noexec,relatime,devgid=85,devmode=664 0 0
        binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,nosuid,nodev,noexec,relatime 0 0

    (The above may contain errors: there's no practical way to copy and paste.) The partition at /dev/sda1 is clearly being mounted OK, since I can read all the data, but it's not being mounted as described in fstab. How might I go about diagnosing / resolving this?

    Edit: I can remount with mount -o remount,rw / and it works as expected, except that /proc/mounts reports /dev/root mounted at / rather than /dev/sda1 as I'd expect. If I try to remount with mount -a I get:

        mount: none already mounted or /sys busy
        mount: according to mtab, sysfs is already mounted on /sys

    Edit 2: I resolved the problem with mount -a (the same error was occurring during startup, it turned out) by changing the sysfs and proc lines to:

        proc /proc proc [...]
        sysfs /sys sysfs [...]

    Now mount -a doesn't complain, but it doesn't result in a read-write root partition. mount -o remount / does cause the root partition to be remounted, however.
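
    One hedged line of diagnosis: kernels normally mount the root filesystem read-only (the "ro" on the kernel command line) and leave it to an early boot service to remount it read-write after fsck, so a root that stays "ro" usually means that service never ran. The exact Gentoo service names vary by baselayout/OpenRC version, so treat them as assumptions:

        cat /proc/cmdline                    # is the kernel passed "ro" (normal) or "rw"?
        dmesg | grep -i remount              # did anything attempt to remount / read-write?
        rc-update show | grep -E 'root|localmount|checkroot'   # is the remount service in the boot runlevel?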

    Read the article

  • How to move my data from my old MacBook Pro to my new one?

    - by Tim Büthe
    I just purchased a new MacBook Pro and already have a 2008 model. I wonder how I should move all my data over to the new one. My first idea was to use my Time Machine backup and restore from it, which seems to be a good idea and should work just fine according to this link: http://blog.duncandavidson.com/2008/01/restoring-from-time-machine.html. But since my current MacBook has older software on it, like iLife '08 instead of iLife '09, I would have to upgrade this afterwards. Is this correct, or does Time Machine do some magic to exclude well-known software? And is it possible to reinstall or upgrade iLife with the included installation DVDs? My second idea is to just swap the hard drives instead of using the Time Machine backup. If it is not too complicated to remove the hard drive, this should be the fastest way. It also has the benefit that the 2008 MacBook would then contain a brand-new installation, and I wouldn't have to remove all my stuff or reinstall Mac OS before I give it away. My question on that second idea: does Snow Leopard handle this correctly - I reboot with the new hardware and it all just works fine? So in a nutshell: what would you do, restore from backup or swap drives? And what about the new software?

    Read the article

  • Windows 8 with LiveID login authenticates as Guest to remote SQL Server

    - by Tim Long
    I have a network where several users are using Office Accounting 2009 in multi-user client/server mode. OA is built on SQL Server. One PC acts as the 'server' and has the SQL Server instance; the others have only the application installed and no SQL instance, and all of the apps connect remotely to the SQL instance on the 'server'. I'm using the term 'server' loosely here - it is just a normal workstation that happens to be designated as the server and runs the SQL instance. There is no NT domain; all user accounts are local accounts. The way that OA works in multi-user mode is that each user is required to have a local account with the same username and password on both the client and 'server' PCs. This has been working well - now along comes Windows 8. I use my 'Microsoft Account' aka LiveID to log into Windows 8. Office Accounting runs fine and attempts to connect to the database, but fails: 'you do not have permission to perform this operation'. In the SQL logs, I get this error:

        2012-10-28 17:54:01.32 Logon  Error: 18456, Severity: 14, State: 11.
        2012-10-28 17:54:01.32 Logon  Login failed for user 'SERVER\Guest'. Reason: Token-based server access validation failed with an infrastructure

    SERVER is the hostname of the server. So it seems to be authenticating as 'Guest'?? To verify this, I enabled the Guest account on the 'server' PC and then added Guest as an allowed user within Office Accounting (this simply creates the user in SQL and gives it an appropriate database role). Sure enough, my Windows 8 PC was then able to connect to the database when using Office Accounting. Clearly, having users authenticate as 'Guest' stinks from a security and auditing standpoint. So what I need are some ideas for how to work around this. I've tried switching the Windows 8 PC to a 'local account' and that works too, but requires giving up significant functionality on the Windows 8 PC. What I really need is a way to force the Windows 8 PC to use a specific set of credentials when connecting to the remote SQL instance. Office Accounting takes the logged-in username, which is my LiveID and doesn't correspond to any Windows user name. Anyone solved this issue?

    Read the article

  • Split MPEG video from command line?

    - by Tim
    I have a homemade DVD that I'm effectively trying to insert chapters into and rearrange - the original author burned it as one long chapter, and I'd like to rip it into smaller pieces and re-encode it into a new DVD. I ripped the DVD with the following command:

        mplayer dvd:// -dvd-device /dev/sr2 -dumpstream -dumpfile raw.vob

    I'm running Gentoo Linux with mplayer version 1.0-rc2_p20090731 (the latest available in Portage). I have a list of times that the chapters are supposed to span (for example 30:11-33:25), so my first thought was to rip the entire DVD and use mpgtx to cut out certain pieces of the file. My issue is that running mpgtx -i on the file reports quite a few timestamp jumps:

        Time stamps jumped from 59.753789 to 0.001622 at position 1d29800
        Time stamps jumped from 204963823030450.343750 to 31.165900 at position 2d4f800
        Time stamps jumped from 60.077878 to 0.001622 at position 43cc000
        Time stamps jumped from 60.024233 to 0.001622 at position 65c5000
        Time stamps jumped from 204963823068631.718750 to 52.549244 at position 7fd1000

    I've tried to fix the indexes using:

        mencoder raw.vob -oac copy -ovc copy -forceidx -o fixed.vob -of mpeg

    But mpgtx will still report timestamp issues. My immediate question: is there a way to take the ripped movie I have and correct its timestamps so I can cut it with mpgtx? If I can get that one issue out of the way, building the rest of the DVD will be smooth sailing. If it's not possible to fix the timestamps on this file: is there a better way to rip small chunks of the DVD into separate files for recompilation later? I'd very much like this to be done on Linux, and it'd be even better if I could script it somehow (feed in a list of start and end positions, or start times and durations, and get out a series of ripped files). If need be, I also have a Mac OS X machine available, but no Windows.

    Edit: I wound up finding another solution involving HandBrake and ffmpeg (with help from this question), but the question stands.

    Edit again: Turns out my other solution didn't quite work - the audio desynchronized by about five seconds, in about half of my cut mpgs - so I'm back to square one. Anyone?
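
    A minimal shell sketch of the "feed in a list of start times and durations" idea, using ffmpeg (the file names and the chapters.txt format are assumptions; stream-copying can carry the broken timestamps along, in which case dropping the copy options and re-encoding each piece is the fallback):

        #!/bin/bash
        # chapters.txt: one "start duration" pair per line, e.g. "00:30:11 00:03:14"
        n=0
        while read -r start length; do
            n=$((n+1))
            ffmpeg -i raw.vob -ss "$start" -t "$length" \
                -acodec copy -vcodec copy "chapter$(printf '%02d' "$n").mpg"
        done < chapters.txt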

    Read the article

  • One-time-use FTP passwords with cPanel/WHM?

    - by Tim Post
    I'm in a position where I need to give about a dozen people one-shot FTP access to a domain in order to upload their work. I'd like to use single-shot passwords, e.g. once they log in and upload, that's it - single use. I don't see any obvious means of doing this conveniently with cPanel. Prior to going through the bother of writing a WHM add-on to accomplish the same, I'd like to make sure that I'm not re-inventing the wheel. Thanks in advance.

    Read the article

  • Mercurial browser on Windows 2003 takes several refreshes before displaying repositories

    - by Tim Murphy
    When attempting to browse my Mercurial repositories, it usually takes several refreshes before the repository list is displayed. The configuration is as follows:

        Windows Server 2003 (dedicated machine hosted by http://www.server4you.com/); site has anonymous password protection with self-signed SSL
        Mercurial 1.5.3
        Python 2.6.5
        Python for Windows 32 extensions 214 py2.6
        isapi-wsgi 0.4.2

    The repositories are being served via ISAPI using the standard hgwebdir_wsgi.py file (copy to follow). Other problems with the repository server:

        Before doing a clone/push/etc I have to browse the repositories first, otherwise hg on my local machine cannot locate the site.
        I have one repository with a large changeset that, after a minute or so, throws the error "abort: error: An existing connection was forcibly closed by the remote host". Will be asking another question for this problem.

    What can I do to start tracking down this problem?

    hgwebdir_wsgi.py

        # Configuration file location
        hgweb_config = r'C:\Public\Mercurial\WebSite\hgweb.config'

        # Global settings for IIS path translation
        path_strip = 0   # Strip this many path elements off (when using url rewrite)
        path_prefix = 0  # This many path elements are prefixes (depends on the
                         # virtual path of the IIS application).

        import sys

        # Adjust python path if this is not a system-wide install
        #sys.path.insert(0, r'c:\path\to\python\lib')

        # Enable tracing. Run 'python -m win32traceutil' to debug
        if hasattr(sys, 'isapidllhandle'):
            import win32traceutil

        # To serve pages in local charset instead of UTF-8, remove the two lines below
        import os
        os.environ['HGENCODING'] = 'UTF-8'

        import isapi_wsgi
        from mercurial import demandimport; demandimport.enable()
        from mercurial.hgweb.hgwebdir_mod import hgwebdir

        # Example tweak: Replace isapi_wsgi's handler to provide better error message
        # Other stuff could also be done here, like logging errors etc.
        class WsgiHandler(isapi_wsgi.IsapiWsgiHandler):
            error_status = '500 Internal Server Error'  # less silly error message

        isapi_wsgi.IsapiWsgiHandler = WsgiHandler

        # Only create the hgwebdir instance once
        application = hgwebdir(hgweb_config)

        def handler(environ, start_response):
            # Translate IIS's weird URLs
            url = environ['SCRIPT_NAME'] + environ['PATH_INFO']
            paths = url[1:].split('/')[path_strip:]
            script_name = '/' + '/'.join(paths[:path_prefix])
            path_info = '/'.join(paths[path_prefix:])
            if path_info:
                path_info = '/' + path_info
            environ['SCRIPT_NAME'] = script_name
            environ['PATH_INFO'] = path_info
            return application(environ, start_response)

        def __ExtensionFactory__():
            return isapi_wsgi.ISAPISimpleHandler(handler)

        if __name__=='__main__':
            from isapi.install import *
            params = ISAPIParameters()
            HandleCommandLine(params)

    hgweb.config

        [paths]
        / = C:\Public\Mercurial\Repositories\*

        [web]
        allow_archive = bz2 gz zip   ; Allows archive downloads.
        allow_push = ########        ; Users that are allowed to push.

    Read the article

  • ImageMagick convert

    - by Tim
    convert is not working on the Linux server I am using:

        $ convert exploss_stumps.jpg exploss_stumps.eps
        convert: missing an image filename `exploss_stumps.eps' @ convert.c/ConvertImageCommand/2838

    Any idea why?
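
    A few hedged diagnostics: this error usually appears when convert could not read any input image at all, so it ends up complaining about the last filename on the line. The checks below are generic, not specific to this server:

        ls -l exploss_stumps.jpg              # does the input exist and is it readable?
        identify exploss_stumps.jpg           # can ImageMagick decode the JPEG at all?
        convert -list format | grep -i jpeg   # is JPEG read support built in?
        convert -list format | grep -i eps    # is EPS/PostScript write support (Ghostscript delegate) present?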

    Read the article

  • Export SSL Cert from IIS and import into GlassFish keystore

    - by Tim H
    What I need: I have an existing SSL certificate installed on IIS 6. On the same machine, I have GlassFish installed and would like to share the same certificate, since they both share the same hostname and use different ports: IIS uses 443 and GlassFish uses 8181.

    Why I need it: To reuse the existing SSL cert from IIS on GlassFish. I imagine that this is possible - I am able to install an SSL cert into GlassFish's keystore and then import the same exact cert into IIS. I just want to go the other way: imagine having an SSL cert on IIS in use for months, and now I want to enable SSL on GlassFish.

    What I have done:

        Created a keystore with an alias: server.hostname.com
        Imported the intermediate CA certs associated with the existing SSL cert
        Imported the existing SSL cert with the same alias, server.hostname.com, but the keytool won't allow this, as it is not associated:

        keytool error: java.lang.Exception: Public keys in reply and keystore don't match

    Why? Using a different alias causes the cert to not be trusted in the CA chain.
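
    The "Public keys in reply and keystore don't match" error is expected here: keytool treats the imported cert as a CA reply for a key pair that was generated inside the keystore, while the matching private key actually lives in IIS. A hedged sketch of the usual workaround - export the cert from IIS together with its private key as a .pfx (PKCS#12) file, then convert that whole keystore; file names, passwords and the source alias are placeholders:

        keytool -list -keystore exported.pfx -storetype PKCS12      # find the alias inside the .pfx
        keytool -importkeystore \
            -srckeystore exported.pfx -srcstoretype PKCS12 -srcstorepass changeit \
            -srcalias iis-alias \
            -destkeystore keystore.jks -deststorepass changeit \
            -destalias server.hostname.com

    GlassFish would then be pointed at this keystore, or the entry copied into its existing keystore.jks the same way.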

    Read the article

  • Remove (or add) Entry in Indicator-Applet (Ubuntu/GNOME)

    - by Tim Lytle
    I can't seem to find a guide or reference on how to configure the 'indicator-applet' (aka MessagingMenu) that came about in the 9.04 release of Ubuntu. It's that little mail icon that lists messaging apps. I can find docs about what it should do, people complaining about how it works, references that the API changed in 9.10, but not much on how to change the configuration. The MessagingMenu design spec page says that the config file should be at $HOME/.config/indicators/messages/applications/, but there's nothing there on my install (9.10).

    Read the article

  • SQL Server Agent jobs and turning off the server

    - by Tim Joseph
    I'm really new to SQL Agent jobs, but I am attempting to build up a maintenance regime for a server that will be turned off and on again at unknown intervals. It may run without being shut down for a month, or it might only be turned on 9-5... we don't know, and the client can't tell us because they don't know. So what I'm wondering is: what do I need to do to get SQL Server to run monthly and daily jobs either when they are due, or, if the due date is missed, to run them when the server is next powered on? I could come up with a mish-mash of periodic jobs and 'on-power-up' jobs, but if there is something more elegant that would be wonderful. Obviously I'll need to ensure the SQL Server Agent is configured to start when the computer is powered up, but what else?

    Read the article

  • LDAP, Active Directory and bears, oh my!

    - by Tim Post
    What I have:

        Workstations running Ubuntu Jaunty, mounting /home on a remote NFS server; user accounts are still created locally on each individual workstation
        Workstations running Windows XP / Vista
        NFS server (as noted above)
        Windows 2008 server
        All machines share a single private network (LAN)

    What I need to accomplish: a single, intuitive (GUI-driven) place for an office administrator to create user accounts. This should let anyone log in to their (Linux or Windows) workstation, then fire up remote desktop and use the same login on the Windows 2008 server, from any machine on the network. I have read so much on Samba, LDAP vs AD, etc., and now I'm even more confused than I was before I began researching the problem. Ideally, Linux and Windows users should be able to get to their local files once logged into the Win2008 server. I am a programmer, not an interoperability guru, and I'm completely lost on where to even start trying to accomplish this, plus I've run out of things to Google. How would you do this? Is it even possible?

    Read the article

  • SSD causing 100% CPU usage in Apache/PHP

    - by Tim Reynolds
    I wanted to increase the performance of my development laptop, so I added an Intel 320 Series SSD as my primary drive. Everything is amazingly fast, as expected, except Apache/PHP. I develop Magento using an Ubuntu 10.10 virtual machine. Information:

        Host OS: Win 7 Professional 64bit
        Guest OS: Ubuntu 10.10 32bit
        Processor: i7
        Chipset: QM55
        SSD: Intel 320 Series 160gb, 30% full
        HDD: Hitachi 320gb, 50% full (in side bay using an adapter)
        Laptop: Lenovo T510
        Using: Shared folders
        Apache Version: 2.2.16
        PHP Version: 5.3.3-1
        APC Version: 3.1.3p1
        APC Memory: 128M
        Using tmpfs for cache, log, session directories in Magento

    In the VM running on the SSD (VM files and source files are on the same drive), loading a product page in the Admin takes on average 26.2 seconds and uses 100% CPU for nearly the entire time. In the VM running on the old HDD, loading the same page takes on average 4.4 seconds and mostly uses around 40-50% of the CPU while rendering the page. I have read this post: Performance issues when using SSD for a developer notebook (WAMP/LAMP stack)? It says to change some settings in the BIOS. I have turned any and all power management features off in the BIOS. I can't for the life of me understand why this would be happening.

    Read the article

  • Command-line connect to wired network for Ubuntu

    - by Tim
    I'd like to know how to use the command line to connect to a wired network, in general, on Ubuntu 8.10. In my case, I connect a cable to my laptop but it doesn't work with WICD, so I'd like to try the command-line method. Here is the ifconfig output for my network adapters:

        $ ifconfig
        eth0      Link encap:Ethernet  HWaddr 00:c0:9f:8d:23:74
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
                  Interrupt:19 Base address:0x1800

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:4457 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:4457 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:493002 (493.0 KB)  TX bytes:493002 (493.0 KB)

        wlan0     Link encap:Ethernet  HWaddr 00:0e:9b:ab:56:19
                  UP BROADCAST NOTRAILERS PROMISC ALLMULTI  MTU:576  Metric:1
                  RX packets:1508929 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:768144 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:806027375 (806.0 MB)  TX bytes:78834873 (78.8 MB)

        wlan0:avahi Link encap:Ethernet  HWaddr 00:0e:9b:ab:56:19
                  inet addr:169.254.5.92  Bcast:169.254.255.255  Mask:255.255.0.0
                  UP BROADCAST NOTRAILERS PROMISC ALLMULTI  MTU:576  Metric:1

        wmaster0  Link encap:UNSPEC  HWaddr 00-0E-9B-AB-56-19-00-00-00-00-00-00-00-00-00-00
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

    Thanks and regards!

    UPDATE: Tried what oyvindio suggested. Here is the failing message:

        $ sudo dhclient3 eth0
        There is already a pid file /var/run/dhclient.pid with pid 18279
        killed old client process, removed PID file
        Internet Systems Consortium DHCP Client V3.1.1
        Copyright 2004-2008 Internet Systems Consortium.
        All rights reserved.
        For info, please visit http://www.isc.org/sw/dhcp/
        mon0: unknown hardware address type 803
        wmaster0: unknown hardware address type 801
        mon0: unknown hardware address type 803
        wmaster0: unknown hardware address type 801
        Listening on LPF/eth0/00:c0:9f:8d:23:74
        Sending on   LPF/eth0/00:c0:9f:8d:23:74
        Sending on   Socket/fallback
        DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 5
        DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 10
        DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 12
        DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 15
        DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 11
        DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 8
        No DHCPOFFERS received.
        No working leases in persistent database - sleeping.
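
    One thing the output above already suggests: eth0 is UP but has no RUNNING flag, which normally means the kernel detects no link on the port (cable, switch port, or driver), and "No DHCPOFFERS received" is consistent with that. A hedged set of commands to check and retry (ethtool may need to be installed separately):

        sudo ifconfig eth0 up                    # make sure the interface is up
        sudo mii-tool eth0                       # or: sudo ethtool eth0 | grep 'Link detected'
        sudo dhclient3 eth0                      # then ask for a DHCP lease again

    If the link is up and DHCP still gets no offers, a static address rules out the DHCP server itself (the addresses below are examples for a typical home LAN, not taken from this network):

        sudo ifconfig eth0 192.168.1.10 netmask 255.255.255.0
        sudo route add default gw 192.168.1.1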

    Read the article

  • Anyone fix IBM Thinkpad (T40) laptops in NY/NJ and charge a success fee (ie. only the laptop is fixe

    - by Tim
    Are there any master technicians (or shops) out there in NY/NJ who would be willing to work with my two IBM Thinkpad T40 laptops? They both seem to have hardware/mechanical problems, and I am sure they can be fixed if inspected by the right people. But I don't want to pay unless someone can fix them. Do you know anyone in NY/NJ who does business such that they won't charge me anything if they can't fix my laptop?

    Read the article

  • Checking for orphaned snapshots - ESXi5

    - by Tim Alexander
    So we had some issues with our passive mail node over the weekend doing vmtools updates, and to resolve a problem we had to revert to a snapshot and then reseed all the databases across. All in all everything seemed fine: the server works and CCR copy status is running fine. I used the "Delete All" option this morning to remove the snapshot, and the process, according to vCenter, has completed with no errors and no "Needs Consolidation" flag. This all seems fine until I check the datastore that holds the VM on our SAN, where I can clearly see snapshot files that are pretty big [see attached image]. These do not seem to be changing size, and the date modified is around the time the work was started for the vmtools update. Does this possibly mean that at some stage, possibly during reversion or hard resetting of the VM, they have become orphaned? Are there any methods to check the orphaned status of snapshots? We are running ESXi 5.0 Update 1 with storage provided by an EMC SAN. Enterprise Plus is the license level.
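
    A hedged way to compare what ESXi thinks exists with what is actually on disk, from the ESXi shell/SSH (the VM ID, datastore and folder names below are placeholders):

        vim-cmd vmsvc/getallvms                                   # find the VM's numeric ID
        vim-cmd vmsvc/snapshot.get <vmid>                         # snapshots the host still knows about
        ls -lh /vmfs/volumes/<datastore>/<vmname>/*-delta.vmdk    # redo-log files left on disk

    If snapshot.get reports nothing while large -delta.vmdk files remain, they may well be orphaned; taking a fresh snapshot and choosing "Delete All" again (or a Storage vMotion to another datastore) is a common way to force consolidation before removing anything by hand.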

    Read the article

  • Driver problems with rerunning WIM capture

    - by Tim Brigham
    How do I handle driver problems associated with creating a new WIM file using a previously created image as a base? I am using a custom MDT image which installs Windows 7, Office 2010, etc., and which was created from a previously deployed base image of Windows 7 + SP1 + updates. When driver injection is enabled I receive a group of messages (collected from different logs, screens, etc.). If I turn the preinstall driver injection off, things work as expected. If I leave it on I receive the following:

        windows cannot install required files 0x80004005
        8004004 non zero return code
        ltiapply rc=31
        Unhandled error returned by LTIApply: a device attached to the system is not functioning. (-2147024865 0x8007001F)

    Read the article

  • Change Linux Console's Default Monitor

    - by Tim M
    Is there any way to specify which monitor the console is displayed on in Linux? Details: I have a 3 monitor setup with 2 video cards. When I boot the computer, the BIOS displays on the PCI graphics card (which has a small monitor). When starting Linux, the console is displayed on the same monitor. Is there a way to have the console output on a different monitor? I'm using the vesafb framebuffer. I don't see a way in my BIOS to change the default video card.
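
    One hedged avenue, if the second card can expose its own framebuffer device: the kernel's fbcon=map: parameter chooses which framebuffer the text consoles attach to. vesafb by itself only binds to the BIOS-primary card, so this only helps if a native fb/KMS driver registers /dev/fb1 for the other card. The root device and paths below are examples, not taken from this system:

        # GRUB legacy kernel line (e.g. in /boot/grub/menu.lst), appended parameter:
        kernel /boot/vmlinuz root=/dev/sda1 fbcon=map:1    # map the consoles to /dev/fb1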

    Read the article

  • Expired password change through VPN failure

    - by Tim Alexander
    I am setting up some new accounts to be used by some contractors. They are going to connect via VPN to our network. My requirement is to set the password initially and then have them change it the first time they log in; as a result, the "User must Change Password" box is checked. Loading up a laptop and testing has yielded poor results. When logging in I get a notification that the password has expired and a box to fill in, which I do. It then appears again, so I dutifully fill in the password details again. I am then presented with a "Sending Password...." error box with Error 619 listed as the reason. Trying to reconnect then gives a 691 error that the password is bad. From the firewall, which is the actual VPN server, I can see RAD_ACCESS_DENIED, and from the DC running NPS (acting as a RADIUS server for the firewall, with MS-CHAP-v2 enabled and "User can change password after it has expired" checked) I cannot see a request to change the password. I can only see Event IDs 4776, 4625 and 6273 (reason 16). I can log in without the change-password flag fine, so I know logins are being authenticated. Really hoping someone might be able to assist in tracking down the lack of password change processing on the DC.

    Read the article

  • Missing ISOBurn context menu

    - by Tim Jarvis
    This is a slightly updated version of the question I asked here. I have isoburn.exe in the system32 directory of my machine; however, I am missing the context menu entry ("Burn Disk Image") when right-clicking an ISO image. Does anyone know how I can restore the context menu?

    Read the article

  • emule cannot create Incoming Files directory

    - by Tim
    Hi, I have installed emule on Windows 7. When I start emule, I receive the message "failed to create Incoming Files directory 'C:\Program Files\eMule\Incoming' - Access is denied", a similar message for "C:\Program Files\eMule\Temp", and "Failed to initialize cryptokeys - secure ident disabled". It looks like Windows 7 does not allow emule to perform these operations. How can I give emule the rights to do what it needs? Thanks!

    Read the article

  • Migrate an intermediate CA to a new root

    - by Tim Brigham
    Using the Microsoft CA, is there any way to cut over to a new certificate authority from an intermediate authority? Both my systems are Microsoft CAs - I have a 2008 R2 Enterprise CA (intermediate) and an old 2003 CA (root). The 2003 box bit the dust and I don't have good backups. I still have a few months before the CRL expires; instead of having to cut over to a new intermediate authority, is there a ready way to simply point this intermediate authority to a new offline CA?

    Read the article

  • Can I get simple name resolution on the local network without DNS, on a Mac?

    - by tim
    I've got Mantis running on a Linux VM with a Win2k8 Server host. I installed Samba with the following configuration:

        [global]
        workgroup = COMPANY
        netbios name = MANTIS
        security = share

    Now on all our Windows machines people can simply go to http://mantis, rather than http://172.16.0.20. However, this doesn't appear to work on the Mac machines. Any ideas how I can sort this without changing anything on the Windows server?
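
    The Macs ignore the Samba/NetBIOS name because OS X doesn't use NetBIOS broadcasts to resolve names typed into a browser. Two hedged options that leave the Windows server untouched (the apt-get line assumes the VM is Debian/Ubuntu-based, which isn't stated above):

        # Option 1: on each Mac, pin the name in /etc/hosts
        echo "172.16.0.20  mantis" | sudo tee -a /etc/hosts

        # Option 2: on the Linux VM, advertise the host over Bonjour/mDNS so Macs can use http://mantis.local
        sudo apt-get install avahi-daemon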

    Read the article
