Search Results

Search found 28957 results on 1159 pages for 'single instance'.

  • Difference between SSLCertificateFile and SSLCertificateChainFile?

    - by chrisjlee
    Normally, SSL is set up for a virtual host with the following directives:

        Listen 443
        SSLCertificateFile /home/web/certs/domain1.public.crt
        SSLCertificateKeyFile /home/web/certs/domain1.private.key
        SSLCertificateChainFile /home/web/certs/domain1.intermediate.crt

    For enabling SSL for a single domain on a server with multiple vhosts, will this configuration work? What is the difference between SSLCertificateFile and SSLCertificateChainFile? The client has purchased a certificate from GoDaddy. It looks like GoDaddy only provides an SSLCertificateFile (.crt file) and an SSLCertificateKeyFile (.key file), and not an SSLCertificateChainFile. Will my SSL still work without an SSLCertificateChainFile path specified? Also, is there a canonical path where these files should be placed?
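
    For reference, a minimal sketch of how the two directives typically sit together in a single-domain vhost: SSLCertificateFile holds only the server certificate, while SSLCertificateChainFile holds the issuer's intermediate certificate(s). All paths and the ServerName below are placeholders:

        <VirtualHost *:443>
            ServerName www.example.com
            SSLEngine on
            # server certificate only
            SSLCertificateFile /etc/ssl/certs/example.crt
            # private key matching the certificate
            SSLCertificateKeyFile /etc/ssl/private/example.key
            # intermediate (issuer) certificates supplied by the CA, if any
            SSLCertificateChainFile /etc/ssl/certs/example-intermediate.crt
        </VirtualHost>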

    Read the article

  • Clustering Strings on the basis of Common Substrings

    - by pk188
    I have around 10000+ strings and have to identify and group all the strings which look similar (I base the similarity on the number of common words between any two given strings). The more common words two strings share, the more similar they are. For instance:

        1. How to make another layer from an existing layer
        2. Unable to edit data on the network drive
        3. Existing layers in the desktop
        4. Assistance with network drive

    In this case, strings 1 and 3 are similar, with common words "existing" and "layer", and strings 2 and 4 are similar, with common words "network drive" (eliminating stop words). The steps I'm following are:

        1. Iterate through the data set.
        2. Do a row-by-row comparison.
        3. Find the common words between the strings.
        4. Form a cluster where the number of common words is greater than or equal to 2 (eliminating stop words).
        5. If the number of common words is less than 2, put the string in a new cluster.
        6. Assign each row either to an existing cluster or to a new one, depending on the common words.
        7. Continue until all the strings are processed.

    I am implementing the project in C#, and have got to step 3. However, I'm not sure how to proceed with the clustering. I have researched a lot about string clustering but could not find any solution that fits my problem. Your inputs would be highly appreciated.
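
    A minimal sketch of the greedy grouping described in steps 4-7, written in Java for illustration (the question's project is in C#; the tokenizer and the stop-word set here are stand-ins):

        import java.util.*;

        public class CommonWordClusterer {
            // groups rows into clusters that share at least minShared non-stop words
            public static List<List<String>> cluster(List<String> rows, Set<String> stopWords, int minShared) {
                List<List<String>> clusters = new ArrayList<>();
                List<Set<String>> clusterWords = new ArrayList<>(); // running word set per cluster
                for (String row : rows) {
                    Set<String> words = new HashSet<>(Arrays.asList(row.toLowerCase().split("\\s+")));
                    words.removeAll(stopWords);
                    int best = -1, bestShared = 0;
                    for (int i = 0; i < clusters.size(); i++) {
                        Set<String> shared = new HashSet<>(words);
                        shared.retainAll(clusterWords.get(i));
                        if (shared.size() >= minShared && shared.size() > bestShared) {
                            best = i;
                            bestShared = shared.size();
                        }
                    }
                    if (best >= 0) {                  // join the most similar existing cluster
                        clusters.get(best).add(row);
                        clusterWords.get(best).addAll(words);
                    } else {                          // no cluster shares enough words: start a new one
                        clusters.add(new ArrayList<>(Collections.singletonList(row)));
                        clusterWords.add(words);
                    }
                }
                return clusters;
            }
        }

    Note that a single greedy pass like this is order-dependent; if that matters, compare all pairs first and merge, at the cost of O(n^2) comparisons.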

    Read the article

  • JiglibX addition to existing project questions

    - by SomeXnaChump
    Got a very simple existing project that basically contains a lot of cubes. Now I am wanting to add a physics system to it, and JiglibX seemed like the simplest one with some tutorials out there. My main problem is that the physics don't seem to be working how I imagined: I expected my tower of cubes to come crashing down, but they don't seem to do anything. I think my problem is that my cubes do not inherit DrawableGameComponent; they are managed by a world object that updates and renders them, so they are at no point put into the game's component list. I am not sure if this means that JiglibX will not be able to interact with them, as in all the tutorials there are no explicit calls to add the Body objects to the physics system, so I can only presume that they use a static/singleton under the hood which automatically hooks in all things, or they use the game's component list somehow. I also noticed that a lot of the tutorials use the following when setting up the physics system:

        float timeStep = (float)gameTime.ElapsedGameTime.Ticks / TimeSpan.TicksPerSecond;
        PhysicsSystem.CurrentPhysicsSystem.Integrate(timeStep);

    Would it not be better to keep a local instance of the created PhysicsSystem object and just call myPhysicsSystem.Integrate(timeStep)?
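
    For what it's worth, the local-instance variant asked about at the end would look roughly like this - a sketch only, using the same members the tutorials quote, and keeping the static CurrentPhysicsSystem assigned as well in case library internals read it:

        PhysicsSystem world = new PhysicsSystem();   // keep our own reference
        PhysicsSystem.CurrentPhysicsSystem = world;  // stay compatible with code that reads the singleton
        ...
        float timeStep = (float)gameTime.ElapsedGameTime.Ticks / TimeSpan.TicksPerSecond;
        world.Integrate(timeStep);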

    Read the article

  • How is it possible for SSD drives to have such good latency?

    - by tigrou
    The first time I read about SSDs, I was surprised to learn they internally use NAND flash chips. This kind of memory is generally slow (low bandwidth) and has high latency, while SSDs are just the opposite. But here is how it works: SSD drives increase their bandwidth by using several NAND flash chips in parallel. In other words, they do data striping (aka RAID 0) across several chips, handled by the controller. What I don't understand is how SSD drives achieve such low latency while they are using NAND chips (or at least latency far better than a single NAND chip would manage). EDIT: I think I under-estimated NAND chip capabilities. USB drives, while powered by NAND, are mostly limited by the USB protocol (which has pretty high latency) and the USB controller. That explains their poor performance in some cases.

    Read the article

  • How to use chain.p7b with Apache?

    - by Debianuser
    I wanted to set up an SSL website on Apache and applied for a certificate from my local ISP. All they sent me was a single file named chain.p7b. I have always used certificates from other vendors without any issues, but they usually provide two files, to be configured as SSLCertificateFile and SSLCertificateChainFile in Apache. Following instructions from several online resources, I opened the p7b file in Windows and extracted 4 certificates from it. I then tried configuring Apache with one of the files and it worked, but the browser shows a warning: "The certificate is not trusted because no issuer chain was provided." I thought I had to use the remaining 3 files as SSLCertificateChainFile and/or SSLCACertificateFile. I tried that but it didn't work, so I am assuming it might be something completely different. Has anyone faced this issue before? The following page http://www-01.ibm.com/support/docview.wss?uid=swg21458997 talks about using a keystore, but is that relevant to Apache?
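
    A PKCS#7 (.p7b) bundle can usually be unpacked on the command line with OpenSSL instead of via Windows; the extracted PEM certificates can then be split into the server certificate (SSLCertificateFile) and the issuer certificates (SSLCertificateChainFile). File names below are placeholders:

        # extract every certificate in the bundle as PEM
        # (add -inform DER if the file is binary rather than base64)
        openssl pkcs7 -print_certs -in chain.p7b -out certs.pem
        # check a certificate's subject and issuer to tell the server cert from the intermediates
        openssl x509 -in certs.pem -noout -subject -issuer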

    Read the article

  • Options for deploying an application

    - by terence
    I've created a simple web application, a self-contained tool with a user system. I host it publicly for everyone to use, but I've gotten some requests to allow companies to host the entire application privately on their internal systems. I have no idea what I'm doing - I have no experience with deployment or server stuff; I'm just some person who learned enough JS and PHP to make a tool for my own needs. The application runs on Apache, MySQL, and PHP. What's the best way to package my application to let others run it privately? I'm assuming there are better options than just sending them all the source code. I'd like to find a solution that:

        1. does not require support to set up (I'm just a single developer without much free time),
        2. is easy to configure, and
        3. is easy to update.

    Does there exist some one-size-fits-all thing that I can give to someone, they can install it, and bam, now when they go to http://myapplication/ on their intranet, it works? Thanks for your help.

    Read the article

  • Let's do the Time Warp again!

    - by Mike Dietrich
    Once you start reading about Daylight Saving Time changes in MyOracleSupport, you'll find a lot of notes explaining this and that, back and forth. But sometimes there seems to be a bit too much information - and a lack of clear instructions. One customer called it the "Time Zone Spaghetti": after reading MOS notes about DST for several hours, he ended up back at the note where he had begun, still not clear about what to do. I usually use the scripts from MOS Note:977512.1, as you just have to exchange the DST version you are upgrading to, and it has everything you need to check and adjust the time zone data in the database - for instance, after applying the DST V18 patch to your database homes. As a reminder to myself when traveling, I have stored a copy of the script part of that note here - and please note that this is not an official Oracle version. Always read and check the original MOS Note:977512.1, as it may have changed in the meantime, may contain corrections, and has a lot more explanatory information than I could cover here. Credit to Gunter Vermeir from Oracle Support, who is the owner of that MOS Note and has compiled all that useful stuff together. DST_prepare.sql DST_adjust.sql
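
    As a quick sanity check before and after running such scripts, the time zone file version a database is actually using can be read from the data dictionary (standard views, not specific to the note's scripts):

        SELECT version FROM v$timezone_file;

        SELECT property_name, property_value
          FROM database_properties
         WHERE property_name LIKE 'DST_%';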

    Read the article

  • Migrating Gmail to Office 365

    - by user218699
    Good morning, I have been setting up Office 365 for my organization. We are currently using Gmail. I have synced our local Active Directory server with Office 365, as well as our domains. The problem I am having has to do with migrating mailboxes from Gmail to Office 365. I have been using this article to walk me through the process: http://technet.microsoft.com/en-us/library/dn568114.aspx The issue arises when I begin to sync the mailboxes. Currently I have been trying to sync my own mailbox as a test. The synchronization process has been going on for about 15 hours (for just one mailbox) with no errors or any information given by Office 365, other than the "Syncing" status on the migration page in the Exchange Admin Center. Is syncing a single mailbox supposed to take this long, or have I missed a step? Thanks!
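
    If the tenant is reachable over remote PowerShell, the migration can usually be inspected more closely than the admin center's "Syncing" label allows (Exchange Online migration cmdlets; the identity below is a placeholder):

        Get-MigrationUserStatistics -Identity user@example.com | Format-List Status,*Count*,Error*
        # or check the whole batch:
        Get-MigrationBatch | Format-List Identity,Status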

    Read the article

  • What is the best resource or method for learning more about servers and networking?

    - by kaleidomedallion
    I can get around a server to some degree, but everything I have learned has been as I have needed to learn it. Every once in a while I will learn about some new command that will revolutionize how I think about systems and networking, and realize how I could have been using it this whole time if only I had known about it. Anything to bring someone from novice to some kind of fluency? Do I just need to earn the battle scars bit by bit over the course of years? (I mean of course I do but what can I do to help it be not quite so painful?) For instance, I am still fuzzy on all of networking. Any help would be greatly appreciated.

    Read the article

  • overload environment

    - by Richo
    I've recently switched to nesting my home directory across all my machines in an svn repo, meaning that my utility scripts, configuration (irssi, vim, zsh, screen, etc.) as well as my .profile and so forth are easier to keep up to date across all the places I log in. I use a set of sourced .local files to override them on a per-site basis as required. As it stands, many of my scripts inherit some form of configuration, and for the most part I've been setting an environment variable in .profile and then, if needed, overriding it per site in .profile.local. This works great, but are there pitfalls in having a stack of environment variables? Compared to my default environment within an X session before any of my personal configuration, I have not even increased it by 50%, but some of the machines I work on are low-resource: am I bloating my system unnecessarily, or being needlessly paranoid? Should I start moving this config into separate flat files that are loaded as needed? That means extra infrastructure, or alternatively writing a single module for storing config that all of my utilities can inherit.
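
    For reference, the override pattern described above usually comes down to a guarded source at the end of ~/.profile (the variable names here are illustrative):

        # ~/.profile -- shared defaults, kept in svn
        export EDITOR=vim
        export SCRATCH_DIR="$HOME/tmp"

        # per-site overrides, never checked in
        [ -f "$HOME/.profile.local" ] && . "$HOME/.profile.local"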

    Read the article

  • Using nested public classes to organize constants

    - by FrustratedWithFormsDesigner
    I'm working on an application with many constants. At the last code review it came up that the constants are too scattered and should all be organized into a single "master" constants file. The disagreement is about how to organize them. The majority feel that using the constant name alone should be good enough, but this leads to code that looks like this:

        public static final String CREDITCARD_ACTION_SUBMITDATA = "6767";
        public static final String CREDITCARD_UIFIELDID_CARDHOLDER_NAME = "3959854";
        public static final String CREDITCARD_UIFIELDID_EXPIRY_MONTH = "3524";
        public static final String CREDITCARD_UIFIELDID_ACCOUNT_ID = "3524";
        ...
        public static final String BANKPAYMENT_UIFIELDID_ACCOUNT_ID = "9987";

    I find this type of naming convention cumbersome. I thought it might be easier to use public nested classes and have something like this:

        public class IntegrationSystemConstants {
            public class CreditCard {
                public static final String UI_EXPIRY_MONTH = "3524";
                public static final String UI_ACCOUNT_ID = "3524";
                ...
            }
            public class BankAccount {
                public static final String UI_ACCOUNT_ID = "9987";
                ...
            }
        }

    This idea wasn't well received because it was "too complicated" (I didn't get much detail as to why it might be too complicated). I think this creates a better division between groups of related constants, and auto-complete makes them easier to find as well. I've never seen this done, though, so I'm wondering if this is an accepted practice or if there are better reasons it shouldn't be done.

    Read the article

  • Creating a new WebSphere cluster

    - by user154561
    I need to improve WAS performance and want to set up a cluster. I have two separate machines with WebSphere 7 on them. As I see it, to do this I need to add a node from my second (remote) WAS to the first one. I tried to use "Add node" from the console, but without success: it can't find the host when I try to execute it. The WAS help says this about host:

        Specifies the host name or IP address of the node to add to the cell. A WebSphere Application Server instance must be running on this machine.

    Does that mean that I cannot add a node from a remote machine with "Add node"?
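
    A hedged note: federating a remote node is normally done by running addNode from the machine being added, pointed at the deployment manager, rather than pulled in from the console. The path, host name, and port below are placeholders (8879 is the default dmgr SOAP port):

        # run on the remote node's profile, not on the deployment manager
        /opt/IBM/WebSphere/AppServer/profiles/AppSrv01/bin/addNode.sh dmgr_host 8879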

    Read the article

  • Hyper-V cluster - one VM won't migrate

    - by Chris W
    We have a failover cluster built on 6 blades, each running Hyper-V. Each box is running Server 2008 R2. We've got a number of VMs running that all have the same basic config: VHD stored on a cluster shared volume, and 2 virtual NICs (1 for the LAN connection and 1 for the SAN connection). All of our VMs will happily migrate between any of the blades apart from one single VM, which runs fine on its current blade but will not migrate anywhere else. What could be the cause, and where should I look for a detailed error message? I can't seem to find much information logged anywhere. Edit: I know the usual culprit is mismatched resource names. We've already been there with the NICs named differently on some of the blades. As far as we can tell, everything now looks identical on each bit of metal.
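
    One place to look on 2008 R2 is the consolidated cluster log, which is generated on demand and is usually more detailed about failed live migrations than the event viewer (run in an elevated PowerShell):

        Import-Module FailoverClusters
        Get-ClusterLog -Destination C:\Temp   # writes a cluster.log per node to C:\Temp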

    Read the article

  • How to prevent Network Manager from auto-creating network connection profiles with "available to everyone" by default

    - by airtonix
    We have several laptops at work which use Ubuntu 11.10 64-bit. Our wifi access point requires WPA2-EAP authentication (backed by an LDAP server), and staff use these laptops for presentations via the guest account. By default, when you have a wifi card, Network Manager displays the available wireless access points, so the logical course of action for a novice user is to single-left-click the easy-to-use option in the Network Manager drop-down list. At this point the staff member (logged in with the guest account) expects to just connect and enter any authentication details if required. But because they are using the guest account, they won't ever have admin permissions (nor do I want them to), and so PolKit kicks in with a request for admin authorisation. I solved this part by modifying the PolKit permissions to allow all users to create system network connections. However, because staff members log onto the wifi access point with LDAP credentials, and because Network Manager now saves those credentials as a system connection, their password is available to the next guest user session (system connection profiles are stored in /etc/NetworkManager/system-connections/*). It creates system connections by default because "Available to all users" is ticked by default when you quickly connect to a new wifi access point. I want Network Manager not to tick this by default. That way I can revert the changes I made to PolKit, and users' network connection profiles will be purged when they log out.
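
    For what it's worth, a stored system connection can still be scoped to a single login via a permissions line in its keyfile under /etc/NetworkManager/system-connections (the values below are illustrative), which at least keeps saved credentials tied to one user:

        [connection]
        id=StaffWifi
        type=802-11-wireless
        permissions=user:guest:;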

    Read the article

  • How to enable a Web portal-based rich enterprise platform on different domains and hosts using JS without customization/server configuration

    - by S.Jalali
    Our company Coscend has built a Web portal-based communications and cloud collaboration platform using JavaScript (JS), embedded in HTML5 and formatted with CSS3. Other technologies used in the core code include Flash, Flex, PostgreSQL and MySQL. Our team would like to host this platform in five different Windows and Linux environments that run different types of Web servers, such as IIS and Apache. Technical challenge: each of these Windows and Linux servers has a different host name and domain name (and IP address), but we would like to keep our enterprise platform independent of host server configuration. Possible approach to a solution: we think an API (an interface module with a GUI) is needed to accomplish this level of modularity and flexibility when deploying at our enterprise customers. In this context, our team would appreciate your guidance on:

        1. Is there an algorithmic method to implement this Web portal-based platform in these Windows and Linux environments while separating it from server configuration, i.e., customizing the host name, domain name and IP address for each individual instance? For example, would it be suitable to create some JS variables/objects for host name and domain name and call them in the different implementations? If a reference to the host/domain names occurs in hundreds of portal modules, these variables or JS objects would replace that.
        2. If so, what is the best way to make these object modules written in JS portable and re-usable across different environments and instances for enterprise customers?

    Here is an example of the implemented code for the said platform: the Web site www.CoscendCommunications.com was built using this enterprise collaboration platform and has the base code examples of the platform. This Web site is domain-specific; we would like to make the underlying platform domain- and host-independent, so it can be deployed in multiple instances for our enterprise customers.
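
    Along the lines of the JS-variables idea in question 1, one minimal sketch is to derive the host and base URL from the page's own location at load time instead of hard-coding them (the config object name is hypothetical; window.location is standard DOM):

        // hypothetical central config object, computed once at load time
        var PlatformConfig = {
            host: window.location.hostname,
            base: window.location.protocol + "//" + window.location.host
        };
        // portal modules then reference PlatformConfig.base instead of a literal domain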

    Read the article

  • Wine causes twin view to break

    - by deanvz
    I have endlessly been playing around with the Nvidia X Server settings and changing my xorg.conf file to get it to work for me, and on most days it's fine. Each time, I get it working for a while - and then this morning the most bizarre thing happened: the moment I open any type of Wine program (which never used to be a problem), my TwinView setup disappears and I am left with mirrored displays. I try to change the settings in the Nvidia driver, but it's not interested and the screens remain mirrored. I have a workaround: restart my PC... Below are the contents of my current xorg.conf file.

        # nvidia-settings: X configuration file generated by nvidia-settings
        # nvidia-settings: version 295.33 (buildd@zirconium) Fri Mar 30 13:43:34 UTC 2012

        Section "ServerLayout"
            Identifier "Layout0"
            Screen 0 "Screen0" 0 0
            InputDevice "Keyboard0" "CoreKeyboard"
            InputDevice "Mouse0" "CorePointer"
            Option "Xinerama" "0"
        EndSection

        Section "Files"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier "Mouse0"
            Driver "mouse"
            Option "Protocol" "auto"
            Option "Device" "/dev/psaux"
            Option "Emulate3Buttons" "no"
            Option "ZAxisMapping" "4 5"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier "Keyboard0"
            Driver "kbd"
        EndSection

        Section "Monitor"
            # HorizSync source: edid, VertRefresh source: edid
            Identifier "Monitor0"
            VendorName "Unknown"
            ModelName "LG Electronics W1934"
            HorizSync 30.0 - 83.0
            VertRefresh 56.0 - 75.0
            Option "DPMS"
        EndSection

        Section "Device"
            Identifier "Device0"
            Driver "nvidia"
            VendorName "NVIDIA Corporation"
            BoardName "GeForce 9400 GT"
        EndSection

        Section "Screen"
            Identifier "Screen0"
            Device "Device0"
            Monitor "Monitor0"
            DefaultDepth 24
            Option "TwinView" "1"
            Option "metamodes" "CRT-0: nvidia-auto-select +0+0, CRT-1: nvidia-auto-select +1440+0"
            SubSection "Display"
                Depth 24
            EndSubSection
        EndSection

    Read the article

  • Are scheduled job servers the right choice for a time sensitive game engine?

    - by maple_shaft
    I am currently architecting and designing an exciting new web application that will be entering some areas in which I have very little experience: game development. The application is not necessarily a game, but there are some very time-sensitive tasks and scheduled jobs that a server will need to run to perform game-related activities (e.g. a new match-up starts at noon every day for a 12-day tournament, scoreboards update at 5pm every day, etc.). In the past I have typically used cron jobs with the Quartz Scheduler running within a web application server, but I know that this likely isn't a scalable solution for the truly massive user base that management is telling me to expect (granted, they are management and are probably highly optimistic about this), nor for how important these tasks are in this web application. The other important thing I want to consider is avoiding a SPOF (Single Point Of Failure): if the primary job server goes down, another job server should be able to successfully run the job in its place. I suppose this can be done with appropriate record locking and database transactions. My question is whether scheduled jobs like cron running on a web application server are a wise design choice given the time-sensitive game tasks of this application, or whether there is something more appropriate for running a scalable game engine parallel to the web application servers.
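
    For reference, Quartz itself can cover the SPOF concern when backed by a shared JDBC JobStore: in clustered mode only one node fires a given trigger, and a surviving node picks up the work if that node dies. A minimal sketch of the scheduling side (clustering is switched on in quartz.properties, not shown; the job class is hypothetical):

        import org.quartz.*;
        import org.quartz.impl.StdSchedulerFactory;

        public class TournamentJobs {
            // hypothetical job: opens the day's match-up
            public static class StartMatchupJob implements Job {
                public void execute(JobExecutionContext ctx) throws JobExecutionException {
                    // ... open the new match-up here ...
                }
            }

            public static void main(String[] args) throws SchedulerException {
                Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
                JobDetail job = JobBuilder.newJob(StartMatchupJob.class)
                        .withIdentity("startMatchup", "tournament")
                        .build();
                Trigger trigger = TriggerBuilder.newTrigger()
                        .withIdentity("noonDaily", "tournament")
                        .withSchedule(CronScheduleBuilder.cronSchedule("0 0 12 * * ?")) // noon every day
                        .build();
                scheduler.scheduleJob(job, trigger);
                scheduler.start();
            }
        }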

    Read the article

  • Where will the image be saved? [closed]

    - by Dummy Derp
        import java.awt.Dimension;
        import java.awt.Rectangle;
        import java.awt.Robot;
        import java.awt.Toolkit;
        import java.awt.image.BufferedImage;
        import javax.imageio.ImageIO;
        import java.io.File;
        ...
        public void captureScreen(String fileName) throws Exception {
            Dimension screenSize = Toolkit.getDefaultToolkit().getScreenSize();
            Rectangle screenRectangle = new Rectangle(screenSize);
            Robot robot = new Robot();
            BufferedImage image = robot.createScreenCapture(screenRectangle);
            ImageIO.write(image, "png", new File(fileName));
        }
        ...

    Going over this code I found on the internet, I got everything except the part where the file is created. What format should the file name be in? Should it be "C:/myFolder/myImage.png" or just "myImage.png", and where will it be saved? Here is what the docs say:

        File
        public File(String pathname)
        Creates a new File instance by converting the given pathname string into an abstract pathname. If the given string is the empty string, then the result is the empty abstract pathname.
        Parameters:
            pathname - A pathname string
        Throws:
            NullPointerException - If the pathname argument is null
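
    The question can be answered empirically: a relative name such as "myImage.png" resolves against the JVM's current working directory (the user.dir system property, i.e., wherever the program was launched from), while an absolute path like "C:/myFolder/myImage.png" is used as given. A tiny check:

        import java.io.File;

        public class WhereDoesItGo {
            public static void main(String[] args) {
                // prints where a relative name would actually land
                System.out.println(new File("myImage.png").getAbsolutePath());
                System.out.println(System.getProperty("user.dir"));
            }
        }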

    Read the article

  • Packaging MATLAB (or, more generally, a large binary, proprietary piece of software)

    - by nfirvine
    I'm trying to package MATLAB for internal distribution, but this could apply to any piece of software with the same architecture. In fact, I'm packaging multiple releases of MATLAB to be installed concurrently. Key things:

        - very large installation size (~4 GB)
        - composed of a core and several plugins (toolboxes)

    Initially, I created a single "source" package (matlab2011b) that builds several .debs (mainly matlab2011b-core and matlab2011b-toolbox-* for each toolbox). The rules file is just the standard

        all:
            dh $@

    There is no Makefile; only copying files. I use a number of debian/*.install files to specify files to copy from a copy of an installation to /usr/lib/. The problem is, every time I build the thing (say, to make a correction to the core package), it re-copies every file listed in the *.install files to e.g. debian/$packagename/usr/ (the build phase), and then has to bundle that into a .deb file. It takes a long time, on the order of hours, and does a lot of extra work. So my questions are:

        1. Can you make dh_install do a hardlink copy (like cp -l) to save time? (AFAICT from the man page, no.) Maybe I should just get it to do this in the Makefile? (That's going to be a big Makefile.)
        2. Can you make debuild only rebuild the .debs that need rebuilding, or specify which .debs to rebuild?
        3. Is my approach completely stupid? Should I break each of the toolboxes into its own source package too? (I'll have to do some silly templating or something, because there are hundreds of them. :/)
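
    On question 1: dh_install has no hardlink mode that I know of, but an override in debian/rules can do the linking itself with cp -al, assuming the unpacked installation and the build tree sit on the same filesystem. A sketch only - the package name and paths are illustrative:

        # debian/rules -- sketch, not tested against the real package layout
        override_dh_install:
        	dh_install
        	# hard-link the huge payload instead of copying it
        	mkdir -p debian/matlab2011b-core/usr/lib/matlab2011b
        	cp -al matlab-install/. debian/matlab2011b-core/usr/lib/matlab2011b/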

    Read the article

  • Combine OS partion with data partition on NAS4Free/FreeNAS

    - by Pak
    I recently built a NAS4Free (formerly FreeNAS) machine using a 256 MB (yes, MB) USB drive for the OS. When I did the original install, I had the bright idea of making the OS partition just big enough for the OS and then creating a second partition from the remainder of the drive to store stuff pertaining to the OS. I never really found a use for the data partition, and I've now run out of space on the OS partition, so I'd like to combine the two into a single partition. Is this something that is possible to do while everything is up and running? If it comes down to it, I can take the machine down and do a fresh install of the OS using the entire USB drive, but I'd like to use this as an opportunity to get more familiar with FreeBSD/UNIX-type systems. If this is possible, will it interfere with NAS4Free itself? The data partition shows up in the web interface under the disks section, and if I end up manually changing the partitions, I'd be concerned about NAS4Free getting confused by the missing partition.
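
    Since NAS4Free sits on FreeBSD, the relevant tools are gpart and growfs; the rough sequence below is a sketch only (device name and partition indices are placeholders, and resizing the partition the OS is mounted from will likely have to be done from a rescue/live boot rather than while the system is running):

        gpart show da0          # inspect the current layout first
        gpart delete -i 2 da0   # drop the unused data partition
        gpart resize -i 1 da0   # grow partition 1 into the freed space
        growfs /dev/da0s1a      # grow the UFS filesystem to match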

    Read the article

  • Random compositing lag

    - by user1020567
    My laptop specs: 512 MB of RAM, of which 64 MB are shared with an integrated GPU (an ATI Radeon Xpress 200M), and an Intel 1.6 GHz Celeron M single-core processor. I've spent months trying to figure out why compositing and effects sometimes lag on any distro I try. Now I've come to realise that no matter what drivers I try (the default ones work for me on pretty much any Linux), compositing lag is random. When I used Ubuntu 10.10, for example, sometimes window compositing would lag and sometimes it wouldn't. The PC is able to render those effects, so hardware is not the problem. It's completely random and unpredictable - sometimes when I turn on the computer the effects lag horribly, and sometimes it's completely smooth. I've also checked startup items and there don't seem to be any unnecessary entries. I also tried building my own OS with Arch Linux and the problem persists there, so I can only assume it's a driver issue of some sort. By default there are lots of drivers supplied with Linux distributions. Could it be that they're in the way? The ones I need are ati/radeon (or both? What's the difference between them?), and there seem to be a lot of others... What should I do?

    Read the article

  • What are solutions and tradeoffs to maintain search result consistency in a web application

    - by iammichael
    Consider a web application with a custom search function that must display results in a paged manner (twenty per page, with up to hundreds of thousands of total results) and allow drilling down to individual results that keep next/previous links for navigating through the result set. Re-executing the search on each page request to get that page's results can be too expensive (up to 15 s per search). Also, since the underlying data can change frequently (e.g. new results being added), re-executing could make the next/previous functionality behave inconsistently (e.g. the same results reappearing on a later page after having been viewed on an earlier one). What options exist to ensure the search results can be viewed across multiple pages in a consistent manner, and what tradeoffs does each option have in terms of network, CPU, memory, and storage requirements? EDIT: I thought caching the query's search results was an obvious necessity. The question is really asking where to cache the result set and what tradeoffs each choice carries: for example, storing the IDs of the entities in the result set on the client, or in the user's session on the web server, or in a temporary table in the database. I'm not looking for a single solution, as different scenarios may call for different approaches (and such a question would be better suited to stackoverflow.com), but rather a design comparison between the possible approaches.
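
    To make the session option from the edit concrete, a sketch: run the expensive search once, cache only the ordered entity IDs server-side, and hydrate twenty rows per page request. Class and attribute names are invented; the tradeoff is web-server memory per active searcher, and the ordering goes stale as the underlying data changes:

        import javax.servlet.http.HttpSession;
        import java.util.List;

        public class SearchPager {
            static final int PAGE_SIZE = 20;

            // store the ordered IDs once, right after executing the search
            static void cacheResults(HttpSession session, List<Long> ids) {
                session.setAttribute("searchResultIds", ids);
            }

            // each page request slices out its IDs; only those rows are fetched from the DB
            @SuppressWarnings("unchecked")
            static List<Long> pageIds(HttpSession session, int page) {
                List<Long> ids = (List<Long>) session.getAttribute("searchResultIds");
                int from = Math.min(page * PAGE_SIZE, ids.size());
                int to = Math.min(from + PAGE_SIZE, ids.size());
                return ids.subList(from, to);
            }
        }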

    Read the article

  • Why is a software development life-cycle so inefficient?

    - by user87166
    Currently the software development lifecycle followed in the IT company I work at is:

        1. The "Business" works with a solution manager to build a Business Requirement document.
        2. The solution manager works with the program manager to build a Functional Spec.
        3. The PM works with the engineering lead to develop a release plan, and with the engineering team to develop technical specifications.
        4. If any clarifications are required, developers contact the PM, who contacts the solution manager, who contacts the business - and all the way back - introducing a latency of nearly 24 hours and massive email chains for every clarification.
        5. By the time the tech spec is made, nearly a month has passed in back and forth.
        6. Two weeks then go to development while test writes test cases.
        7. Code is dropped formally to test, and test starts raising bugs. Even if there is one easily fixed root cause behind 10 different issues, developers are not allowed to give fresh code to test for the next week.
        8. After 2-3 such drops to test, the code is given to the ops team as a "golden drop" (2 months have passed since the beginning).
        9. The ops team deploys the code in a staging environment. If it runs stable for a week, it is promoted to UAT, and after 2 weeks of that it is promoted to prod. If any bugs are found here - well, applying for a visa requires less paperwork.

    This entire process is followed even if a single SSRS report is to be released. How do other companies handle such requirements? I'm wondering why the business cannot just hand the requirements to the developers, who build and deploy to UAT themselves, expose it to the business, who raise functional bugs, and after fixing those promote to prod (even for more complex work).

    Read the article

  • Is the Entity Component System architecture object oriented by definition?

    - by tieTYT
    Is the Entity Component System architecture object oriented, by definition? It seems more procedural or functional to me. My opinion is that it doesn't prevent you from implementing it in an OO language, but it would not be idiomatic to do so in a staunchly OO way. It seems like ECS separates data (E & C) from behavior (S). As evidence:

        The idea is to have no game methods embedded in the entity.

    And:

        The component consists of a minimal set of data needed for a specific purpose. Systems are single-purpose functions that take a set of entities which have a specific component.

    I think this is not object oriented because a big part of being object oriented is combining your data and behavior together. As evidence:

        In contrast, the object-oriented approach encourages the programmer to place data where it is not directly accessible by the rest of the program. Instead, the data is accessed by calling specially written functions, commonly called methods, which are bundled in with the data.

    ECS, on the other hand, seems to be all about separating your data from your behavior.
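
    To make the split concrete, a minimal hedged sketch of the shape being described (all names invented for illustration): components are bare data, and a system is a single-purpose function over every entity that carries the right components.

        import java.util.*;

        // components: pure data, no behavior
        class Position { float x, y; }
        class Velocity { float dx, dy; }

        class Entity {
            final Map<Class<?>, Object> components = new HashMap<>();
            <T> T get(Class<T> type) { return type.cast(components.get(type)); }
            boolean has(Class<?> a, Class<?> b) {
                return components.containsKey(a) && components.containsKey(b);
            }
        }

        // system: the behavior lives here, not on the data it operates on
        class MovementSystem {
            void update(List<Entity> entities, float dt) {
                for (Entity e : entities) {
                    if (!e.has(Position.class, Velocity.class)) continue;
                    Position p = e.get(Position.class);
                    Velocity v = e.get(Velocity.class);
                    p.x += v.dx * dt;
                    p.y += v.dy * dt;
                }
            }
        }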

    Read the article

  • OS X: What does the '@' attribute on a file mean?

    - by claytontstanley
    On a Snow Leopard machine, at the Terminal:

        la ~/src/rmcl/ | grep RMCL
        -rw-r--r--@ 1 claytonstanley staff 6766167 Nov 13 2009 RMCL

    What is that '@' attribute? This file is part of an older OS X program that runs under Rosetta. I'm having issues where some older programs running under Rosetta require the @ attribute when opening files, but I'm not sure what that attribute is, so I have no way to know how to add or remove it. I did try a thorough Google search on this, but I wasn't able to find the answer; I would have thought this would be an easy one to find. Maybe the query isn't behaving properly because of the single @ special character. Any info is much appreciated. Thanks!
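
    For reference, a trailing @ in ls output on OS X marks a file carrying extended attributes, and the stock tools can show and edit them (the path comes from the listing above; the attribute name and value in the last two lines are placeholders):

        ls -l@ ~/src/rmcl/RMCL                          # list which extended attributes are present
        xattr -l ~/src/rmcl/RMCL                        # print each attribute with its value
        xattr -w attr_name attr_value ~/src/rmcl/RMCL   # write an attribute
        xattr -d attr_name ~/src/rmcl/RMCL              # delete an attribute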

    Read the article
