Search Results

Search found 27016 results on 1081 pages for 'entry point'.


  • How do I modify these VPN connection settings for Xfce?

    - by Dave M G
    I have signed up for a VPN (Virtual Private Network) service, and I configured it for use on my computer that runs Gnome Classic with the following instructions: In Terminal, install the OpenVPN packages with sudo apt-get install network-manager-openvpn, then:
    1. Restart the network manager with sudo restart network-manager
    2. Run sudo wget https://www.xxxxxxx.com/ovpnconfigure.zip
    3. Extract the files from the zip with unzip ovpnconfigure.zip.
    4. Move cert.crt to /etc/openvpn.
    5. Open the Network Manager on the menu bar.
    6. Choose Add, select the OpenVPN connection type, and click Create.
    7. Enter Private Internet Access SSL for the Connection Name.
    8. Enter xxxxxx.xxxxxxxx.com for the Gateway.
    9. Select Password and enter your login credentials.
    10. Browse and select the CA certificate we saved in Step 3.
    11. Choose Advanced and enable LZO Compression.
    12. Apply and exit.
    13. Connect using the Network Manager.
    It worked, but now I want to set up access to the same VPN service on another machine that runs Mythbuntu, which uses Xfce as its desktop environment, so every point from 5 on doesn't apply. How can I modify the above instructions so that I can get my VPN service working with Xfce? As a further note, while I can access the Xfce desktop directly if I need to, it's more convenient for me to access it via the command line and SSH from one of my other computers, so a command-line process would be ideal. (I looked for this, and found instructions only for PPTP access, whereas I need OpenVPN.)
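    A minimal command-line route, assuming the provider's zip also ships an OpenVPN profile (the client.ovpn name below is a placeholder for whatever the zip actually contains):

        # Sketch: bring the tunnel up without any desktop tooling
        sudo apt-get install openvpn
        sudo wget https://www.xxxxxxx.com/ovpnconfigure.zip
        unzip ovpnconfigure.zip
        sudo mv cert.crt /etc/openvpn/
        # Run the profile directly; openvpn prompts for the username and
        # password if the profile asks for them
        sudo openvpn --config client.ovpn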

    Read the article

  • Input/output error, when trying to install on netbook [closed]

    - by Ben
    Been trying to install Ubuntu on my Samsung NB30 netbook, but I keep running into the same error over and over again:
    [Errno 5] Input/output error
    This particular error is often due to a faulty CD/DVD disk or drive, or a faulty hard disk. It may help to clean the CD/DVD, to burn the CD/DVD at a lower speed, to clean the CD/DVD drive lens (cleaning kits are often available from electronics suppliers), to check whether the hard disk is old and in need of replacement, or to move the system to a cooler environment.
    I'm installing from the USB bootable version, and I get the exact same error at the exact same point when trying to install both Ubuntu Desktop and Ubuntu Desktop Remix. I've tried redownloading both ISOs twice and I've tried two different USB sticks (one being completely new). I've tried installing from within an Ubuntu live session and I get the exact same problem. I've run a bootable memtest and everything passes with no errors. I've also run dmesg in a terminal after the installer fails; here's what it reported: http://bit.ly/exAQRR Thanks in advance!
    EDIT: I know this was ages ago, but to anyone out there with the same issue: the problem turned out to be the downloaded image. My internet is poor at the best of times and the ISO failed the MD5sum check. If this happens to you, I recommend downloading the ISO image by torrent, since the torrent client verifies the integrity of the file as it downloads.
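    For reference, a quick way to run that check yourself, assuming you also grab the MD5SUMS file published alongside the image on the Ubuntu release servers:

        # Compute the checksum of the downloaded image
        md5sum ubuntu-*.iso
        # Or verify it directly against the published list
        md5sum -c MD5SUMS 2>/dev/null | grep iso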

    Read the article

  • Why do some PDFs lag in Adobe Acrobat?

    - by Coldblackice
    I have a handful of PDFs open. One of them in particular is extremely laggy, almost to the point of being unreadable. When I scroll through its pages, it's almost like an extreme version of v-sync being turned off. Very choppy. Overall system resources are plentiful, and all of the other PDFs cruise up and down with no stuttering or problems. I've tried closing and reopening the problem PDF to no avail. It's a small PDF, only 3MB in size, with no graphics (only programming code snippets). Surely it must be some problem with this specific PDF (I'll try opening it in another PDF-viewing program, rather than Acrobat X). Possible corruption? Could some kind of GPU/hardware acceleration be interfering? I've never heard of that happening with PDF viewing.

    Read the article

  • How do I set up different configurations on an Ubuntu laptop based on different physical locations?

    - by Andrew Larned
    I'm looking for a way to have two or three configurations set up on my laptop, and to easily switch between them. To be more specific: when my laptop is at work, it's plugged into a second monitor and has a specific set of network configurations. At home, the second monitor is gone and the network configurations are different. At a public wireless point there are other configurations to set, and so on. I know I can go into my preferences and turn the monitor on or off, and mess with the networking preferences, and so on, but I'm looking for a way to change a bunch of preferences all at once. If it's possible to do that automatically, maybe based on the wireless APs in the vicinity, that would be even better.
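    One low-tech sketch of the "based on the wireless APs" idea: key the profile off the current SSID in a script run from cron or a NetworkManager dispatcher hook. The SSID and xrandr output names below are made up and would need to match your hardware:

        #!/bin/sh
        # Pick a profile from whichever wireless network we are on
        ssid=$(iwgetid -r)
        case "$ssid" in
            work-ap)
                # at work: enable the external monitor to the right
                xrandr --output VGA1 --auto --right-of LVDS1
                ;;
            *)
                # anywhere else: laptop panel only
                xrandr --output VGA1 --off
                ;;
        esac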

    Read the article

  • Source of Unexplained Requests in Server Logs

    - by Synetech inc.
    Hi, I am baffled by some entries in my server logs, specifically the web-server logs. Other than normal, expected traffic, I have noticed three types of request errors (e.g. 404, etc.):
    1. Broken links, i.e. links from old, external pages that point to pages that are no longer here
    2. Sequences of probes, i.e. some jerk trying to hack in by scanning my server for a series of exploitable admin-type pages and such
    3. What appear to be completely random requests for things that have never existed on the server and have nothing to do with it, and that appear by themselves (i.e. not a series of requests like the probes)
    Could it somehow be a mistyped URL or IP? That’s about the only thing that I can think of, but still, how could I get a request on, say, foobar.dyndns.org (12.34.56.78) for something like www.wantsfly.com/prx2.php or /MNG/LIVE or http://ant.dsabuse.com/abc.php?auth=45V456b09m&strPassword=X%5BMTR__CBZ%40VA&nLoginId=43? (Those are a few actual requests from my logs.) Can someone please explain scenario three to me? Thanks.

    Read the article

  • Creating IIS Rewrite Rules

    - by Tom Bell
    I'm having a hard time converting old .htaccess rewrite rules to new IIS ones, so I was wondering if anyone could point me in the right direction. Below are some example URLs I would like rewritten.
    http://example.org.uk/about/ rewrites to http://example.org.uk/about/about.html
    http://example.org.uk/blog/events/ rewrites to http://example.org.uk/blog/events.html
    http://example.org.uk/blog/2010/11/foo-bar rewrites to http://example.org.uk/blog/2010/11/foo-bar.html
    The directories and file names are generic and could be anything. Any help would be greatly appreciated.
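    A rough starting point with the IIS URL Rewrite module in web.config. Note that the first example repeats the last path segment while the second collapses it, so they need separate rules; these patterns are illustrative rather than battle-tested:

        <rewrite>
          <rules>
            <!-- /about/ -> /about/about.html (segment repeated inside the folder) -->
            <rule name="AboutIndex" stopProcessing="true">
              <match url="^about/$" />
              <action type="Rewrite" url="about/about.html" />
            </rule>
            <!-- /blog/events/ -> /blog/events.html -->
            <rule name="TrailingSlashToHtml" stopProcessing="true">
              <match url="^(.+)/$" />
              <action type="Rewrite" url="{R:1}.html" />
            </rule>
            <!-- /blog/2010/11/foo-bar -> /blog/2010/11/foo-bar.html -->
            <rule name="AppendHtml" stopProcessing="true">
              <match url="^([^.]+[^/])$" />
              <action type="Rewrite" url="{R:1}.html" />
            </rule>
          </rules>
        </rewrite>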

    Read the article

  • VeriSign certificate into JBoss server SSL

    - by rfders
    I'm trying to enable JBoss to use the SSL protocol with a previously generated certificate from VeriSign. I imported both certificates, the server certificate and the CA certificate, into the keystore file, and I configured server.xml to use that keystore and activate SSL. Then when I run JBoss, I get the error "No available certificate or key corresponds to the SSL cipher suites which are enabled". Question: reading some posts on the internet, I found that every example was done by generating a Certificate Signing Request. Is it strictly necessary to do that if I already have the server certificate, and does that CSR have to be imported into the keystore as well? At this point I'm very confused about this issue. I've tried almost every solution posted in several forums, but so far I haven't had any luck! Can you give me some tips in order to solve this problem? Thanks in advance. This is my keystore file:
        Keystore type: jks
        Keystore provider: SUN
        Your keystore contains 2 entries
        j2ee, Dec 29, 2009, trustedCertEntry,
        Certificate fingerprint (MD5): 69:CC:2D:2A:2D:EF:C4:DB:A2:26:35:57:06:29:7D:4C
        ugent, Dec 29, 2009, trustedCertEntry,
        Certificate fingerprint (MD5): AC:D8:0E:A2:7B:B7:2C:E7:00:DC:22:72:4A:5F:1E:92
    and my server.xml configuration:
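    Worth noting: both entries in that listing are trustedCertEntry items, and JBoss/Tomcat needs a PrivateKeyEntry whose chain matches the VeriSign certificate, which is why the examples all start from a CSR. A sketch of the usual keytool flow (alias and file names here are placeholders):

        # 1. Generate the key pair the CSR must be created from
        keytool -genkeypair -alias server -keyalg RSA -keystore keystore.jks
        # 2. Produce the CSR that gets sent to VeriSign
        keytool -certreq -alias server -file server.csr -keystore keystore.jks
        # 3. Import the CA chain, then the issued certificate onto the SAME alias
        keytool -importcert -alias ca -file ca.crt -keystore keystore.jks
        keytool -importcert -alias server -file server.crt -keystore keystore.jks
        # 'keytool -list' should now show the server alias as a PrivateKeyEntry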

    Read the article

  • rsnapshot intervals in configuration file...

    - by Patrick
    A simple question about rsnapshot. In order to perform daily backups, I'm going to add lines to cron on my Ubuntu machine. So why do I also have these lines in rsnapshot.conf?
        #########################################
        # BACKUP INTERVALS                      #
        # Must be unique and in ascending order #
        # i.e. hourly, daily, weekly, etc.      #
        #########################################
        interval hourly 6
        interval daily 7
        interval weekly 4
        #interval monthly 3
    If I use cron, should I disable them? Thanks.
    P.S. I've just realized that in the crontab I still have "hourly" and "daily". Should I then uncomment only the ones I use in the crontab? And what's the point of specifying hourly if it is already specified in cron? I'm a bit confused.
        # crontab -e
        0 */4 * * * /usr/local/bin/rsnapshot hourly
        30 23 * * * /usr/local/bin/rsnapshot daily

    Read the article

  • Approach for packing 2D shapes while minimizing total enclosing area

    - by Dennis
    Not sure on my tags for this question, but in short... I need to solve a problem of packing industrial parts into crates while minimizing the total containing area. These parts are motors, or pumps, or custom-made components, and they have quite unusual shapes. For some, it may be possible to assume that a part is a rectangular cuboid, but some are not so simple, i.e. they assume a shape more like that of a hammer or the letter T. With those (assuming a 2D shape), by alternating the direction of top and bottom, one can pack more objects into the same space than if all tops were in the same direction. [The original post included a crude ASCII sketch of T-shaped parts, interlocked in alternating orientations versus all facing the same way; the layout was mangled in extraction.]
    Right now we are solving the problem by something like this:
    1. Using CAD software, make actual models of how things fit in crate boxes.
    2. Make estimates of actual crate dimensions and write them into an Excel file.
    Step (1) is a crazy amount of work, and as a result we have only a limited number of possible entries in (2), the Excel file. The good thing is that programming this is relatively easy. Given a combination of products to go into crates, we do a lookup, and if an entry exists in the Excel file (or database), we bring it out. If it doesn't, we say "sorry, no data!". I don't necessarily want to go full force on making up some crazy algorithm that, given a geometrical part description, can align, rotate, and figure out the best part packing into a crate given its shape, but maybe I do.
    Question
    Well, here is my question: assuming that I can represent my parts as 2D (to be determined how), and that some parts look like the letter T and some parts look like rectangles, which algorithm can I use to give me a good estimate on the dimensions of the encompassing area, while ensuring that the parts are packed in the minimal possible area, to minimize crating/shipping costs? Are there approximation algorithms? Seeing how this can get complex, is there an existing library I could use? (A rough sketch of one such heuristic follows this post.)
    My thought / Approach
    My naive approach would be to define a way to describe the position of parts, place the first part, and compute the total enclosing area and dimensions. Then place the 2nd part in 0-degree orientation, repeat, place it at 180-degree orientation, repeat (for my case I don't think 90-degree rotations will be meaningful due to the long lengths of parts). Proceed by brute force, "tacking on" other parts to the enclosing area until all parts are processed. I may have to shift some parts a tad (as in the interlocking-T example above), which adds a layer of 2D complexity rather than 1D. I am not sure how to approach this. One idea I have is genetic algorithms, but I think those will take up too much processing power and time. I will need to look out for shape collisions, as well as adding extra padding space, since we are talking about real parts with irregularities rather than perfect imaginary blocks. I'm afraid this can get geometrically messy fairly fast, and I'd rather keep things simple, if I can. But what if the best (practical) solution is to pack things into different crate boxes rather than just one? This can get a bit more tricky. There is a human element involved as well: like parts can go into the same box and are thus a constraint to be considered. Some parts that are not the same are sometimes grouped together for shipping and can be considered as a common grouped item. Sometimes customers want things shipped their way, which adds a human element to the constraints, so there will have to be some customization.
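    Since the question asks about approximation algorithms: for the rectangular approximation of parts, a classic starting point is a shelf heuristic such as first-fit decreasing height. A minimal sketch (hypothetical Part type, crate width fixed, no rotations or T-interlocking):

        import java.util.*;

        public class ShelfPacker {
            static class Part {
                int w, h;
                Part(int w, int h) { this.w = w; this.h = h; }
            }

            // Packs parts onto left-to-right "shelves" and returns the
            // enclosing {width, height} for a crate of the given width.
            static int[] pack(List<Part> parts, int crateWidth) {
                // Tallest first: the first-fit-decreasing-height heuristic
                parts.sort((a, b) -> b.h - a.h);
                int x = 0, shelfY = 0, shelfH = 0, usedW = 0;
                for (Part p : parts) {
                    if (x + p.w > crateWidth) { // shelf full: start a new one
                        shelfY += shelfH;
                        x = 0;
                        shelfH = 0;
                    }
                    x += p.w;
                    shelfH = Math.max(shelfH, p.h);
                    usedW = Math.max(usedW, x);
                }
                return new int[] { usedW, shelfY + shelfH };
            }

            public static void main(String[] args) {
                List<Part> parts = new ArrayList<>(Arrays.asList(
                    new Part(4, 2), new Part(3, 5), new Part(2, 2), new Part(6, 1)));
                int[] dims = pack(parts, 8);
                System.out.println("enclosing: " + dims[0] + " x " + dims[1]);
            }
        }

    It ignores the alternating-T trick entirely, but it gives a quick upper bound on the enclosing area that a fancier packer (or the CAD process) can then try to beat.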

    Read the article

  • Problem inserting pre-ordered values to quadtree

    - by Lucas Corrêa Feijó
    There's this string containing B, W and G (standing for black, white and gray). It defines the pre-order path of the quadtree I want to build. I wrote this method for filling the QuadTree class I also wrote, but it doesn't work: it inserts correctly until it reaches a point in the algorithm where it needs to "return". I use the math quadrant convention for insertion order: northeast, northwest, southwest and southeast. A quadtree has all its leaves black or white and all the other nodes gray. The node used in the tree is called Node4T and has a char as data and 4 references to other nodes like itself, called, respectively, NE, NW, SW, SE.
        public void insert(char val) {
            if (root == null) {
                root = new Node4T(val);
            } else {
                insert(root, val);
            }
        }

        public void insert(Node4T n, char c) {
            if (n.data == 'G') { // possible to insert
                if (n.NE == null) {
                    n.NE = new Node4T(c);
                    return;
                } else {
                    insert(n.NE, c);
                }
                if (n.NW == null) {
                    n.NW = new Node4T(c);
                    return;
                } else {
                    insert(n.NW, c);
                }
                if (n.SW == null) {
                    n.SW = new Node4T(c);
                    return;
                } else {
                    insert(n.SW, c);
                }
                if (n.SE == null) {
                    n.SE = new Node4T(c);
                    return;
                } else {
                    insert(n.SE, c);
                }
            } else { // impossible to insert
            }
        }
    The input "GWGGWBWBGBWBWGWBWBWGGWBWBWGWBWBGBWBWGWWWB" is given a char at a time to the insert method, and then the pre-order print method is called; the result is "GWGGWBWBWBWGWBWBW". How do I make it import the string correctly? Ignore the string-reading process; suppose the method is called and it has to do its job.
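    The usual fix here (a sketch of the standard approach replacing the two-argument overload, not the asker's code): make the recursive insert report whether it placed the character, so the caller stops probing further quadrants after a successful placement and treats B/W nodes as full subtrees instead of silently discarding the character.

        // Returns true once c has been placed somewhere in this subtree,
        // false if the subtree is already complete (rooted at a B/W leaf).
        public boolean insert(Node4T n, char c) {
            if (n.data != 'G') return false;            // B/W leaf: nothing hangs below
            if (n.NE == null) { n.NE = new Node4T(c); return true; }
            if (insert(n.NE, c)) return true;           // placed deeper in NE
            if (n.NW == null) { n.NW = new Node4T(c); return true; }
            if (insert(n.NW, c)) return true;
            if (n.SW == null) { n.SW = new Node4T(c); return true; }
            if (insert(n.SW, c)) return true;
            if (n.SE == null) { n.SE = new Node4T(c); return true; }
            return insert(n.SE, c);                     // last quadrant: pass result up
        }

    With that change, feeding the example string one character at a time rebuilds the tree whose pre-order traversal matches the input.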

    Read the article

  • 503 error Varnish cache when eAccelerator is started

    - by Netismine
    I have a Magento installation running on an x-large Amazon server. I have Varnish, memcached and eAccelerator installed on the server. At first everything was working fine, but at some point it stopped, throwing a 503 error with the Varnish cache stamp below it. When I disable eAccelerator, the error is gone and the site works. This is my eAccelerator config:
        extension="eaccelerator.so"
        eaccelerator.shm_size = "512"
        eaccelerator.cache_dir = "/var/cache/php-eaccelerator"
        eaccelerator.enable = "1"
        eaccelerator.optimizer = "1"
        eaccelerator.debug = 0
        eaccelerator.log_file = "/var/log/httpd/eaccelerator_log"
        eaccelerator.name_space = ""
        eaccelerator.check_mtime = "1"
        eaccelerator.filter = ""
        eaccelerator.shm_ttl = "0"
        eaccelerator.shm_prune_period = "0"
        eaccelerator.shm_only = "0"
        eaccelerator.allowed_admin_path = ""
    Any hints?

    Read the article

  • Naming a class that processes orders

    - by p.campbell
    I'm in the midst of refactoring a project. I've recently read Clean Code, and want to heed some of the advice within, with particular interest in the Single Responsibility Principle (SRP). Currently, there's a class called OrderProcessor in the context of a manufacturing product-order system. This class currently performs the following routine every n minutes:
    1. Check the database for newly submitted and unprocessed orders (via a data-layer class already, phew!)
    2. Gather all the details of the orders
    3. Mark them as in-process
    4. Iterate through each to:
       - perform some integrity checking
       - call a web service on a 3rd-party system to place the order
       - check the status return value of the web service for success/fail
       - email somebody if the web service returns fail
       - constantly log to a text file on each operation or possible fail point
    I've started by breaking out this class into new classes like:
    - OrderService - poor name. This is the one that wakes up every n minutes
    - OrderGatherer - calls the DL to get the order from the database
    - OrderIterator (? seems too forced or poorly named)
    - OrderPlacer - calls the web service to place the order
    - EmailSender
    - Logger
    I'm struggling to find good names for each class, and to implement SRP in a reasonable way. How could this class be separated into new classes with discrete responsibilities?

    Read the article

  • Linux script to find time difference and send an email if needed

    - by Gnanam
    Hi, I'm not an expert in writing shell scripts, but I'm looking for a very specific solution. OS: CentOS release 5.2 (Final). I have a Java standalone which keeps writing (all System.out.println) to a log file. For some unknown reason, this Java standalone stops working at some point in time on my server, and eventually log writing stops too. I want a script which compares the last-modified date & time of the log file with the current date & time on the server. If the difference exceeds 5 minutes, I want it to send an email immediately to my recipients list. This way I'll know when the Java standalone has stopped working. I'll put this script in crontab and run it every minute, so that the whole process is automated. Log file location: /usr/local/logs/standalone.log
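    A minimal sketch of that check (the recipient address is a placeholder; stat -c %Y and mail are standard on CentOS 5):

        #!/bin/sh
        LOG=/usr/local/logs/standalone.log
        RECIPIENTS="admin@example.com"

        now=$(date +%s)
        mtime=$(stat -c %Y "$LOG")
        age=$(( now - mtime ))

        # 300 seconds = the 5-minute threshold
        if [ "$age" -gt 300 ]; then
            echo "standalone.log last modified $age seconds ago" \
                | mail -s "Java standalone appears to have stopped" "$RECIPIENTS"
        fi

    Dropped into crontab with a * * * * * schedule, it runs every minute; you may also want a guard file so it doesn't keep mailing every minute once the alarm fires.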

    Read the article

  • Move all images in folder to subfolder, and update all references to those images to their new location?

    - by Professor Frink
    I have a folder which contains ~50 text files (PHP) and hundreds of images. I would like to move all the images to a subfolder and update the PHP files so any references to those images point to the new subfolder. I know I can move all the images quite easily (mv *.jpg /image, mv *.gif /image, etc.), but I don't know how to go about updating all the text files. I assume a regex has to be created to match all the images in a file, and then somehow the new directory has to be prepended to the image file name? Is this best done with a shell script? Any help is appreciated. (Server is Linux/CentOS 5.) Thanks!
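    One sketch with sed, assuming the subfolder is called image/ and the PHP files reference the bare file names (back up first; file names containing regex metacharacters would need escaping):

        mkdir -p image
        for img in *.jpg *.gif *.png; do
            [ -e "$img" ] || continue
            mv "$img" image/
            # rewrite references to this file in every PHP source
            sed -i "s|$img|image/$img|g" *.php
        done

    If some references already carry a path, tighten the sed pattern (e.g. anchor on a quote or slash) so you don't end up with image/image/foo.jpg.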

    Read the article

  • IIS cannot access itself

    - by dave
    We are on a corporate network that uses ISA, and I am having issues trying to keep requests from going through ISA. I have IIS7 on my local Windows 7 machine, with websites and a service layer. The websites access the service layer using a xxxx.servicelayer.local address that is set up in my HOSTS file to point to 127.0.0.1. I have the Windows Firewall client, which I have disabled. I have tried both adding this address to IE's proxy exceptions so that it does not go through ISA, and disabling this section altogether. When the website (which is actually IIS making a request to itself) tries to access the service layer, I receive an ISA error that proxy authentication has failed. Considering that everything I can see to configure is set not to go through the proxy (ISA), I cannot see how this is actually going through the proxy and giving this error. Is there something within Windows 7 that forces the proxy setting, some sort of caching or similar?

    Read the article

  • StreamInsight 2.1, meet LINQ

    - by Roman Schindlauer
    Someone recently called LINQ “magic” in my hearing. I leapt to LINQ’s defense immediately. Turns out some people don’t realize “magic” can be a pejorative term. I thought LINQ needed demystification. Here’s your best demystification resource: http://blogs.msdn.com/b/mattwar/archive/2008/11/18/linq-links.aspx. I won’t repeat much of what Matt Warren says in his excellent series, but will talk about some core ideas and how they affect the 2.1 release of StreamInsight. Let’s tell the story of a LINQ query.
    Compile time
    It begins with some code:
        IQueryable<Product> products = ...;
        var query = from p in products
                    where p.Name == "Widget"
                    select p.ProductID;
        foreach (int id in query)
        {
            ...
    When the code is compiled, the C# compiler (among other things) de-sugars the query expression (see C# spec section 7.16):
        var query = products.Where(p => p.Name == "Widget").Select(p => p.ProductID);
    Overload resolution subsequently binds the Queryable.Where<Product> and Queryable.Select<Product, int> extension methods (see C# spec sections 7.5 and 7.6.5). After overload resolution, the compiler knows something interesting about the anonymous functions (lambda syntax) in the de-sugared code: they must be converted to expression trees, i.e., “an object structure that represents the structure of the anonymous function itself” (see C# spec section 6.5). The conversion is equivalent to the following rewrite:
        var prm1 = Expression.Parameter(typeof(Product), "p");
        var prm2 = Expression.Parameter(typeof(Product), "p");
        var query = Queryable.Select<Product, int>(
            Queryable.Where<Product>(
                products,
                Expression.Lambda<Func<Product, bool>>(
                    Expression.Equal(Expression.Property(prm1, "Name"), Expression.Constant("Widget")),
                    prm1)),
            Expression.Lambda<Func<Product, int>>(Expression.Property(prm2, "ProductID"), prm2));
    If the “products” expression had type IEnumerable<Product>, the compiler would have chosen the Enumerable.Where and Enumerable.Select extension methods instead, in which case the anonymous functions would have been converted to delegates. At this point, we’ve reduced the LINQ query to familiar code that will compile in C# 2.0. (Note that I’m using C# snippets to illustrate transformations that occur in the compiler, not to suggest a viable compiler design!)
    Runtime
    When the above program is executed, the Queryable.Where method is invoked. It takes two arguments. The first is an IQueryable<> instance that exposes an Expression property and a Provider property. The second is an expression tree. The Queryable.Where method implementation looks something like this:
        public static IQueryable<T> Where<T>(this IQueryable<T> source, Expression<Func<T, bool>> predicate)
        {
            return source.Provider.CreateQuery<T>(
                Expression.Call(/* this method's MethodInfo */, source.Expression, Expression.Quote(predicate)));
        }
    Notice that the method is really just composing a new expression tree that calls itself with arguments derived from the source and predicate arguments. Also notice that the query object returned from the method is associated with the same provider as the source query. By invoking operator methods, we’re constructing an expression tree that describes a query. Interestingly, the compiler and operator methods are colluding to construct a query expression tree. The important takeaway is that expression trees are built in one of two ways: (1) by the compiler when it sees an anonymous function that needs to be converted to an expression tree, and (2) by a query operator method that constructs a new queryable object with an expression tree rooted in a call to the operator method (self-referential).
    Next we hit the foreach block. At this point, the power of LINQ queries becomes apparent. The provider is able to determine how the query expression tree is evaluated! The code that began our story was intentionally vague about the definition of the “products” collection. Maybe it is a queryable in-memory collection of products:
        var products = new[]
            { new Product { Name = "Widget", ProductID = 1 } }.AsQueryable();
    The in-memory LINQ provider works by rewriting Queryable method calls to Enumerable method calls in the query expression tree. It then compiles the expression tree and evaluates it. It should be mentioned that the provider does not blindly rewrite all Queryable calls. It only rewrites a call when its arguments have been rewritten in a way that introduces a type mismatch, e.g. the first argument to Queryable.Where<Product> being rewritten as an expression of type IEnumerable<Product> from IQueryable<Product>. The type mismatch is triggered initially by a “leaf” expression like the one associated with the AsQueryable query: when the provider recognizes one of its own leaf expressions, it replaces the expression with the original IEnumerable<> constant expression. I like to think of this rewrite process as “type irritation” because the rewritten leaf expression is like a foreign body that triggers an immune response (further rewrites) in the tree. The technique ensures that only those portions of the expression tree constructed by a particular provider are rewritten by that provider: no type irritation, no rewrite.
    Let’s consider the behavior of an alternative LINQ provider. If “products” is a collection created by a LINQ to SQL provider:
        var products = new NorthwindDataContext().Products;
    the provider rewrites the expression tree as a SQL query that is then evaluated by your favorite RDBMS. The predicate may ultimately be evaluated using an index! In this example, the expression associated with the Products property is the “leaf” expression.
    StreamInsight 2.1
    For the in-memory LINQ to Objects provider, a leaf is an in-memory collection. For LINQ to SQL, a leaf is a table or view. When defining a “process” in StreamInsight 2.1, what is a leaf? To StreamInsight a leaf is logic: an adapter, a sequence, or even a query targeting an entirely different LINQ provider! How do we represent the logic? Remember that a standing query may outlive the client that provisioned it. A reference to a sequence object in the client application is therefore not terribly useful. But if we instead represent the code constructing the sequence as an expression, we can host the sequence in the server:
        using (var server = Server.Connect(...))
        {
            var app = server.Applications["my application"];
            var source = app.DefineObservable(() => Observable.Range(0, 10, Scheduler.NewThread));
            var query = from i in source where i % 2 == 0 select i;
        }
    Example 1: defining a source and composing a query
    Let’s look in more detail at what’s happening in Example 1. We first connect to the remote server and retrieve an existing app. Next, we define a simple Reactive sequence using the Observable.Range method. Notice that the call to the Range method is in the body of an anonymous function. This is important because it means the source sequence definition is in the form of an expression, rather than simply an opaque reference to an IObservable<int> object. The variation in Example 2 fails. Although it looks similar, the sequence is now a reference to an in-memory observable collection:
        var local = Observable.Range(0, 10, Scheduler.NewThread);
        var source = app.DefineObservable(() => local); // can’t serialize ‘local’!
    Example 2: error referencing an unserializable local object
    The Define* methods support definitions of operator tree leaves that target the StreamInsight server. These methods all have the same basic structure. The definition argument is a lambda expression taking between 0 and 16 arguments and returning a source or sink. The method returns a proxy for the source or sink that can then be used for the usual style of LINQ query composition. The “define” methods exploit the compile-time C# feature that converts anonymous functions into translatable expression trees! Query composition exploits the runtime pattern that allows expression trees to be constructed by operators taking queryable and expression (Expression<>) arguments. The practical upshot: once you’ve Defined a source, you can compose LINQ queries in the familiar way using query expressions and operator combinators. Notably, queries can be composed using pull sequences (LINQ to Objects IQueryable<> inputs), push sequences (Reactive IQbservable<> inputs), and temporal sequences (StreamInsight IQStreamable<> inputs). You can even construct processes that span these three domains using “bridge” method overloads (ToEnumerable, ToObservable and To*Streamable).
    Finally, the targeted rewrite via the type irritation pattern is used to ensure that StreamInsight computations can leverage other LINQ providers as well. Consider the following example (it depends on the Interactive Extensions):
        var source = app.DefineEnumerable((int id) =>
            EnumerableEx.Using(
                () => new NorthwindDataContext(),
                context =>
                    from p in context.Products
                    where p.ProductID == id
                    select p.ProductName));
    Within the definition, StreamInsight has no reason to suspect that it ‘owns’ the Queryable.Where and Queryable.Select calls, and it can therefore defer to LINQ to SQL! Let’s use this source in the context of a StreamInsight process:
        var sink = app.DefineObserver(() => Observer.Create<string>(Console.WriteLine));
        var query = from name in source(1).ToObservable()
                    where name == "Widget"
                    select name;
        using (query.Bind(sink).Run("process"))
        {
            ...
        }
    When we run the binding, the source portion, which filters on product ID and projects the product name, is evaluated by SQL Server. Outside of the definition, responsibility for evaluation shifts to the StreamInsight server, where we create a bridge to the Reactive Framework (using ToObservable) and evaluate an additional predicate. It’s incredibly easy to define computations that span multiple domains using these new features in StreamInsight 2.1!
    Regards,
    The StreamInsight Team

    Read the article

  • Technicolor TG582n with external DHCP server [on hold]

    - by Jack
    We have a small home setup with a Technicolor TG582n on Plusnet ISP. We have a Samba4 DC with DNS forwarding enabled. The DC forwards to the Technicolor. However, the client machines have their DNS settings manually set. This is an annoyance when using laptops on other networks. We would like to have DHCP handled on the server machine, such that when a client connects to the Technicolor, it gets its IP and DNS information from the DHCP server, eliminating the need to manually set adapter DNS settings. However, I cannot find an option to disable DHCP on the Technicolor and am not completely clear on how one would point DHCP services to the server from the Technicolor if there were the option. So, how would one make the Technicolor use an external server for DHCP leases?

    Read the article

  • UUID in Mountain Lion

    - by Naji
    I am trying to find my external HDD UUID in Mountain Lion, but diskutil info /dev/disk1s1 returns:
        Najis-MacBook-Air:~ ****$ diskutil info disk1s1
           Device Identifier:        disk1s1
           Device Node:              /dev/disk1s1
           Part of Whole:            disk1
           Device / Media Name:      Untitled 1
           Volume Name:              My Book
           Escaped with Unicode:     My%FF%FE%20%00Book
           Mounted:                  Yes
           Mount Point:              /Volumes/My Book
           Escaped with Unicode:     /Volumes/My%FF%FE%20%00Book
           File System Personality:  NTFS
           Type (Bundle):            ntfs
           Name (User Visible):      Windows NT File System (NTFS)
           Partition Type:           Windows_NTFS
           OS Can Be Installed:      No
           Media Type:               Generic
           Protocol:                 USB
           SMART Status:             Not Supported
           Total Size:               2.0 TB (2000364240896 Bytes) (exactly 3906961408 512-Byte-Blocks)
           Volume Free Space:        212.5 GB (212506509312 Bytes) (exactly 415051776 512-Byte-Blocks)
           Device Block Size:        512 Bytes
           Read-Only Media:          No
           Read-Only Volume:         Yes
           Ejectable:                Yes
           Whole:                    No
           Internal:                 No
    And there is no UUID. What is wrong exactly? Thank you.

    Read the article

  • Should you always pass the bare minimum data needed into a function

    - by Anders Holmström
    Let's say I have a function IsAdmin that checks whether a user is an admin. Let's also say that the admin check is done by matching user id, name and password against some sort of rule (not important). In my head there are then two possible function signatures for this:
        public bool IsAdmin(User user);
        public bool IsAdmin(int id, string name, string password);
    I most often go for the second type of signature, thinking that:
    - The function signature gives the reader a lot more info
    - The logic contained inside the function doesn't have to know about the User class
    - It usually results in slightly less code inside the function
    However, I sometimes question this approach, and also realize that at some point it would become unwieldy. If, for example, a function mapped ten different object fields into a resulting bool, I would obviously send in the entire object. But apart from a stark example like that, I can't see a reason to pass in the actual object. I would appreciate any arguments for either style, as well as any general observations you might offer. I program in both object-oriented and functional styles, so the question should be seen as regarding any and all idioms.

    Read the article

  • Outlook new message size nearly 1mb

    - by Yossi Dahan
    I've been using Outlook 2010 for several weeks with no issues. Suddenly, a few days ago, the size of my outgoing messages got huge. Looking at this, it appears that a huge CSS style block is being created, with around 14,000 definitions for list items, making the message almost 1MB before I've even typed one word. Emails before that point were very small. Needless to say I can't remember changing anything, nor can anyone around here provide any possible explanation... Any ideas?

    Read the article

  • When to use delaycompress option in logrotate?

    - by Anand Chitipothu
    The man page of logrotate says:
    It can be used when some program cannot be told to close its logfile and thus might continue writing to the previous log file for some time.
    I'm confused by this. If a program cannot be told to close its logfile, it will continue to write forever, not for some time. If the compression is postponed to the next rotation cycle, the program continues to write to that file even after the next rotation cycle. How does postponing solve the problem? My understanding is that copytruncate should be used when a program cannot be told to close the logfile. I'm aware that some data written to the logfile gets lost while the copy is in progress. I was looking at the logrotate file for CouchDB, and it had both the copytruncate and delaycompress options:
        /usr/local/couchdb-1.0.1/var/log/couchdb/*.log {
            weekly
            rotate 10
            copytruncate
            delaycompress
            compress
            notifempty
            missingok
        }
    It looks like there is no point in using delaycompress when copytruncate is already there. What am I missing?
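    For contrast, the case the man page has in mind is a rotation that renames the file and then signals the program to reopen its log: delaycompress leaves the previous file uncompressed for one cycle, so any writes that land between the rename and the reopen aren't lost inside a half-written .gz. A hypothetical sketch (the app name and pid file are made up):

        /var/log/myapp/*.log {
            weekly
            rotate 10
            compress
            delaycompress
            postrotate
                # ask the app to reopen its logfile; it may keep writing
                # to the renamed file until it handles the signal
                kill -HUP `cat /var/run/myapp.pid` 2>/dev/null || true
            endscript
        }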

    Read the article

  • Radeon 4670 Card Has 2 DVI Outs But Only Works When One is used in Win7 x64

    - by ssmith
    I have an ATI XFX Radeon 4670 1GB video card in a Windows 7 Ultimate x64 setup. If I only use one monitor, all is well. If I plug a second DVI cable into the card, both monitors go dark. I can get one or the other to display, but only when it is the only one plugged into the card. I'm really hoping for a dual-monitor setup here, as that was the point of the card. I'm using the ATI Radeon HD 4600 Series video drivers dated 11/24/09 (8.681.0.0), which are the latest from their site and are supposed to work with Vista or Win7 x64. When the computer boots with both displays connected, both monitors display the initial BIOS and boot screens. The Win7 splash screen also displays, but by the time it gets to the login screen, one monitor is offline.

    Read the article

  • Client-Server MMOG & data structures sync when joining / playing

    - by plang
    After reading a few articles on MMOG architecture, there is still one point on which I cannot find much information: how you keep server data in sync on the client, both when you join and while you play. A pretty vague question, I agree, so let me refine it: Let's say we have an MMOG virtual world subdivided into geographical cells. A player in a cell is mostly interested in what happens in the cell itself and all the surrounding cells, not more. When joining the game for the first time, the only thing we can do is send some sort of "database dump" of the interesting cells to the client. When playing, I guess it would be very inefficient to do the same thing regularly. I imagine the best thing to do is to send "deltas" to the client, which would allow keeping the local database in sync. Now let's say the player moves and arrives in another cell. The surrounding cells change, and for all the new cells the player subscribes to, the same technique as used when joining the game has to be used: some sort of "database dump". This mechanic of joining/moving in a cell-based MMOG virtual world interests me, and I was wondering if there are tried-and-tested techniques in this domain. Thanks!

    Read the article

  • .NET Dependency Management Systems

    - by StriplingWarrior
    I have some .NET projects that are starting to get large enough to merit looking into dependency-management solutions, so we don't have to copy binaries from one project to another. Here's what I've found so far:
    - NPanday is based on a port of Maven. I can't tell how recently it was worked on, but the last release was in May 2011.
    - NuGet seems to be under active development, and it appears to have support directly from Microsoft. Some people complained that it "only addresses dependency resolution," but I don't know what else it should address, or whether it has added more features since that point. It does appear to have recently added the ability to import binaries as part of the build process, so we don't have to commit them to our repositories.
    - Refix appears to still be in beta, having received no attention since Sept 2011.
    Would somebody with recent experience using any of these dependency-management tools (or any others that work well) share your experience? Is NuGet mature enough to use for dependency management? If not, what does it lack?

    Read the article

  • Samba as a PDC and offline authentication

    - by Aimé Barteaux
    Say I have a Windows laptop which has been joined to a domain. The domain has a Samba server as a PDC. Now say that I move the laptop outside of the network (the network is completely inaccessible). Will I be able to log on to accounts I have accessed before on the laptop (through GINA)?
    Update: Looking at the smb.conf documentation, I noticed the setting winbind offline logon: "This parameter is designed to control whether Winbind should allow login with the pam_winbind module using cached credentials. If enabled, winbindd will store user credentials from successful logins encrypted in a local cache."
    To me it looks like this solves the issue, but can anyone else confirm it and/or point out if any additional values need to be set?

    Read the article
