Search Results

Search found 27207 results on 1089 pages for 'preferred solution'.

Page 283/1089 | < Previous Page | 279 280 281 282 283 284 285 286 287 288 289 290  | Next Page >

  • Why does LogonUser place user profiles in c:\users of the server?

    - by Lalit_M
    We have developed an ASP.NET web application and implemented a custom authentication solution using Active Directory as the credentials store. Our front end uses a normal login form to capture the user name and password and calls the Win32 LogonUser method to authenticate the user's credentials, passing LOGON32_LOGON_NETWORK as the logon type. The issue we have found is that user profile folders are being created under the C:\Users folder of the web server. A folder seems to be created the first time a user who has never logged on before logs in. As the number of new users logging into the application grows, disk space is shrinking due to the large number of new user folders being created. Has anyone seen this behavior with the Win32 LogonUser method? Does anyone know how to disable it? I have tried LOGON32_LOGON_BATCH, but it failed with error 1385 (ERROR_LOGON_TYPE_NOT_GRANTED) when authenticating the user. I need one of the following: 1) a way to stop the profile folders from being generated, or 2) the parameters I need to pass to make this work. Thanks

    Read the article

  • Zend Metadata Cache in file

    - by Matthieu
    I set up a metadata cache in Zend Framework because a lot of DESCRIBE queries were being executed and it affected performance.

        $frontendOptions = array('automatic_serialization' => true);
        $backendOptions  = array('cache_dir' => CACHE_PATH . '/db-tables-metadata');
        $cache = Zend_Cache::factory('Core', 'File', $frontendOptions, $backendOptions);
        Zend_Db_Table::setDefaultMetadataCache($cache);

    I can indeed see the cache files created, and the website works great. However, when I launch unit tests, or a script of the same application that performs DB queries, I end up with an error because Zend couldn't read the cache files. This is because on the website the cache files are created by the www user, and when I run phpunit or a script, it tries to read them with my user and fails. Do you see any solution to that? I have some quick-fix ideas but I'm looking for a good/stable solution. And I'd rather avoid running phpunit or the scripts as www if possible (for practical reasons).

    Read the article

  • How can I get size in bytes of an object sent using RMI?

    - by Lucas Batistussi
    I'm implementing a cache server with MongoDB and the Java ConcurrentHashMap class. When there is space available to put an object in memory, it is put there; otherwise, the object is saved in a MongoDB database. The user is allowed to specify a memory size limit for the cache (which obviously should not exceed the heap size limit!). Clients use the cache service by connecting through RMI. I need to know the size of each object to verify whether a new incoming object can be put into memory. I searched the internet and found this solution for getting the size:

        public long getObjectSize(Object o) {
            try {
                ByteArrayOutputStream bos = new ByteArrayOutputStream();
                ObjectOutputStream oos = new ObjectOutputStream(bos);
                oos.writeObject(o);
                oos.close();
                return bos.size();
            } catch (Exception e) {
                return Long.MAX_VALUE;
            }
        }

    This solution works very well, but in terms of memory use it doesn't solve my problem. :( If many clients are checking object sizes at the same time, this will blow up memory usage, right? Well... some people will say: why don't you get each object's size, store it in memory, and when another object needs to be put in memory, check the stored size? That is not possible because the objects vary in size. :( Can someone help me? I was thinking of getting the socket from the RMI communication, but I don't know how to do that...
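
    A generic illustration of one way around the buffering problem (not the asker's Java/RMI code, and only a sketch): serialize into a sink that counts bytes instead of storing them, so measuring the size does not require holding a full copy of the serialized object. The Python below assumes pickle-serializable objects; the analogous Java idea would be an OutputStream that only counts what is written.

        import pickle

        class CountingSink:
            """File-like object that discards bytes and only counts them."""
            def __init__(self):
                self.size = 0

            def write(self, data):
                self.size += len(data)
                return len(data)

        def serialized_size(obj):
            # Nothing is buffered; only the running byte count is kept.
            sink = CountingSink()
            pickle.dump(obj, sink, protocol=pickle.HIGHEST_PROTOCOL)
            return sink.size

        # Example: serialized_size({"key": list(range(1000))}) returns the pickled size in bytes.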

    Read the article

  • Find existence of number in a sorted list in constant time? (Interview question)

    - by Rich
    I'm studying for upcoming interviews and have encountered this question several times (written roughly verbatim): find, or determine the non-existence of, a number in a sorted list of N numbers, where the numbers range over M, M >> N, and N is large enough to span multiple disks. The algorithm should beat O(log n); bonus points for a constant-time algorithm. First of all, I'm not sure this is a question with a real solution. My colleagues and I have mused over this problem for weeks and it seems ill-formed (of course, just because we can't think of a solution doesn't mean there isn't one). A few questions I would have asked the interviewer: Are there repeats in the sorted list? What's the relationship between the number of disks and N? One approach I considered was to binary search the min/max of each disk to determine which disk should hold the number, if it exists, then binary search on that disk itself. Of course this is only an order-of-magnitude speedup if the number of disks is large and you also have a sorted list of disks. I think this would yield some sort of O(log log n) time. As for the M >> N hint, perhaps if you know how many numbers are on a disk and what their range is, you could use the pigeonhole principle to rule out some cases some of the time, but I can't figure out an order-of-magnitude improvement. Also, "bonus points for constant time algorithm" makes me a bit suspicious. Any thoughts, solutions, or relevant history of this problem?
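
    For uniformly distributed keys, one standard way to beat plain binary search (whether or not it is what the interviewer wanted) is interpolation search, which runs in O(log log n) expected time on uniform data and O(n) in the worst case. A minimal sketch, assuming an in-memory sorted list of numbers:

        def interpolation_search(a, x):
            """Return True if x occurs in the sorted list a."""
            lo, hi = 0, len(a) - 1
            while lo <= hi and a[lo] <= x <= a[hi]:
                if a[lo] == a[hi]:                 # single distinct value left
                    return a[lo] == x
                # Estimate the position of x from the value range, not the midpoint.
                pos = lo + int((hi - lo) * (x - a[lo]) / (a[hi] - a[lo]))
                if a[pos] == x:
                    return True
                if a[pos] < x:
                    lo = pos + 1
                else:
                    hi = pos - 1
            return False

        # interpolation_search([2, 7, 13, 21, 40, 55], 21)  ->  True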

    Read the article

  • jQuery variable and object caching

    - by niksy
    This is something that has been bugging me for some time, and every time I find myself using a different solution for it. I have a link in my document which, on click, creates a new element with some ID:

        <a href="#" id="test-link">Link</a>

    For easier reuse, I would like to store that new element's ID in a variable which is a jQuery object:

        var test = $('#test');

    On click I append the new element (a DIV) to the body:

        $('body').append('<div id="test"/>');

    And here is the main "problem": if I test the new element's length with test.length it first returns 0 and only later 1, but when I test it with $('#test').length it returns 1 from the start. I suppose it is some caching mechanism, and I was wondering whether there is a better, all-around solution which will let me store elements in variables at the start for later reuse and at the same time work with dynamically created elements. Live, delegate, something else? What I do sometimes is create a string and add it to a jQuery object, but I think this is just avoiding the real issue. Also, using .find() inside another jQuery object. Thanks in advance.

    Read the article

  • Managing modes in Windows application working directly with SQL Server 2008

    - by hgulyan
    Hi, I have an MS Access 97 application (but the question is general) working directly with SQL Server 2008 (without an application server or anything in between). The number of users can be up to 1000, and Windows Authentication is used. The question is how to handle modes, so that some users are allowed to work only in read-only mode and some users won't have access to the DB at all for some time. My ideas so far: 1) Use a table with a mode ID for every group of users that will work the same way; on Form Load the application queries that table for the mode ID. 2) Use triggers on the tables that must respect that mode; the trigger queries the mode value and rejects the change if access is closed or the mode is read-only. I know these are not the best solutions, which is why I'm asking for your advice. There's one more point: if the mode is changed to "access is closed" for a group of users, that group must not be able to query the DB from that moment on. With the first solution I described this won't work, because a user can already be in the application at that moment and no Form Load event will fire. How can I do this? Is there an optimal solution? Thank you. Any help would be appreciated.

    Read the article

  • is there any way to get country names from iPhone APIs?

    - by chiemekailo
    Hi, I want to provide a picker for the user to select a country. I've had a look at the NSLocale docs and I can get a list of all of the country codes supported on the device, which feels like a start:

        NSArray *codes = [[NSArray alloc] initWithArray:[NSLocale ISOCountryCodes]];

    But I can't work out whether there is a programmatic way to convert this array into country names... Ideally the list would be localized as well, i.e. show each country name in the user's preferred language. Can anyone point me in the right direction?

    Read the article

  • port an iOS (iPhone) app to mac?

    - by William Jockusch
    Is there a preferred way to go about this? The app in question is not too large... a single-player game that I wrote over the course of a couple of months. EDIT: I should add that I have no experience with Mac development... outside of what comes naturally with being an iOS developer. EDIT: Classes heavily used in the game: subclasses of NSObject, UIView, and UIViewController. I don't know much about NSView, but I'm pretty sure all the UIView stuff will work in that class. Also some use of UITableViewController. I do also have Game Center, but I can leave that part out for now. There is no multi-touch.

    Read the article

  • What would a compress method do in a hash table?

    - by Bradley Oesch
    For an assignment I have to write the code for a generic hash table. In an example put method, there are two lines:

        int hash = key.hashCode();    // get the hash code of the key
        int index = compress(hash);   // compress it to an index

    I was of the understanding that the hashCode method used the key to return an index, and you would place the key/value pair in the array at that index. But here we "compress" the hash code to get the index. What does this method do? How does it "compress" the hash code? Is it necessary and/or preferred?
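
    As context (a sketch of the common approach, not necessarily what the assignment's template intends): hashCode returns an essentially arbitrary 32-bit integer, and compress maps it into the valid index range [0, capacity). A simple version is abs(hash) % capacity; the MAD (multiply-add-divide) scheme below spreads clustered hash codes more evenly.

        def compress(hash_code, capacity, a=92821, b=53987, p=2**31 - 1):
            """Map an arbitrary (possibly negative) hash code into range(capacity).

            MAD scheme: ((a * h + b) mod p) mod capacity, where p is a prime
            larger than capacity and a, b are constants with a % p != 0.
            """
            return ((a * hash_code + b) % p) % capacity

        # compress(hash("some key"), capacity=16)  ->  an index in range(16)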

    Read the article

  • Folding a list in F#

    - by bytebuster
    I have a pretty trivial task but I can't figure out how to make the solution prettier. The goal is to take a list and return results based on whether they pass a predicate; the results should be grouped. Here's a simplified example:

        Predicate: isEven
        Inp: [2; 4; 3; 7; 6; 10; 4; 5]
        Out: [[^^^^]......[^^^^^^^^]..]

    Here's the code I have so far:

        let f p ls =
            List.foldBack (fun el (xs, ys) ->
                if p el then (el::xs, ys)
                else ([], xs::ys)) ls ([], [])
            |> List.Cons                           // (1)
            |> List.filter (not << List.isEmpty)   // (2)

        let even x = x % 2 = 0
        let ret = [2; 4; 3; 7; 6; 10; 4; 5] |> f even   // expected [[2; 4]; [6; 10; 4]]

    This code does not seem very readable. Also, I don't like lines (1) and (2). Is there any better solution?
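
    Not F#, but as a cross-check of the intended semantics, here is a hypothetical Python sketch of the same grouping: split the input into runs of consecutive elements that share the predicate result, and keep only the runs that pass.

        from itertools import groupby

        def groups_passing(pred, xs):
            """Group consecutive elements by pred and keep only the passing groups."""
            return [list(g) for passed, g in groupby(xs, key=pred) if passed]

        even = lambda x: x % 2 == 0
        print(groups_passing(even, [2, 4, 3, 7, 6, 10, 4, 5]))   # [[2, 4], [6, 10, 4]]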

    Read the article

  • Google sheet dynamic WHERE clause for query() statement

    - by jason_cant_code
    I have a data table like so:

        a  1
        a  2
        b  3
        b  4
        c  5
        c  6
        c  7

    I want to pull items out of this table by dynamically telling it which letters to pull. My current formula is:

        =query(A1:B7,"select * where A ='" & D1 & "'")

    D1 being the cell I modify to change the query. I want to be able to input a, a,b, or a,b,c into D1 and have the query work. I know it would involve or conditions in the query, but I haven't figured out how to make the formula dynamic. I am looking for a general solution for this pattern:

        a      --  A = 'a'
        a,b    --  A = 'a' or A = 'b'
        a,b,c  --  A = 'a' or A = 'b' or A = 'c'

    Or any other solution that solves the problem. Edit: So far I have

        =ArrayFormula(CONCATENATE("A='"&split(D3,",")&"' or "))

    which gives A='a' or A='b' or A='c' or for a,b,c; I can't figure out how to remove the trailing or.
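
    The missing piece is joining the conditions with " or " between them rather than appending " or " after every condition. A hypothetical sketch of that string-building logic in Python (outside Sheets, purely to illustrate the pattern):

        def where_clause(column, csv_values):
            """Build "A = 'a' or A = 'b' ..." from a comma-separated string like "a,b"."""
            values = [v.strip() for v in csv_values.split(",") if v.strip()]
            return " or ".join("{} = '{}'".format(column, v) for v in values)

        print(where_clause("A", "a,b,c"))   # A = 'a' or A = 'b' or A = 'c'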

    Read the article

  • Javascript Image object without instantiating

    - by user276027
    This question is about JavaScript performance. Consider three examples for illustration:

        function loadImgA() {
            new Image().src = "http://example.com/image.gif";
        }

        function loadImgA1() {
            Image().src = "http://example.com/image.gif";
        }

        function loadImgB() {
            var testImg = new Image();
            testImg.src = "http://example.com/image.gif";
        }

    The point is that I don't really need to manipulate the image object after it is created, hence loadImgA(). The question is: what happens if nothing is assigned to the return value of the new Image() constructor? In that case, can I actually skip the new keyword as in loadImgA1()? Does the object then live outside the function or somehow affect memory usage? Any other implications or differences? I reckon not, as no real instance was actually created. To put this into perspective, I only need to get the HTTP request for the image through; no preloading or other advanced image manipulation. Which of the above would be the preferred method?

    Read the article

  • Are "strings.xml" string arrays always parsed/deserialized in the same order?

    - by PhilaPhan80
    Can I count on string arrays within the "strings.xml" resource file to be parsed/deserialized in the same order every time? If anyone can cite any documentation that clearly spells out this guarantee, I'd appreciate it. Or, at the very least, offer a significant amount of experience with this topic. Also, is this a best practice or am I missing a simpler solution? Note: this will be a small list, so I'm not looking to implement a more complicated database or custom XML solution unless I absolutely have to.

        <!-- KEYS (ALWAYS CORRESPONDS TO LIST BELOW ??) -->
        <string-array name="keys">
            <item>1</item>
            <item>2</item>
            <item>3</item>
        </string-array>

        <!-- VALUES (ALWAYS CORRESPONDS TO LIST ABOVE ??) -->
        <string-array name="values">
            <item>one</item>
            <item>two</item>
            <item>three</item>
        </string-array>

    Read the article

  • [jquery] Autogrow Textarea '+' Parent div height

    - by Shishant
    Hello, my HTML is like this:

        <div style="height: 90px;" class="tabtextarea">
            <div>Description:</div>
            <textarea style="height: 85px;">TEXT...</textarea>
        </div>

    As you can see, the textarea is contained within a div with a specific height so that it doesn't break the layout in some browsers due to the float styling etc. I am using. I have seen a few jQuery plugins that resize the textarea, but is there any solution with which I can resize the parent div too? There is no need for live resizing; resizing just once when the page is loaded is preferred, as the data is already populated in it. Thank you.

    Read the article

  • Detecting modifier keys held down during startup in OS/X (or Windows)?

    - by Tom Swirly
    I've searched here and not found any question that really covers this. I have a cross-platform Windows-OS/X application in which I'd like to be able to detect whether modifier keys like shift or control are being held down while the application starts up. We'd like to do this to allow the application to start up without reading its preferences file, in case that somehow gets corrupted (we saw in testing a prefs bug, now fixed, that made the window size 0 by 0, for example). We're using the excellent and comprehensive cross-platform C++ library named Juce. Unfortunately, Juce's master tells me that he believes this is impossible on OS/X at least since you only get keyboard events and there is no way to read the state of the keys unless something changes. Is this true? Or is there some way around this? I'm almost sure I've used Mac programs that used this mechanism to bypass their preferences. Or... stepping up one level... is there another solution to providing the functionality of "run the program but don't read the prefs file" other than "holding a key down while launching the program"? This is consumer software so we can't expect too much from the user. The final solution will end up being a cross-platform one so hints on the Windows side will also be appreciated. Thanks, and be well! I'll report in with progress on my end.

    Read the article

  • [PHP] Associating a Function to Fire on session_start()?

    - by user317808
    Hi, I've searched the web but haven't been able to find a solution to the following challenge: I'd like to somehow associate a function that executes when session_start is called, independent of which page session_start is called in. The function is intended to restore constants I've stored in $_SESSION using get_defined_constants() so that they're available again on any PHP page. This seems so straightforward to me, but I'm pretty sure the PHP session extension doesn't support the registration of user-defined events. I was wondering if anyone has insight into this issue so I can either figure out a solution or stop trying. Ideally, I'd like to just register the function at run time like so:

        $constants = get_defined_constants();
        $_SESSION["constants"] = $constants["user"];

        function event_handler() {
            foreach ($_SESSION["constants"] as $key => $value) {
                define($key, $value);
            }
        }

        register_handler("session_start", "event_handler");

    So on any web page I could just call session_start(); and all my constants would be available again. Any help would be greatly appreciated.

    Read the article

  • Finding k elements of length-n list that sum to less than t in O(nlogk) time

    - by tresbot
    This is from Programming Pearls, 2nd edition, Column 2, Problem 8: Given a set of n real numbers, a real number t, and an integer k, how quickly can you determine whether there exists a k-element subset of the set that sums to at most t? One easy solution is to sort and sum the first k elements, which is our best hope of finding such a sum. However, in the solutions section Bentley alludes to a solution that takes n log(k) time, though he gives no hints for how to find it. I've been struggling with this; one thought I had was to go through the list and add all the elements less than t/k (in O(n) time); say there are m1 < k such elements and they sum to s1 < t. Then we are left needing k - m1 elements, so we can scan through the list again in O(n) time looking for all elements less than (t - s1)/(k - m1). Add those in to get s2 and m2, then if m2 < k, look again for all elements less than (t - s2)/(k - m2). So:

        def kSubsetSumUnderT(inList, k, t):
            outList = []
            s = 0
            m = 0
            while len(outList) < k:
                # elements small enough to keep the running sum under t
                toJoin = [i for i in inList if i < (t - s) / (k - m)]
                if len(toJoin):
                    if len(toJoin) >= k - m:
                        toJoin.sort()
                        if s + sum(toJoin[0:(k - m)]) < t:
                            return True
                        return False
                    outList = outList + toJoin
                    s += sum(toJoin)
                    m += len(toJoin)
                else:
                    return False

    My intuition is that this might be the O(n log(k)) algorithm, but I am having a hard time proving it to myself. Thoughts?
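
    For reference, a hedged sketch (not necessarily Bentley's intended solution) of one well-known way to get O(n log k): keep the k smallest elements seen so far in a bounded max-heap, then compare their sum to t, since the k smallest elements are the best possible candidates for the subset.

        import heapq

        def k_smallest_sum_at_most_t(nums, k, t):
            """True if some k-element subset of nums sums to at most t (O(n log k))."""
            if k > len(nums):
                return False
            heap = []                      # negated values: a max-heap of the k smallest so far
            for x in nums:
                if len(heap) < k:
                    heapq.heappush(heap, -x)
                elif x < -heap[0]:         # x beats the current k-th smallest
                    heapq.heapreplace(heap, -x)
            return -sum(heap) <= t

        # k_smallest_sum_at_most_t([5, 1, 9, 2, 7], k=2, t=4)  ->  True  (1 + 2 = 3)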

    Read the article

  • Should I use `import os.path` or `import os`?

    - by Denilson Sá
    According to the official documentation, os.path is a module. Thus, what is the preferred way of importing it?

        # Should I always import it explicitly?
        import os.path

    Or...

        # Is importing os enough?
        import os

    Please DON'T answer "importing os works for me". I know, it works for me too right now. What I want to know is whether there is any official recommendation about this issue. So, if you answer this question, please post your references.
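
    Not an official reference, but the observable behavior can at least be checked directly: in CPython, importing os also loads and registers its platform-specific path module, so os.path is usable either way. A small sketch:

        import os
        import sys

        # After "import os", the path submodule is already loaded and registered,
        # so os.path is usable without a separate import.
        print(os.path.join("a", "b"))               # works with just "import os"
        print("os.path" in sys.modules)             # True: os sets up sys.modules["os.path"]

        import os.path                              # the explicit form; harmless and arguably clearer
        print(os.path is sys.modules["os.path"])    # True: same module object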

    Read the article

  • Use of .apply() with 'new' operator. Is this possible?

    - by Premasagar
    In JavaScript, I want to create an object instance (via the new operator), but pass an arbitrary number of arguments to the constructor. Is this possible? What I want to do is something like this (but the code below does not work):

        function Something() {
            // init stuff
        }

        function createSomething() {
            return new Something.apply(null, arguments);
        }

        var s = createSomething(a, b, c);   // 's' is an instance of Something

    The Answer: from the responses here, it became clear that there's no in-built way to call .apply() with the new operator. However, people suggested a number of really interesting solutions to the problem. My preferred solution was this one from Matthew Crumley (I've modified it to pass the arguments property):

        var createSomething = (function() {
            function F(args) {
                return Something.apply(this, args);
            }
            F.prototype = Something.prototype;
            return function() {
                return new F(arguments);
            };
        })();

    Read the article

  • Duplicate values multi array

    - by BETA911
    As the title states, I'm looking for a way to merge duplicate values in a multidimensional array. PHP is not my world, so I can't come up with a good and fast solution. I basically get this from the database: http://pastebin.com/vYhFCuYw . I want to check on the 'id' key, and if the array contains a duplicate 'id', the 'aantal' values should be added together. So basically the output has to be this: http://pastebin.com/0TXRrwLs . Thanks in advance! EDIT: As asked, attempt 1 out of many:

        function checkDuplicates($array) {
            $temp = array();
            foreach ($array as $k) {
                foreach ($array as $v) {
                    $t_id = $k['id'];
                    $t_naam = $k['naam'];
                    $t_percentage = $k['percentage'];
                    $t_aantal = $k['aantal'];
                    if ($k['id'] == $v['id']) {
                        $t_aantal += $k['aantal'];
                        array_push($temp, array(
                            'id' => $t_id,
                            'naam' => $t_naam,
                            'percentage' => $t_percentage,
                            'aantal' => $t_aantal,
                        ));
                    }
                }
            }
            return $temp;
        }
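
    Language aside, the usual approach is a single pass that accumulates rows by id instead of comparing every row with every other row. A hypothetical Python sketch of that idea (field names taken from the question; the PHP equivalent would key a temporary array by id):

        def merge_duplicates(rows):
            """Sum 'aantal' for rows sharing the same 'id'; keep the other fields."""
            merged = {}
            for row in rows:
                key = row["id"]
                if key in merged:
                    merged[key]["aantal"] += row["aantal"]
                else:
                    merged[key] = dict(row)      # copy so the input rows stay untouched
            return list(merged.values())

        rows = [
            {"id": 1, "naam": "a", "percentage": 10, "aantal": 2},
            {"id": 1, "naam": "a", "percentage": 10, "aantal": 3},
            {"id": 2, "naam": "b", "percentage": 20, "aantal": 1},
        ]
        print(merge_duplicates(rows))   # id 1 ends up with aantal 5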

    Read the article

  • CSS challenge: Two background images, centered column with fixed width, min-height 100%

    - by laurent
    In a nutshell, I need a CSS solution for the following requirements: one centered column with a fixed width and a minimum height of 100%; two vertically repeated background images behind the centered column, one aligned to the left and one aligned to the right; cross-browser compatibility. A little more detail: today a new requirement for my current web site project came up: a background image with gradients on the left and right side. The challenge is now to specify two different background images while keeping the rest of the layout spec. Unfortunately the (simple) layout somehow doesn't go with the two backgrounds. My layout is basically one centered column with a fixed width:

        #main_container {
            margin: 0 auto;
            min-height: 100%;
            width: 800px;
        }

    Furthermore it's necessary to stretch the column to a minimum height of 100%, since there are quite a few pages with only little content. The following CSS styles take care of that:

        html {
            height: 100%;
        }

        body {
            margin: 0;
            height: 100%;
            padding: 0;
        }

    So far so good - until the two-background-image issue arrived... I tried the following solutions: two absolutely positioned divs behind the main container; one image defined on the body, one on the html element; one image defined on the body, the other one on a large div behind the main container. With each of them, the dynamic height solution was ruined: either the main container didn't stretch to 100% when it was too small, or the background remained at 100% when the content was actually longer.

    Read the article

  • SQL SERVER – Introduction to Wait Stats and Wait Types – Wait Type – Day 1 of 28

    - by pinaldave
    I have been working a lot on wait stats and wait types recently. Last year, I asked blog readers to send me their servers' wait stats, and I appreciate their kind response - I have received wait stats from many readers. I took each of the results and carefully analyzed them, and provided the necessary feedback to the person who sent me his wait stats and wait types. Based on the feedback I got, many of the readers tuned their servers. After a while I got further feedback on my recommendations and again collected wait stats. I recorded the wait stats and my recommendations and did further research. At some point in time, there were more than 10 different round trips of recommendations and suggestions. Finally, after six months of hands-on performance tuning, I have collected some real-world wisdom, and now I plan to share my findings with all of you here. Before anything else, please note that all of this is based on my personal observations and opinions. They may or may not match the theory available at other places, and some of the suggestions may not match your situation. Remember, every server is different and consequently there is more than one solution to a particular problem. However, this series is written with wait stats in mind; while I was working on various performance tuning consultations, I did many more things than just tuning wait stats. Today we will discuss how to capture the wait stats. I use the diagnostic script created by my friend and SQL Server expert Glenn Berry to collect wait stats. Here is the script:

        -- Isolate top waits for server instance since last restart or statistics clear
        WITH Waits AS
        (SELECT wait_type, wait_time_ms / 1000. AS wait_time_s,
                100. * wait_time_ms / SUM(wait_time_ms) OVER() AS pct,
                ROW_NUMBER() OVER(ORDER BY wait_time_ms DESC) AS rn
         FROM sys.dm_os_wait_stats
         WHERE wait_type NOT IN ('CLR_SEMAPHORE','LAZYWRITER_SLEEP','RESOURCE_QUEUE','SLEEP_TASK',
                'SLEEP_SYSTEMTASK','SQLTRACE_BUFFER_FLUSH','WAITFOR','LOGMGR_QUEUE','CHECKPOINT_QUEUE',
                'REQUEST_FOR_DEADLOCK_SEARCH','XE_TIMER_EVENT','BROKER_TO_FLUSH','BROKER_TASK_STOP',
                'CLR_MANUAL_EVENT','CLR_AUTO_EVENT','DISPATCHER_QUEUE_SEMAPHORE',
                'FT_IFTS_SCHEDULER_IDLE_WAIT','XE_DISPATCHER_WAIT','XE_DISPATCHER_JOIN',
                'SQLTRACE_INCREMENTAL_FLUSH_SLEEP'))
        SELECT W1.wait_type,
               CAST(W1.wait_time_s AS DECIMAL(12, 2)) AS wait_time_s,
               CAST(W1.pct AS DECIMAL(12, 2)) AS pct,
               CAST(SUM(W2.pct) AS DECIMAL(12, 2)) AS running_pct
        FROM Waits AS W1
        INNER JOIN Waits AS W2 ON W2.rn <= W1.rn
        GROUP BY W1.rn, W1.wait_type, W1.wait_time_s, W1.pct
        HAVING SUM(W2.pct) - W1.pct < 99 -- percentage threshold
        OPTION (RECOMPILE);
        GO

    This script uses the dynamic management view sys.dm_os_wait_stats to collect the wait stats. It omits the system-related wait types which are not useful for diagnosing performance bottlenecks. Additionally, note that OPTION (RECOMPILE) at the end of the query ensures that every time the query runs it retrieves fresh data and not cached data. This dynamic management view accumulates all the information since the time the SQL Server service was last restarted. You can also manually clear the wait stats using the following command:

        DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR);

    Once the wait stats are collected, we can start analyzing them and try to see what is causing a particular wait type to reach a higher percentage than the others. Many wait stats are related to one another. When CPU pressure is high, all the CPU-related wait stats show up on top. But when that is fixed, the wait stats related to the CPU start showing reasonable percentages. It is difficult to give a sure solution, but there are good indications and good suggestions on how to solve this. I will keep this blog post updated as I post more details about wait stats and how I reduce them. The reference to Books Online is over here. Of course, I have selected February to run this Wait Stats series. I am already cheating by having the smallest month to run this series. :) Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: DMV, Pinal Dave, PostADay, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • Blogging: MacJournal & Windows Live Writer

    - by Jeff Julian
    One thing I have learned about using a Mac is that Apple does not produce very many free applications, and the ones they do are typically not full-featured; to get the full features you need to buy their upgraded version. For example, when it comes to photo editing and cataloging, iPhoto is not a solution for large files or RAW processing - you need Aperture, which is a couple hundred dollars. I am not complaining, because I like it when an application has a product team who generates revenue with it, since the chance of them being around longer seems higher. What is my point in all of this? Apple does not produce a product for blogging/journaling like Microsoft does with Windows Live Writer. I love Windows Live Writer. If you are on a Windows box, it is a required tool in your toolbox if you publish to a blog. The cleanness of the interface, the integration with most blog APIs and the ability to save locally or publish as a draft make capturing your thoughts for publishing now or later a very easy task. My hope is that Microsoft will port it to the Mac, but I don't believe that will ever happen, as it is not a revenue-generating product and Microsoft doesn't often port to the Mac besides Remote Desktop Connection and MSN Messenger. For my configuration, I used to use only Boot Camp on the two MacBook Pros I have owned in the past three years, because I'm a PC, but after four different rebuilds (not typically due to Windows, but to Boot Camp or Parallels) I decided to move off the Boot Camp platform and to VMware Fusion. That is a completely separate blog post that I should spec out in MacJournal, but I now always boot into the Mac OS and use Fusion for my AJI Software VM or my clients' VMs. It just seems to work better for me, and I have a very nice way to back up my Windows environments with VMware. Needless to say, there was a need in my new laptop configuration for a blogging tool that worked natively on a Mac. I don't like to power up my machine to write a document or work on an image and have to boot up a VM just so I can use Windows. Some would say: why not just use a Windows laptop and put the MBP on eBay? It is just a preference, and right now I like the Mac OS for day-to-day work. So in comes MacJournal, part of the current MacHeist package for $19.95 (MacJournal is normally $39.95). This product is definitely not WLW, but WLW is missing some features I like in MacJournal. I hope the price point comes down on MacJournal, because I could see paying $19.95 for it, but it is always hard for me to buy a piece of software for $39.95 when I can use something else. But I am a cheapskate when it comes to software packages. I suggest that if you are using a Mac you drop what you are doing and pick up the MacHeist bundle today before it is over; if you are reading this later, then download the trial and see if MacJournal is a solution for you. If you have any other suggestions that are as nice or cheaper, please comment. Product links: MacJournal by Mariners Software, $39.95 (part of the MacHeist bundle for $19.95, with only one day left); Windows Live Writer by Microsoft. This post was created using MacJournal. [Update: The joys of formatting. Make sure if you are a Geekswithblogs.net member that you use this configuration to set up the MetaBlog formatting of paragraphs correctly]

    Read the article

  • How to configure Visual Studio 2010 code coverage for ASP.NET MVC unit tests

    - by DigiMortal
    I just got Visual Studio 2010 code coverage to work with ASP.NET MVC application unit tests. Everything is simple after you have spent some time with forums, blogs and Google. To save your valuable time, I wrote this posting to guide you through the process of making code coverage work with ASP.NET MVC application unit tests. After some fighting with Visual Studio I got everything to work as expected. I am still not very sure why users must deal with this mess, but okay - I survived it. Before you start configuring Visual Studio, I expect your solution to meet the following needs: there is at least one library that will be tested; there is at least one library that contains the tests to be run; there are some classes and some tests for them; and, of course, you are using a version of Visual Studio 2010 that supports tests (I have Visual Studio 2010 Ultimate). Now follow the steps given below (they take place in the Visual Studio 2010 Test Settings window). 1. Double-click Local.testsettings under Solution Items; the test settings window will open. 2. Select "Data and Diagnostics" from the left pane. 3. Mark the checkboxes "ASP.NET Profiler" and "Code Coverage". 4. Move the cursor to the "Code Coverage" line and press the Configure button, or double-click the line; the assemblies selection window will open. 5. Mark the checkboxes in front of the assemblies you want code coverage reports for and apply the settings. 6. Save your project and close Visual Studio. 7. Run Visual Studio as Administrator and run the tests. NB! Select Test => Run => Tests in Current Context from the menu. When the tests have run, you can open the code coverage results by selecting Test => Windows => Code Coverage Results from the menu. Here you can see my example test results (Visual Studio 2010 Test Results window; all my tests passed this time :)) and the code coverage results (Visual Studio 2010 Code Coverage Results window; I need a lot more tests, for sure). As you can see, everything was pretty simple, but it took me some time to figure out how to get everything to work as expected. Problems? You may face some problems when making code coverage work; here is my short list of possible ones. Make sure you have all assemblies available for code coverage - in some cases more libraries need to be referenced than you currently have (for example, I had to add some more Enterprise Library assemblies to my project). You can use Event Viewer to discover errors that were logged during testing. Make sure you selected all testable assemblies in the Code Coverage settings as shown above, otherwise you may get empty results. Tests with code coverage are slower because of the ASP.NET profiler, so if your machine slows down, try to free more resources.

    Read the article
