Search Results

Search found 4884 results on 196 pages for 'ad hoc distribution'.


  • Why use Entity Framework over Linq2SQL if...

    - by Refracted Paladin
    To be clear, I am not asking for a side-by-side comparison, which has already been asked ad nauseam here on SO. I am also not asking whether Linq2Sql is dead, as I don't care. What I am asking is this: I am building internal apps only, for a non-profit organization. I am the only developer on staff. We ALWAYS use SQL Server as our database backend. I design and build the databases as well. I have already used L2S successfully a couple of times. Taking all this into consideration, can someone offer me a compelling reason to use EF instead of L2S? I was at Code Camp this weekend, and after an hour-long demonstration on EF, all of which I could have done in L2S, I asked this same question. The speaker's answer was, "L2S is dead..." Very well then! NOT! (see here) I understand EF is what MS WANTS us to use in the future (see here) and that it offers many more customization options. What I can't figure out is whether any of that should, or does, matter for me in this environment. One particular issue we have here is that I inherited the Core App, which was built on 4 different SQL databases. L2S has great difficulty with this, but when I asked the aforementioned speaker whether EF would help me in this regard, he said "No!"

    Read the article

  • How can I use Perl to determine whether the contents of two files are identical?

    - by Zaid
    This question comes from a need to ensure that changes I've made to code don't affect the values it outputs to a text file. Ideally, I'd roll a sub that takes in two filenames and returns 1 or 0 depending on whether the contents are identical or not, whitespace and all. Given that text processing is Perl's forte, it should be quite easy to compare two files and determine whether they are identical (code below untested):

        use strict;
        use warnings;

        sub files_match {
            my ( $fileA, $fileB ) = @_;
            open my $file1, '<', $fileA or die "Can't open $fileA: $!";
            open my $file2, '<', $fileB or die "Can't open $fileB: $!";
            while ( my $lineA = <$file1> ) {
                next if $lineA eq <$file2>;
                return 0;    # first mismatch: the files differ
            }
            return 1;        # NB: still assumes both files have the same line count
        }

    The only way I can think of (sans CPAN modules) is to open the two files in question and read them in line by line until a difference is found. If no difference is found, the files must be identical. But this approach is limited and clumsy. What if the total lines differ in the two files? Should I open and close to determine line count, then re-open to scan the texts? Yuck. I don't see anything in perlfaq5 relating to this. I want to stay away from modules unless they come with the core Perl 5.6.1 distribution.
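    For what it's worth, there is a core-module route that satisfies the 5.6.1 constraint: File::Compare has shipped with Perl since 5.004. A minimal sketch:

        use File::Compare;

        # compare() returns 0 if the files are equal, 1 if they differ, -1 on error.
        if ( compare($fileA, $fileB) == 0 ) {
            print "Contents are identical\n";
        }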

    Read the article

  • Why can't we just use a hash of passphrase as the encryption key (and IV) with symmetric encryption algorithms?

    - by TX_
    Inspired by my previous question, now I have a very interesting idea: do you really ever need to use Rfc2898DeriveBytes or similar classes to "securely derive" the encryption key and initialization vector from the passphrase string, or will a simple hash of that string work equally well as a key/IV when encrypting data with a symmetric algorithm (e.g. AES, DES, etc.)? I see tons of AES encryption code snippets where the Rfc2898DeriveBytes class is used to derive the encryption key and initialization vector (IV) from the password string. It is assumed that one should use a random salt and a shitload of iterations to derive a secure enough key/IV for the encryption. While deriving bytes from a password string using this method is quite useful in some scenarios, I think it's not applicable when encrypting data with symmetric algorithms! Here is why: using a salt makes sense when there is a possibility of building precalculated rainbow tables, so that when an attacker gets his hands on a hash he can look up the original password. But... with symmetric data encryption, I think this is not required, as the hash of the password string, i.e. the encryption key, is never stored anywhere. So, if we just take the SHA1 hash of the password and use it as the encryption key/IV, isn't that going to be equally secure? What is the purpose of using the Rfc2898DeriveBytes class to generate a key/IV from a password string (which is a very, very performance-intensive operation), when we could just use a SHA1 (or any other) hash of that password? A hash would give a random bit distribution in the key (as opposed to using the string bytes directly). And an attacker would have to brute-force the whole key space (e.g. for a 256-bit key, 2^256 combinations) anyway. So either I'm wrong in a dangerous way, or all those samples of AES encryption (including many upvoted answers here on SO) that use the Rfc2898DeriveBytes method to generate the encryption key and IV are just wrong.
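    For reference, a minimal sketch of the derivation pattern under discussion (the salt size, iteration count, and passphrase here are illustrative values, not recommendations):

        using System.Security.Cryptography;

        class KdfSketch
        {
            static void Main()
            {
                // Random 16-byte salt; in practice it is stored alongside the ciphertext.
                byte[] salt = new byte[16];
                RandomNumberGenerator.Create().GetBytes(salt);

                // PBKDF2 (RFC 2898): a deliberately slow, salted derivation.
                var kdf = new Rfc2898DeriveBytes("passphrase", salt, 10000);
                byte[] key = kdf.GetBytes(32); // 256-bit AES key
                byte[] iv  = kdf.GetBytes(16); // 128-bit IV (AES block size)
            }
        }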

    Read the article

  • What is wrong with this php loop?

    - by Mark R
    I made a loop but it doesn't work. Here's what I did:

        <?php if(is_tree('4')) { ?>
        <?php
        $show_after_p = 2;
        $content = apply_filters('the_content', $post->post_content);
        if(substr_count($content, '<p>') > $show_after_p) {
            $contents = explode("</p>", $content);
            $p_count = 1;
            foreach($contents as $content) {
                echo $content;
                if($p_count == $show_after_p) { ?>
        YOUR AD CODE GOES HERE
        < ? }
                echo "</p>";
                $p_count++;
            }
        }
        ?>
        <?php else : ?>
        <?php the_content('<p class="serif">Read the rest of this page &raquo;</p>'); ?>
        <?php endif; } ?>

    I need to make it work but don't know how. I'm guessing it's a simple syntax error I'm not seeing? (One possible fix is sketched below.)
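    Two things stand out: "< ?" is not a valid PHP open tag, and the code mixes brace syntax ("{") with the alternative syntax ("else :" ... "endif;") for the same if statement, which PHP rejects. A possible rewrite using the alternative syntax throughout (a sketch, untested against the original templates):

        <?php if (is_tree('4')) : ?>
        <?php
        $show_after_p = 2;
        $content = apply_filters('the_content', $post->post_content);
        if (substr_count($content, '<p>') > $show_after_p) {
            $contents = explode('</p>', $content);
            $p_count = 1;
            // Loop over a differently named variable so $content isn't clobbered.
            foreach ($contents as $chunk) {
                echo $chunk;
                if ($p_count == $show_after_p) { ?>
        YOUR AD CODE GOES HERE
        <?php }
                echo '</p>';
                $p_count++;
            }
        }
        ?>
        <?php else : ?>
        <?php the_content('<p class="serif">Read the rest of this page &raquo;</p>'); ?>
        <?php endif; ?>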

    Read the article

  • Split a string by comma, quote and full-stop.. with a few exceptions

    - by dunc
    I've got a lot of text, similar to the following paragraph, which I'd like to split into words without punctuation (', ", ,, ., newline etc.)... with a few exceptions.

        Initially considered endemic to the Chalakudy River system in Kerala state, southern India, but now recognised to have a wider distribution in surrounding drainages including the Periyar, Manimala, and Pamba river though the Manimala data may be questionable given it seems to be the type locality of P. denisonii. In the Achankovil River basin it occurs sympatrically, and sometimes syntopically, with P. denisonii. Wild stocks may have dwindled by as much as 50% in the last 15 years or so with collection for the aquarium trade largely held responsible although habitats are also being degraded by pollution from agricultural and domestic sources, plus destructive fishing methods involving explosives or organic toxins.

    The text refers to P. denisonii, which is a species of fish; it's an abbreviation of Genus species. I would like such a reference to be one word. So, for instance, this is the kind of array I'd like to see:

        Array
        (
            ...
            [44] => given
            [45] => it
            [46] => seems
            [47] => to
            [48] => be
            [49] => the
            [50] => type
            [51] => locality
            [52] => of
            [53] => P. denisonii
            [54] => In
            [55] => the
            ...
        )

    The only things that distinguish a species reference such as "P. denisonii" from a sentence boundary such as "end. New" are: the P (for Puntius, as in the P. in the aforementioned example) is only ever one letter, always a capital; and the d (as in ". denisonii") is always either a lower-case letter or an apostrophe ('). What regexp can I use with preg_split to give me such an array? I've tried a simple explode( " ", $array ) but it doesn't do the job at all. Thanks in advance,
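    One approach worth sketching (the pattern below encodes just the two rules above and nothing else): match species references before ordinary words with preg_match_all, rather than splitting on delimiters:

        $pattern = "/[A-Z]\.\s+[a-z'][a-z]*|[A-Za-z0-9']+/";
        preg_match_all($pattern, $text, $matches);
        $words = $matches[0]; // "P. denisonii" survives as a single token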

    Read the article

  • License For (Mostly) Open Source Website / Service

    - by Ryan Sullivan
    I have an interesting setup and am not sure how to license a website. I know this is not legal advice, and I am not asking for any. There are so many different open source licenses, and I do not have the time to read every last one to see which best fits my situation. Really, I am looking for suggestions and a nudge in the right direction. My setup is: I give away a free version of my web service with a clean website interface. The implementation I use on the actual website is (almost) identical to what I give away. The main service works exactly the same way, but the website interface to manage features in the service is fairly different. Really, the web interfaces have the same exact backend, and the front ends accomplish the same tasks, but the service I offer on my site is very rich and uses a good deal of JavaScript, whereas I kept the interface in the version I give away as simple and JavaScript-less as possible, mostly so it is easy to understand and integrate into other sites. I am not entirely sure how I should license this. It is more like I develop an open source service but have a separate site built upon it. I like the GPLv3, but I am not sure if I can use it in this case, especially since I am making some money off of Google ads on the site and plan on using Amazon affiliate links as well. Any help would be greatly appreciated. I do want to open it up as much as possible, but I still want to be able to continue with my own implementation. Thanks in advance for any information or help anyone can provide.

    Read the article

  • Jquery - Setting form select-field "selected" value only works after refresh

    - by frequent
    Hi, I want to use a form select field for users to pick their country of residence, which shows the IP-based country as the default value (plus the IP address next to it). Location info (ipinfodb.com) is working. I'm passing "country" and "ip" to the function, which should modify the select field and the IP address. Problem: the IP address works, but the select field only updates after I hit refresh. Can someone tell me why? Here is the code:

    HTML:

        <select name="setup_changeCountry" id="setup_changeCountry">
            <option value="AL-Albania">Albanien</option>
            <option value="AD-Andorra">Andorra</option>
            <option value="AM-Armenia">Armenien</option>
            <option value="AU-Australia">Australien</option>
            ...
        </select>
        <div class="setup_IPInfo">
            <span>Your IP</span>
            <span class="ipAdress"> -- ip --</span>
        </div>

    JavaScript/jQuery:

        // morph() is passed country and ip; called on DOMContentLoaded
        function morph(country, ip) {
            var ipAdress = ip;
            $('.ipAdress').text(ipAdress);
            var countryForm = country;
            $('#setup_changeCountry option').each(function() {
                if ($(this).val().substr(0, 2) == countryForm) {
                    $(this).attr('selected', "true");
                }
            });
        }

    Thanks for any clues on how to fix this. Frequent
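    Not an answer as such, but a common simplification worth trying (a sketch, assuming every option value starts with the two-letter country code): set the select's value instead of flipping the selected attribute, which older jQuery/browser combinations don't always re-render:

        function morph(country, ip) {
            $('.ipAdress').text(ip);
            $('#setup_changeCountry option').each(function () {
                if ($(this).val().substr(0, 2) === country) {
                    // .val() updates the select's current selection directly.
                    $('#setup_changeCountry').val($(this).val());
                    return false; // stop after the first match
                }
            });
        }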

    Read the article

  • Setting pixel values in Nvidia NPP ImageCPU objects?

    - by solvingPuzzles
    In the Nvidia Performance Primitives (NPP) image-processing examples in the CUDA SDK distribution, images are typically stored on the CPU as ImageCPU objects, and on the GPU as ImageNPP objects. boxFilterNPP.cpp is an example from the CUDA SDK that uses these ImageCPU and ImageNPP objects. When using a filter (convolution) function like nppiFilter, it makes sense to define the kernel as an ImageCPU object. However, I see no clear way of setting the values of an ImageCPU object:

        npp::ImageCPU_32f_C1 hostKernel(3, 3); // allocate space for a 3x3 convolution kernel
        // want to set hostKernel to [-1 0 1; -1 0 1; -1 0 1]
        hostKernel[0][0] = -1;   // this doesn't compile
        hostKernel(0,0) = -1;    // this doesn't compile
        hostKernel.at(0,0) = -1; // this doesn't compile

    How can I manually put values into an ImageCPU object? Notes: I didn't actually use nppiFilter in the code snippet; I'm just mentioning nppiFilter as a motivating example for writing values into an ImageCPU object. The boxFilterNPP.cpp example doesn't involve writing directly to an ImageCPU object, because nppiFilterBox is a special case of nppiFilter that uses a built-in averaging (box) smoothing filter (probably something like [1 1 1; 1 1 1; 1 1 1]).
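    One route worth trying (a sketch, assuming the SDK's ImagesCPU.h helper exposes data(), pitch(), and height() the way the bundled samples use them): walk the rows through the raw pointer, remembering that pitch() is the row stride in bytes rather than in pixels:

        npp::ImageCPU_32f_C1 hostKernel(3, 3);

        for (unsigned int y = 0; y < hostKernel.height(); ++y)
        {
            // Advance by pitch() bytes per row, then index pixels within the row.
            Npp32f *row = reinterpret_cast<Npp32f *>(
                reinterpret_cast<Npp8u *>(hostKernel.data()) + y * hostKernel.pitch());
            row[0] = -1.0f;  row[1] = 0.0f;  row[2] = 1.0f;
        }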

    Read the article

  • Strange code behaviour?

    - by goldenmean
    Hi, I have C code in which I have a structure declaration containing an array of int[576]. For some reason I had to remove this array from the structure, so I replaced it with a pointer (int *ptr;), declared a global array of the same type elsewhere in the code, and initialized the pointer by assigning the global array to it. That way I did not have to change how I was accessing the array from other parts of my code. It works fine and gives the desired output when the array is declared inside the structure, but it gives junk output when I declare it as a pointer in the structure and assign a global array to that pointer as part of the initialization. All this code is run on MS-VC 6.0/Windows/Intel x86. I tried the following things:

    1) Suspected structure padding/alignment, but could not get any leads. If structure alignment could be the culprit, how can I narrow it down and confirm it?
    2) I have made sure that in both cases the array is initialized to some default values, say 0, before its first use, and it's not being used before initialization.
    3) I tried a global array as well as malloc-based memory for this newly declared array. Same result: junk output.

    Am I missing something? How can I zero in on the problem? Any pointers would be helpful. Thanks, -AD.
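    One classic way this kind of change bites (a hypothetical sketch, since the original structure isn't shown): any code that treats the struct as a flat block of memory, e.g. memset, memcpy, or fwrite with sizeof, behaves very differently once the array member becomes a pointer:

        #include <string.h>

        struct frame {
            int *coeffs;           /* was: int coeffs[576]; */
        };

        int global_coeffs[576];

        void init(struct frame *f)
        {
            f->coeffs = global_coeffs;
            /* Harmless with the old array member, but now this wipes the
               pointer itself, so every later access reads through junk. */
            memset(f, 0, sizeof *f);
            /* sizeof also shrank from 576 * sizeof(int) to sizeof(int *), so
               memcpy/fwrite of the whole struct no longer moves the 576 ints. */
        }

    Searching the code for sizeof applied to the struct (or to the old member) is a quick way to flush out such spots.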

    Read the article

  • Fastest method to define whether a number is a triangular number

    - by psihodelia
    A triangular number is the sum of the n natural numbers from 1 to n. What is the fastest method to find whether a given positive integer number is a triangular one? I suppose, there must be a hidden pattern in a binary representation of such numbers (like if you need to find whether a number is even/odd you check its least significant bit). Here is a cut of the first 1200th up to 1300th triangular numbers, you can easily see a bit-pattern here (if not, try to zoom out): (720600, '10101111111011011000') (721801, '10110000001110001001') (723003, '10110000100000111011') (724206, '10110000110011101110') (725410, '10110001000110100010') (726615, '10110001011001010111') (727821, '10110001101100001101') (729028, '10110001111111000100') (730236, '10110010010001111100') (731445, '10110010100100110101') (732655, '10110010110111101111') (733866, '10110011001010101010') (735078, '10110011011101100110') (736291, '10110011110000100011') (737505, '10110100000011100001') (738720, '10110100010110100000') (739936, '10110100101001100000') (741153, '10110100111100100001') (742371, '10110101001111100011') (743590, '10110101100010100110') (744810, '10110101110101101010') (746031, '10110110001000101111') (747253, '10110110011011110101') (748476, '10110110101110111100') (749700, '10110111000010000100') (750925, '10110111010101001101') (752151, '10110111101000010111') (753378, '10110111111011100010') (754606, '10111000001110101110') (755835, '10111000100001111011') (757065, '10111000110101001001') (758296, '10111001001000011000') (759528, '10111001011011101000') (760761, '10111001101110111001') (761995, '10111010000010001011') (763230, '10111010010101011110') (764466, '10111010101000110010') (765703, '10111010111100000111') (766941, '10111011001111011101') (768180, '10111011100010110100') (769420, '10111011110110001100') (770661, '10111100001001100101') (771903, '10111100011100111111') (773146, '10111100110000011010') (774390, '10111101000011110110') (775635, '10111101010111010011') (776881, '10111101101010110001') (778128, '10111101111110010000') (779376, '10111110010001110000') (780625, '10111110100101010001') (781875, '10111110111000110011') (783126, '10111111001100010110') (784378, '10111111011111111010') (785631, '10111111110011011111') (786885, '11000000000111000101') (788140, '11000000011010101100') (789396, '11000000101110010100') (790653, '11000001000001111101') (791911, '11000001010101100111') (793170, '11000001101001010010') (794430, '11000001111100111110') (795691, '11000010010000101011') (796953, '11000010100100011001') (798216, '11000010111000001000') (799480, '11000011001011111000') (800745, '11000011011111101001') (802011, '11000011110011011011') (803278, '11000100000111001110') (804546, '11000100011011000010') (805815, '11000100101110110111') (807085, '11000101000010101101') (808356, '11000101010110100100') (809628, '11000101101010011100') (810901, '11000101111110010101') (812175, '11000110010010001111') (813450, '11000110100110001010') (814726, '11000110111010000110') (816003, '11000111001110000011') (817281, '11000111100010000001') (818560, '11000111110110000000') (819840, '11001000001010000000') (821121, '11001000011110000001') (822403, '11001000110010000011') (823686, '11001001000110000110') (824970, '11001001011010001010') (826255, '11001001101110001111') (827541, '11001010000010010101') (828828, '11001010010110011100') (830116, '11001010101010100100') (831405, '11001010111110101101') (832695, '11001011010010110111') (833986, '11001011100111000010') (835278, '11001011111011001110') 
(836571, '11001100001111011011') (837865, '11001100100011101001') (839160, '11001100110111111000') (840456, '11001101001100001000') (841753, '11001101100000011001') (843051, '11001101110100101011') (844350, '11001110001000111110') For example, can you also see a rotated normal distribution curve, represented by zeros between 807085 and 831405?
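    A closed-form test avoids bit-pattern hunting entirely: T = n(n+1)/2, so 8T + 1 = (2n+1)^2, meaning x is triangular exactly when 8x + 1 is a perfect square. A sketch in Python (math.isqrt needs Python 3.8+):

        from math import isqrt

        def is_triangular(x):
            """x is triangular iff 8x + 1 is a perfect square."""
            if x < 0:
                return False
            r = isqrt(8 * x + 1)
            return r * r == 8 * x + 1

        print(is_triangular(720600))  # True, per the first pair listed above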

    Read the article

  • Which Android hardware devices should I test on? [closed]

    - by Tchami
    Possible Duplicate: What hardware devices do you test your Android apps on? I'm trying to compile a list of Android hardware devices that it would make sense to buy and test against if you want to target as broad an audience as possible, while still not buying every single Android device out there. I know there's a lot of information regarding screen sizes and Android versions available elsewhere, but when developing for Android it's not terribly useful to know whether the screen size of a device is 480x800 or 320x240, unless you feel like doing the math to convert that into Android "units" (i.e. small, normal, large or xlarge screens, and ldpi, mdpi, hdpi or xhdpi densities). Even knowing the dimensions of a device, you cannot be sure of the actual Android units, as there's some overlap; see Range of screens supported in the Android documentation. Taking into account the distribution of platform versions and screen sizes and densities, below is my current list, based on information from the Wikipedia article on Comparison of Android devices. I'm fairly sure the information in this list is correct, but I'd welcome any suggestions/changes.

    Phones:

        | Model                   | Android Version | Screen Size | Density |
        | HTC Wildfire            | 2.1/2.2         | Normal      | mdpi    |
        | HTC Tattoo              | 1.6             | Normal      | mdpi    |
        | HTC Hero                | 2.1             | Normal      | mdpi    |
        | HTC Legend              | 2.1             | Normal      | mdpi    |
        | Sony Ericsson Xperia X8 | 1.6/2.1         | Normal      | mdpi    |
        | Motorola Droid          | 2.0-2.2         | Normal      | hdpi    |
        | Samsung Galaxy S II     | 2.3             | Normal      | hdpi    |
        | Samsung Galaxy Nexus    | 4.0             | Normal      | xhdpi   |
        | Samsung Galaxy S III    | 4.0             | Normal      | xhdpi   |

    Tablets:

        | Model                   | Android Version | Screen Size | Density |
        | Samsung Galaxy Tab 7"   | 2.2             | Large       | hdpi    |
        | Samsung Galaxy Tab 10"  | 3.0             | X-Large     | mdpi    |
        | Asus Transformer Prime  | 4.0             | X-Large     | mdpi    |
        | Motorola Xoom           | 3.1/4.0         | X-Large     | mdpi    |

    N.B.: I have seen (and read) other posts on SO on this subject, e.g. Which Android devices should I test against? and What hardware devices do you test your Android apps on?, but they don't seem very canonical. Maybe this should be marked community wiki?

    Read the article

  • How is this function being made use of?

    - by Kay
    Hello all, I am studying a few classes given to me by my lecturer, and I can't understand how the function heapRebuild is being made use of. It doesn't change any global variables, it doesn't print anything, and it doesn't return anything - so should this even work? It shouldn't, should it? If you were told to make use of heapRebuild to write a new function removeMax, would you edit heapRebuild?

        public class MaxHeap<T extends Comparable<T>> implements Heap<T> {
            private T[] heap;
            private int lastIndex;

            public T removeMax() {
                T rootItem = heap[0];
                heap[0] = heap[lastIndex - 1];
                lastIndex--;
                heapRebuild(heap, 0, lastIndex);
                return rootItem;
            }

            protected void heapRebuild(T[] items, int root, int size) {
                int child = 2 * root + 1;
                if (child < size) {
                    int rightChild = child + 1;
                    if ((rightChild < size) && (items[rightChild].compareTo(items[child]) > 0)) {
                        child = rightChild;
                    }
                    if (items[root].compareTo(items[child]) < 0) {
                        T temp = items[root];
                        items[root] = items[child];
                        items[child] = temp;
                        heapRebuild(items, child, size);
                    }
                }
            }
        }
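    One detail worth noting when reading it: Java passes the array reference by value, so the swaps inside heapRebuild land in the caller's array even though nothing is returned or printed. A minimal sketch of that mechanism:

        class ArrayMutationDemo {
            static void zeroFirst(int[] items) {
                items[0] = 0; // writes through the shared reference into the caller's array
            }

            public static void main(String[] args) {
                int[] data = {9, 4, 7};
                zeroFirst(data);              // returns nothing, prints nothing
                System.out.println(data[0]); // prints 0: the caller still sees the change
            }
        }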

    Read the article

  • How should I solve this MySql problem (PHP) ? (Beginner)

    - by Camran
    I have several tables in a MySQL database. I have a classifieds website, and at the bottom I display the user's last-visited classifieds. I do this by storing the IDs of the ads in an array in a cookie. Now, my db is made up roughly like this:

        Main table: // stores global information; these fields are filled in every record, never blank
            ID
            Price
            Category
            Seller

        Item table: // stores descriptive info about what's for sale
            ID
            AD_ID (FK) // this is the same as ID in the main table
            Color
            Size
            Mileage
            etc.

    My problem is that I need to know what category the ad is in, in order to query MySQL for the right information. So I need two variables, but the cookie only has one (ID) stored. Of course I could make two queries: the first just matching the ID against the main table and fetching the category from it, then a second query fetching all other info from the right table. Here is an example if the category was Vehicles:

        SELECT * FROM main_table, vehicles_table
        WHERE main_table.id = $id_from_cookie
          AND main_table.id = vehicles_table.ad_id

    As you can see above, I need the category to know which table to check, right? But I think there must be a smarter way, like fetching them in one single query using only one variable (the ID from the cookie). How should I do this? Understand? Let me know if you need more input... Thanks
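    One common pattern (a sketch against the table names above; the category-to-table map is illustrative) is a cheap first query that fetches only the category, which then tells you which detail table to join:

        // Step 1: look up the category for the ID stored in the cookie.
        $result = mysqli_query($dbc, "SELECT category FROM main_table WHERE id = " . (int) $id_from_cookie);
        $row = mysqli_fetch_assoc($result);

        // Step 2: map the category to its detail table (a whitelist, so no
        // table name is ever built from raw input), then join on it.
        $tables = array('Vehicles' => 'vehicles_table', 'Clothes' => 'clothes_table');
        $table  = $tables[$row['category']];
        $query  = "SELECT * FROM main_table m JOIN $table t ON t.ad_id = m.id WHERE m.id = " . (int) $id_from_cookie;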

    Read the article

  • Why does this sql statement keep saying it is a boolean and not a parameter? (php/Mysql)

    - by ggfan
    In this statement, I am trying to see if the latest posting in the database has the exact same title, price, city, state and detail. If so, the user is told that an identical post has already been made; if not, the posting is inserted into the dbc. (This is one type of check so that users can't accidentally post twice. It may not be the best check, but this statement error is annoying me, so I want it to work :)) Why won't this SQL work? I think it's not letting me set title=$title and not getting the value into $title...

    ERROR: mysqli_num_rows() expects parameter 1 to be mysqli_result, boolean given in postad.php on line 365

        //there is a form that users fill out that has title, price, city, etc
        <form>
        blah blah
        </form>

        //if users click submit, then it does all the checks and, if all okay, inserts into dbc
        if (isset($_POST['submit'])) {
            // Grab the posting data from the POST and get rid of any funny stuff
            $title  = mysqli_real_escape_string($dbc, trim($_POST['title']));
            $price  = mysqli_real_escape_string($dbc, trim($_POST['price']));
            $city   = mysqli_real_escape_string($dbc, trim($_POST['city']));
            $state  = mysqli_real_escape_string($dbc, trim($_POST['state']));
            $detail = mysqli_real_escape_string($dbc, trim($_POST['detail']));

            if (!is_numeric($price) && !empty($price)) {
                echo "<p class='error'>The price can only be numbers. No special characters, etc</p>";
            }

            // Error problem... won't let me set title=$title, detail=$detail, etc.
            // (This statement runs after all the checks, so none of the variables are empty.)
            $query = "Select * FROM posting WHERE user_id={$_SESSION['user_id']} AND title=$title AND price=$price AND city=$city AND state=$state AND detail=$detail";
            $data = mysqli_query($dbc, $query);

            if (mysqli_num_rows($data) == 1) {
                echo "You already posted this ad. Most likely caused by refreshing too many times.";
            }
        }
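    For what it's worth, the boolean in the error usually means mysqli_query() returned false because the query itself failed: here, string columns are compared against unquoted values, so MySQL chokes as soon as $title contains text. Quoting the string parameters (a sketch) and checking for failure typically clears it up:

        $query = "SELECT * FROM posting
                  WHERE user_id = {$_SESSION['user_id']}
                    AND title  = '$title'
                    AND price  = '$price'
                    AND city   = '$city'
                    AND state  = '$state'
                    AND detail = '$detail'";
        $data = mysqli_query($dbc, $query);

        if ($data === false) {
            // Surface the real MySQL error instead of the mysqli_num_rows() warning.
            die(mysqli_error($dbc));
        }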

    Read the article

  • Company Review: Google Products

    Google, Inc. offers an array of products and services to all of its end-users. However, its search capabilities are the foundation of Google's current success and its primary business focus. Currently, Google offers over twenty different search applications that allow users to search the internet for books, maps, videos, images, products and much more. Their product decisions have allowed users' demands to be met while focusing on a free-based model. This allows users to access Google data free of charge and, along with the accuracy of the search results, indirectly gives Google a strong competitive advantage over other competitors. According to Google, Inc., they offer the following types of searching capabilities:

        Alerts: Get email updates on the topics of your choice
        Blog Search: Find blogs on your favorite topics
        Books: Search the full text of books
        Custom Search: Create a customized search experience for your community
        Desktop: Search and personalize your computer
        Dictionary: Search for definitions of words and phrases
        Directory: Search the web, organized by topic or category
        Earth: Explore the world from your computer
        Finance: Business info, news and interactive charts
        GOOG-411: Find and connect for free with businesses from your phone
        Images: Search for images on the web
        Maps: View maps and directions
        News: Search thousands of news stories
        Patent Search: Search the full text of US patents
        Product Search: Search for stuff to buy
        Scholar: Search scholarly papers
        Toolbar: Add a search box to your browser
        Trends: Explore past and present search trends
        Videos: Search for videos on the web
        Web Search: Search billions of web pages
        Web Search Features: Find movies, music, stocks, books and more

    Google's free-based business model is only one way it differentiates itself from its competition. There is also a strong focus on the accuracy of search results and the speed with which they are returned to the end-user. Quality function deployment (QFD) is a structured method used to help connect user needs to the design features of a project proposed to address those needs. This method is particularly useful in accounting for needs that are not easily articulated or precisely defined, according to the U.S. Department of Transportation Federal Highway Administration. Because QFD is so customer-driven, Google is in a constant state of change as it attempts to reengineer its search algorithms and other dependent systems so that end-users' requirements are constantly being met. Value engineering is a key example of this: Google is constantly trying to improve all aspects of its products, improve system maintainability, and improve system interoperability. The Bridgefield Group defines value engineering as an organized methodology that identifies and selects the lowest-lifecycle-cost options in design, materials and processes that achieve the desired level of performance, reliability and customer satisfaction. In addition, it seeks to remove unnecessary costs in the above areas and is often a joint effort with cross-functional internal teams and relevant suppliers. Common issues that appear when developing large-scale systems like Google's search applications include modular design of a product and/or service and providing accurate value analysis.
    A design approach that adheres to four fundamental tenets (cohesiveness, encapsulation, self-containment, and high binding) to design a system component as an independently operable unit subject to change is how the Open System Joint Task Force defines modular design. More specifically, M. S. Schmaltz describes modular software design thus: given a large collection of statements strung together in one partition of in-line code, we segment or divide the statements into logical groups called modules. Each module performs one or two tasks, and then passes control to another module. By breaking up the code into "bite-sized chunks", so to speak, we are able to better control the flow of data and control. This is especially true in large software systems. Value analysis is a process to evaluate products and services based on effectiveness, safety, and cost; it involves assessing the quality as well as the cost of a product or service, as defined by the Healthcare Financial Management Association. "Operations Management deals with the design and management of products, processes, services and supply chains. It considers the acquisition, development, and utilization of resources that firms need to deliver the goods and services their clients want." (MIT, 2010) Google, Inc. encourages an open environment among all employees, also known as Googlers. This is reinforced by cross-functional teams, composed of members from multiple departments, assigned to every project, so that every department (such as marketing, finance, and quality assurance) has input on every project. In addition, Google is known for its openness to new ideas regardless of the status or seniority of an employee. In fact, Google allows 20% of an employee's time to be devoted to developing new ideas and/or pet projects. HumTech.com defines a cross-functional team as a collection of people with varied levels of skills and experience brought together to accomplish a task. As the name implies, cross-functional team members come from different organizational units. Cross-functional teams may be permanent or ad hoc. Google's search application product strategy primarily focuses on mass customization. This allows Google to create a base search application and return results to end-users quickly, based on specific parameters and search settings. In addition, Google stores the data that is returned in case others desire the same results with the same customized settings. This allows Google to appear to render search results in virtually real time while allowing complete customization of the search criteria. Greg Vogl, a professor at Uganda Martyrs University, defines mass customization as when a business gives its customers the opportunity to tailor its products or services to the customer's specifications. The IT staff at Google play a key role in ensuring that the search application's product strategy is maintained, simply because the IT staff designs, develops, and maintains all of the company's proprietary applications. In fact, they also maintain all network infrastructure to ensure that it is available to all end-users.
    References:
        http://www.google.com/intl/en/options/
        http://ops.fhwa.dot.gov/freight/publications/ftat_user_guide/sec5.htm
        http://www.bridgefieldgroup.com/bridgefieldgroup/glos9.htm#V
        http://www.acq.osd.mil/osjtf/termsdef.html
        http://www.cise.ufl.edu/~mssz/Pascal-CGS2462/prog-dsn.html
        http://www.hfma.org/publications/business_caring_newsletter/exclusives/Supply+and+Inventory+Terms+Defined.htm
        http://mitsloan.mit.edu/omg/om-definition.php
        http://www.humtech.com/opm/grtl/ols/ols3.cfm
        http://www.gregvogl.net/courses/mis1/glossary.htm

    Read the article

  • Learning content for MCSDs: Web Applications and Windows Store Apps using HTML5

    Recently, I started studying again for various Microsoft certifications. The first candidate on my way to MCSD: Web Applications is Exam 70-480: Programming in HTML5 with JavaScript and CSS3. Motivation to go for a Microsoft exam: I guess this is quite personal, but let me briefly describe my intentions in taking that exam. First, I've been doing web development since the 1990s. Working with HTML, CSS and JavaScript happens almost daily in my workspace. And honestly, I don't only do 'pure' web development; I've already integrated several HTML/CSS/JavaScript frontend UIs into an existing desktop application (written in Visual FoxPro), inclusive of two-way communication and data exchange. Hm, might be an interesting topic for another blog article here... Second, this exam has a very interesting aspect, which is listed at the bottom of the exam's details: Credit Toward Certification. When you pass Exam 70-480: Programming in HTML5 with JavaScript and CSS3, you complete the requirements for the following certification(s): Programming in HTML5 with JavaScript and CSS3 Specialist. Exam 70-480 also counts as credit toward the following certification(s): MCSD: Web Applications and MCSD: Windows Store Apps using HTML5. So, passing one single exam will earn you a specialist certification straight away and opens the path to higher levels of certification. Preparations and learning path: well, due to a newsletter from Microsoft Learning (MSL), I caught interest in the details and learning materials for this particular exam. As of writing this article, there is a promotional voucher code available which enables you to register for this exam for free! Simply register with, or log into your existing account at, Prometric, choose the exam at a testing facility near you and enter the voucher code HTMLJMP (available through 31.03.2013 or while supplies last). Hurry up, there are restrictions... As stated above, I'm already very familiar with web development and the programming flavours involved. But of course, it is always good to freshen up your knowledge and reflect on yourself. Microsoft is putting a lot of effort into attracting all kinds of developers to 'App Development', whether for the Windows 8 Store or the Windows Phone 8 Store; it doesn't really matter. They simply need more apps. This demand for skilled developers also comes with a nice side effect: lots and lots of material to study. During the first couple of hours, I could easily gather high-quality preparation material, again for free! Following is just a small list of starting points. If you have more resources, please drop me a message in the comment section, and I'll be glad to update this article accordingly. Developing HTML5 Apps Jump Start: an accelerated jump-start video course on development of HTML5 apps for Windows 8. There are six modules that are split into two video sessions per module. Very informative and intense course material; this is packed stuff taken from an official preparation course for exam 70-480. Developing Windows Store Apps with HTML5 Jump Start: again, an accelerated preparation video course on Windows 8 apps. There are six modules with two video sessions each which will catapult you to your exam. This is also related to preparation for exam 70-481. Programming Windows 8 Apps with HTML, CSS, and JavaScript: Kraig Brockschmidt delves into the ups and downs of Windows 8 app development over 800+ pages.
    Great eBook to read, study, and practice the samples with; best of all, it's free. codeSHOW(): a Windows 8 HTML/JS project with the express goal of demonstrating simple development concepts for the Windows 8 platform. Code, code and more code... absolutely great stuff to study and practice. Microsoft Virtual Academy: I already wrote about the MVA in a previous article. Well, if you haven't registered yet, now is the time. The list is not complete for sure, but this might keep you busy for at least one or even two weeks of going through the material. Please don't hesitate to add more resources in the comment section. Right now, I'm already through all the videos once, and digging my way through chapter 4 of Kraig's book. Additional material - Pluralsight: apart from those free online resources, I'm also following some courses from the excellent library of Pluralsight. They already have their own section for Windows 8 development, but of course you get companion material about HTML5, CSS and JavaScript in other sections, too: Introduction to Building Windows 8 Applications; Building Windows 8 Applications with JavaScript and HTML; Selling Windows 8 Apps; HTML5 Fundamentals; Using HTML5 and CSS3; HTML5 Advanced Topics; CSS3; etc. Interesting to see that Michael Palermo provides his course material on multiple platforms. Fantastic! You might also pay a visit to his personal blog. Hm, it just came to my mind that Aaron Skonnard of Pluralsight publishes so-called '24 hours Learning Paths' based on courses available in the course library. I'd be interested to see a combination for Windows 8 app development using HTML5, CSS3 and JavaScript in the future. Recommended workspace environment: well, you might have guessed it, but this requires Windows 8, Visual Studio 2012 Express or another flavour, and a valid developer license. Thanks to an MSDN subscription, I'm working on VS 2012 Premium with some additional tools by Telerik. Honestly, the fastest way to get up and running for Windows 8 app development is the source code archive of codeSHOW(). It not only gives you all the source code but contains a couple of SDKs like Bing Maps, Microsoft Advertising, Live ID, and Telerik Windows 8 controls... for free! Hint: get the Windows Phone 8 SDK as well. Don't worry: while you are studying the material for Windows 8, you will be able to leverage this knowledge for development on the phone platform, too. It takes roughly one to two hours to get your workspace and learning environment set up; at least that was my time frame, due to a slow internet connection and an aged spare machine. ;-) Oh, before I forget to mention it: as soon as you're done, go quickly to the Windows Store and search for ClassBrowserPlus. You might not need it ad hoc for your development using HTML5, CSS and JavaScript, but I think it is a great developer's utility that enables you to view the properties, methods and events (along with help text) for all Windows 8 classes. It's always good to look behind the scenes and explore how things are made. Idea: start/join a learning group. The way you learn new things or deepen your knowledge of a certain technology is completely up to your personal preference. Back in my days at university, we used to meet once or twice a week in a small quiet room to exchange our progress, questions and problems we had run into. In general, I recommend that any software craftsman lift their butt and get out to exchange ideas with other developers.
    Personally, I like this approach, as it gives you new points of view and an insight into others' experience with certain techniques and how they managed to solve tricky issues. Just keep it relaxed and not too formal after all, and you might have a good time away from your dull office desk. Give your machine a break, too.

    Read the article

  • Thoughts on Build 2013

    - by D'Arcy Lussier
    Originally posted on: http://geekswithblogs.net/dlussier/archive/2013/06/30/153294.aspx And so another Build conference has come to an end. Below are my thoughts/perspectives on various aspects of the event. I'll do a separate blog post on my thoughts on the Build message for developers. The Good: Moscone Center was a great venue for Build! Easy to get around, easy to get to, and well maintained; it was a very comfortable conference venue. Yeah, the free swag was nice. Build has built up an expectation that attendees will always get something; it'll be interesting to see how Microsoft maintains this expectation over the next few Build events. I still maintain that free swag should never be the main reason to attend an event, and for me this was definitely just an added bonus. I'm planning on trying to use the Surface as a dedicated second device at work for meetings; I'll share my experiences over the next few months. The hackathon event was a great idea, although personally I couldn't justify spending the money on a conference registration just to spend the entire conference coding. Still, the apps that were created were really great, and there was a lot of passion and excitement around the hackathon. I wonder if they couldn't have had the hackathon on the Monday/Tuesday for those that wanted to participate, so they didn't miss any of the actual conference over Wed/Thurs. San Francisco was a great city to host Build. Getting from hotels to the conference center was very easy (well, especially for me; I was only 3 blocks away) and the city itself felt very safe. However, if I never have to fly into SFO again I'll be alright with that! There were delays going into and out of SFO, and both apparently were due to the airport itself. The Bad: Build is one of those oddities on the conference landscape where people will pay to commit to attending an event without knowing anything about the sessions. We got our list of conference sessions when we registered on Tuesday, not before. And even then, we only got titles, not descriptions (those were eventually made available via the conference's mobile application). I get it... they're going to make announcements and they don't want to give anything away through the session titles. But honestly, there wasn't anything in the session titles that I would have considered a surprise. Breakfasts were brutal. High-carb pastries, donuts, and muffins with fruit and hard-boiled eggs does not a conference breakfast make. I can't believe that the difference between a continental breakfast per person and a hot breakfast buffet would have been a huge impact on a conference fee that was already around $2000. The vendor area was anemic. I don't know why Microsoft forces the vendors into cookie-cutter booth areas (this year they were all made of plywood material). At WPC and TechEd, booth areas allow the vendors to be creative with their displays. Not so much for Build. Really odd was the lack of Microsoft's own representation around Bing. In the day 1 keynote, Microsoft made a big deal about Bing as an API, yet there was nobody in the vendor area set up to provide more information or have discussions about the Bing API. The Ugly: Our name badges were NFC-enabled. The purpose of this, beyond the vendors being able to scan your info, wasn't really made clear. An attendee I talked to showed how you could get a reader app on your phone so you can scan other members' cards and collect their contact info - which is a kewl idea; business cards are so 1990's.
    But I was *shocked* at the amount of information that was on our name badges! Here's what's displayed on our name badge:

        - Name
        - Company
        - Twitter Handle

    I'm OK with that. But here's what actually gets read:

        - Name
        - Company
        - Address Used for Registration
        - Phone Number Used for Registration

    So in sharing that info with another attendee, they get way more of my info than just how to find me on Twitter! Microsoft, you need to fix this for the future. If vendors want to collect information on attendees, they should be able to collect an ID from the badge, then get a report with the corresponding records afterwards. My personal information should not be so readily available, and without my knowledge! Final Verdict: maybe it's my older age, maybe it's where I'm at in life with family, maybe it's where I'm at in my career, but when I consider whether a conference experience was valuable I get to the core reasons I attend: opportunities to learn, opportunities to network, opportunities to engage with Microsoft. Opportunities to Learn: the sessions I attended were generally OK, with some really stand-out ones on Day 2. I would love to see Microsoft adopt the Dojo format for a portion of their sessions. Hands-On Labs are dull; lecture-style sessions are great for information sharing. But a guided hands-on coding session (read: Dojo) provides the best of both worlds. Given that all content is publicly available online to everyone (Build attendee or not), the value of attending the conference sessions is decreased. The value, though, is in the discussions that take place in person afterwards, which leads to... Opportunities to Network: I enjoyed getting together with old friends and connecting with Twitter friends in person for the first time. I also had an opportunity to meet total strangers. So from a networking perspective, Build was fantastic! I still think it would have been great to have an area for ad-hoc discussions, where speakers could announce they'd be available for more questions after their sessions, or attendees who wanted to discuss a topic more in depth with other attendees could arrange space. Some people have no problem being outgoing and making these things happen, but others do not, and a structured model is more attractive. Opportunities to Engage with Microsoft: hit and miss on this one. Outside of the vendor area, unless you cornered or reached out to a speaker, there wasn't any defined way to connect with blue badges. And as I mentioned above, Microsoft didn't have full representation in the vendor area (no Bing). All in all, Build was a fun party where I was informed about some new stuff and got some free swag. Was it worth the time away from home and the hit to my PD budget? I'd say somewhat. Build is a great informational conference, but I wouldn't call it a learning conference. Considering that TechEd seems to be moving to more of an IT Pro focus, independent developer conferences seem to be the best value for those looking to learn and not just be informed. With the rapid development cycle Microsoft is embracing, we're already seeing Build happen twice within a 12-month period. If that continues, the value of attending Build in person starts to diminish, especially with so much content available online. If Microsoft wants Build to be a must-attend event in the future, they need to start incorporating aspects of TechEd, past PDCs, and other conferences so that those who want to leave with more than free swag have something to attract them.

    Read the article

  • Introduction to Human Workflow 11g

    - by agiovannetti
    Human Workflow is a component of SOA Suite, just like BPEL, Mediator, Business Rules, etc. The Human Workflow component allows you to incorporate human intervention in a business process. You can use Human Workflow to create a business process that requires a manager to approve purchase orders greater than $10,000, or a business process that handles article reviews in which a group of reviewers needs to vote/approve an article before it gets published. Human Workflow can handle the task assignment and routing as well as the generation of notifications to the participants. There are three common patterns or usages of Human Workflow: 1) Approval scenarios: manage documents and other transactional data through approval chains. For example: expense report approval, vacation approval, hiring approval, etc. 2) Reviews by multiple users or groups: group collaboration and review of documents or proposals. For example, processing a sales quote which is subject to review by multiple people. 3) Case management: workflows around work management or case management. For example, processing a service request. This could be routed to various people who all need to modify the task. It may also incorporate ad hoc routing, which is unknown at design time. SOA 11g Human Workflow includes the following features: assignment and routing of tasks to the correct users or groups; deadlines, escalations, notifications, and other features required for ensuring the timely performance of a task; presentation of tasks to end users through a variety of mechanisms, including a Worklist application; organization, filtering, prioritization and other features required for end users to productively perform their tasks; and reports, reassignments, load balancing and other features required by supervisors and business owners to manage the performance of tasks. Human Workflow Architecture: the Human Workflow component is divided into 3 modules: the service interface, the task definition and the client interface module. The service interface handles the interaction with BPEL and other components. The client interface handles the presentation of task data through clients like the Worklist application, portals and notification channels. The task definition module is in charge of managing the lifecycle of a task. Who should get the task assigned? What should happen next with the task? When must the task be completed? Should the task be escalated? And so on. Stages and Participants: when you create a Human Task you need to specify how the task is assigned and routed. The first step is to define the stages and participants. A stage is just a logical group. A participant can be a user, a group of users or an application role. The participants indicate the type of assignment and routing that will be performed. Stages can be sequential or parallel, and you can combine them to create any usage you require. See the diagram below: Assignment and Routing: there are different ways a task can be assigned and routed: Single Approver: the task is assigned to a single user, group or role. For example, a vacation request is assigned to a manager. If the manager approves or rejects the request, the employee is notified of the decision. If the task is assigned to a group, then once one of the managers acts on it, the task is completed. Parallel: the task is assigned to a set of people who must work in parallel. This is commonly used for voting. For example, a task gets approved once 50% of the participants approve it. You can also set it up to require a unanimous vote.
    Serial: participants must work in sequence. The most common scenario for this is management chain escalation. FYI (For Your Information): the task is assigned to participants who can view it, add comments and attachments, but cannot modify or complete the task. Task Actions: the following is the list of actions that can be performed on a task:

        Claim: if a task is assigned to a group or multiple users, then the task must be claimed first before anyone can act on it.
        Escalate: if the participant is not able to complete a task, he/she can escalate it. The task is reassigned to his/her manager (up one level in a hierarchy).
        Pushback: the task is sent back to the previous assignee.
        Reassign: if the participant is a manager, he/she can delegate a task to his/her reports.
        Release: if a task is assigned to a group or multiple users, it can be released if the user who claimed the task cannot complete it. Any of the other assignees can claim and complete the task.
        Request Information and Submit Information: used when the participant needs to supply more information or to request more information from the task creator or any of the previous assignees.
        Suspend and Resume: if a task is not relevant, it can be suspended. A suspension is indefinite; it does not expire until Resume is used to resume working on the task.
        Withdraw: if the creator of a task does not want to continue with it, for example he wants to cancel a vacation request, he can withdraw the task. The business process determines what happens next.
        Renew: if a task is about to expire, the participant can renew it. The task expiration date is extended one week.

    Notifications: Human Workflow provides a mechanism for sending notifications to participants to alert them of changes to a task. Notifications can be sent via email, telephone voice message, instant messaging (IM) or short message service (SMS). Notifications can be sent when the task status changes to any of the following:

        Assigned/renewed/delegated/reassigned/escalated
        Completed
        Error
        Expired
        Request Info
        Resume
        Suspended
        Added/updated comments and/or attachments
        Updated outcome
        Withdraw
        Other actions (e.g. acquiring a task)

    Here is an example of an email notification: Worklist Application: the Oracle BPM Worklist application is the default user interface included in SOA Suite. It allows users to access and act on tasks that have been assigned to them. For example, from the Worklist application, a loan agent can review loan applications or a manager can approve employee vacation requests. Through the Worklist application, users can: perform authorized actions on tasks, acquire and check out shared tasks, define personal to-do tasks and define subtasks; filter the task view based on various criteria; work with standard work queues, such as high-priority tasks, tasks due soon and so on (work queues allow users to create a custom view to group a subset of tasks in the worklist, for example high-priority tasks, tasks due in 24 hours, expense approval tasks and more); define custom work queues; gain proxy access to part of another user's tasks; define custom vacation rules and delegation rules; enable group owners to define task-dispatching rules for shared tasks; collect a complete workflow history and audit trail; use digital signatures for tasks; and run reports like unattended tasks, task productivity, etc. Here is a screenshot of what the Worklist application looks like. On the right-hand side you can see the tasks that have been assigned to the user and the task's detail.
    References:
        Introduction to SOA Suite 11g Human Workflow Webcast
        Note 1452937.2 Human Workflow Information Center
        Using the Human Workflow Service Component 11.1.1.6
        Human Workflow Samples
        Human Workflow APIs Java Docs

    Read the article

  • Right-Time Retail Part 1

    - by David Dorf
    This is the first in a three-part series. Right-Time Revolution: technology enables some amazing feats in retail. I can order flowers for my wife while flying 30,000 feet in the air. I can order my groceries in the subway and have them delivered later that day. I can even see how clothes look on me without setting foot in a store. Who knew that a TV, a diamond necklace, or even a car would someday be as easy to purchase as a candy bar? Can technology make a mattress an impulse item? You wake up and your back is hurting, so you roll over and grab your iPad, and a new mattress is delivered the next day. Behind the scenes, many processes are being choreographed to make the sale happen. This includes moving data between systems with the least amount of friction, which in some cases is near real-time. But real-time isn't appropriate for all the integrations. Think about what a completely real-time retailer would look like. A consumer grabs toothpaste off the shelf, and all systems are immediately notified so that the backroom clerk comes running out and pushes the consumer aside so he can replace the toothpaste on the shelf. Such a system is not only cost-prohibitive, but it's also very inefficient and ineffectual. Retailers must balance the realities of people, processes, and systems to find the right speed of execution. That's what "right-time retail" means. Retailers used to sell during the day and count the money and restock at night, but global expansion and the Web have complicated that simplistic viewpoint. Our 24-hour society demands not only access but also speed, which constantly pushes the boundaries of our IT systems. In the last twenty years, there have been three major technology advancements that have moved us closer to real-time systems. Networking is the first technology that drove the real-time trend. As systems became connected, it became easier to move data between them. In retail, we no longer had to mail the daily business report back to corporate each day; the dial-up modem could transfer the data. That was soon replaced with trickle-polling, when sales transactions were occasionally sent from stores to corporate throughout the day, often through VSAT. Then we got terrestrial networks like DSL and Ethernet that allowed a constant stream of data between stores and corporate. When corporate could see the sales transactions coming from stores, it could better plan for replenishment and promotions. That drove the need for speed into the supply chain and merchandising, but for many years those systems were stymied by the huge volumes of data.
    Nordstrom has 150 million SKU/store combinations when planning (RPAS); The Gap generates 110 million price changes during end-of-season (RPM); Argos executes 1.78 billion calculations each day for replenishment planning (AIP). These areas are now being alleviated by the second technology: storage. The typical laptop disk drive runs at 5,400 rpm, with PCs stepping up to 7,200 rpm and servers hitting 15,000 rpm. But the platters can only spin so fast, so to squeeze out more performance we've had to rely on things like disk striping. Then solid-state drives (SSDs) were introduced, and prices continue to drop. (Augmenting your hard drive with an SSD is the single best PC upgrade these days.) RAM continues to be expensive, but compressing data in memory has allowed more efficient use. So a few years back, Oracle decided to build a box that incorporated all these advancements to move us closer to real-time. This family of products, often categorized as engineered systems, combines the hardware and software so that they work together to provide better performance. How much better? If Exadata powered a 747, you'd go from New York to Paris in 42 minutes, and it would carry 5,000 passengers. If Exadata powered baseball, games would last only 18 minutes and Boston's Fenway would hold 370,000 fans. The Exa family enables processing more data in less time. So with faster networks and storage, that brings us to the third and final ingredient. If we continue to process data in traditional ways, we won't be able to take advantage of the faster networks and storage. Enter what Harvard calls "The Sexiest Job of the 21st Century": the data scientist. New technologies like the Hadoop-powered Oracle Big Data Appliance, Oracle Advanced Analytics, and Oracle Endeca Information Discovery change the way in which we organize data. These technologies allow us to extract actionable information from raw data at incredible speeds, often ad hoc. So the foundation to support the real-time enterprise exists, but how does a retailer begin to take advantage? The most visible way is through real-time marketing, but I'll save that for part 3 and instead begin with improved integrations for the assets you already have, in part 2.

    Read the article

  • RDS installation failure on 2012 R2 Server Core VM in Hyper-V Server

    - by Giles
    I'm currently installing a test bed for my firm's infrastructure replacement. 10 or so Windows/Linux servers will be replaced by 2 physical servers running Hyper-V Server. All services (DC, RDS, SQL) will be on Windows 2012 R2 Server Core VMs, Exchange on Server 2012 R2 with a GUI, and the rest are things like Elastix, MailArchiver, etc., which aren't part of the equation thus far. I have installed Hyper-V Server on a test box and successfully got two virtual DCs running, SQL 2014 running, and a Windows 8.1 VM which I use for the RSAT tools. When trying to install RDS (the old-fashioned session-based kind, not the newer VDI style), I get a failed installation due to the server not being able to reboot. A couple of articles have said not to do it locally, so I've moved on. Sitting at the PowerShell prompt on the Domain Controller or SQL server (both Server Core), I run the following commands:

        Import-Module RemoteDesktop
        New-SessionDeployment -ConnectionBroker "AlstersTS.Alsters.local" -SessionHost "AlstersTS.Alsters.local"

    The installation begins, carries on for 2 or 3 minutes, then I receive the following error message:

        New-SessionDeployment : Validation failed for the "RD Connection Broker" parameter. AlstersTS.Alsters.local Unable to connect to the server by using Windows PowerShell remoting. Verify that you can connect to the server.
        At line:1 char:1
        + New-SessionDeployment -ConnectionBroker "AlstersTS.Alsters.local" -SessionHost " ...
        + CategoryInfo : NotSpecified: (:) [Write-Error], WriteErrorException
        + FullyQualifiedErrorId : Microsoft.PowerShell.Commands.WriteErrorException,New-SessionDeployment

    So far, I have:
    - Triple, triple checked the syntax
    - Tried various other commands, and a script to accomplish the same task
    - Checked that DNS is functioning as it should
    - Checked, to the best of my knowledge, that AD is working as it should
    - Checked that the Network Service account has the needed permissions
    - Created another VM and placed the two roles on different servers
    - Deleted all VMs and started again with a new domain name (lather, rinse, repeat)
    - Performed the whole installation on a second physical box running Hyper-V Server
    - Pleaded with it

    Interestingly, if I perform the installation via the GUI, the thing just works! Now I know I could convert this to a Server Core installation afterwards, but that wouldn't teach me what was wrong in the first instance. I've been through probably 10 pages of various Google searches, each page getting a little less relevant. The closest matches seem to have good information, but none of it has been the fix for my set-up. As a side note, I expected to be able to "tee" or "out-file" the error message into a text file, but couldn't get that to work either, so I've typed the error message in manually. Chaps, any suggestions, from the glaringly obvious to the long-winded and complex? Thanks!
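    One more check I'm planning, since the error points at PowerShell remoting rather than at RDS itself, is to rule out WinRM. A minimal sketch of what I have in mind (the host name is ours from above; treat the commands as a starting point, not a guaranteed fix):

        # Verify WinRM answers on the intended connection broker
        Test-WSMan -ComputerName "AlstersTS.Alsters.local"

        # Confirm a remote session works end-to-end
        Invoke-Command -ComputerName "AlstersTS.Alsters.local" -ScriptBlock { $env:COMPUTERNAME }

        # If either fails, re-enable remoting on the target (it's on by default
        # on Server Core, but group policy or firewall rules can break it)
        Enable-PSRemoting -Force

    If Test-WSMan succeeds by IP address but not by name, that would point back at DNS rather than at the RDS role.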

    Read the article

  • PXE-E32 TFTP Open Timeout While Attempting to PXE Boot from Windows Deployment Services

    - by bschafer
    I'm running Windows Deployment Services on Windows Server 2008 R2 on top of an ESX 4.0 box. This is the only function of this VM instance, although it had previously functioned as an AD Domain Controller. My DHCP server is running on our primary Domain Controller, which is also Server 2008 R2 but running on metal. Everything was working perfectly until we recently had our backup generator fail during a power outage, causing all of our servers and networking equipment to lose power for a period of time. When we brought all of our equipment back up, everything worked as expected except for WDS. Our network is split up into several different VLANs, and depending on which VLAN the client computer is on, it behaves differently when attempting to PXE boot into WDS. Our servers are located on the 10.55.x.x VLAN, which, due to its nature, has no DHCP server active in it. The first computer we plugged in happened to be on the 10.99.x.x VLAN, which is supposed to be reserved for network management devices (i.e., switches), but we've been using it occasionally for other things. That computer gave us PXE-E11 ARP Timeout errors. When we moved to a different computer on the 10.19.x.x VLAN (for general-purpose use), it finally gets an IP from DHCP, but it presents us with a very stumping PXE-E32 TFTP Open Timeout error. Before the power outage, it didn't matter which VLAN a device was on; it would PXE boot and image just fine. I've made no changes to anything server-side; my WDS and DHCP servers are configured exactly the same way as before the power outage. I've tried several different computers, including different models. All of this, combined with the quirky behavior depending on the VLAN, makes me think something went wrong in one or more of our switches, probably because of the power outage. Unfortunately, I'm no network guy, and I know very little about how to configure our switches properly. Is this an issue with the switches? If so, how can I fix it? Is there some magical option I'm not aware of? Does anybody out there have any hunches? I've pretty much exhausted my ideas. Our main switch is an HP ProCurve 5406. We also have 3x HP ProCurve 4208 switches. The ESX server is an HP ProLiant DL380 G6. The WDS VM is currently using the VMXNET3 network adapter, but we've also tried the E1000 adapter.
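    If it is the switches, my rough understanding is that PXE depends on DHCP broadcasts being relayed from the client VLANs over to the server VLAN, so a lost "ip helper-address" entry would explain both errors. A hypothetical ProCurve snippet of what I imagine we'd need (the VLAN ID and both addresses are placeholders, not our real config):

        vlan 19
           ip helper-address 10.55.0.5
           ip helper-address 10.55.0.6
           exit

    Here 10.55.0.5 stands in for the DHCP server and 10.55.0.6 for the WDS box; relaying to both is supposed to let clients receive the PXE boot server and boot file name along with their lease. Can anyone confirm whether that's the right place to look after a power event wiped or reverted a switch config?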

    Read the article

  • Microsoft Dynamics CRM Install

    - by Pino
    Trying to install CRM 4.0 on Windows Small Business Server. We get through the install process (setting up the database, website, AD, etc.), but when we click Install, the following error is dumped to the log file. Can anyone point us in the right direction?

        13:49:07| Info| Win32Exception: 1635
        13:49:07| Error| Failed to revoke temporary database access. System.Data.SqlClient.SqlException: Cannot open database "MSCRM_CONFIG" requested by the login. The login failed. Login failed for user 'SCHIP\administrator'.
           at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)
           at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
           at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
           at System.Data.SqlClient.SqlInternalConnectionTds.CompleteLogin(Boolean enlistOK)
           at System.Data.SqlClient.SqlInternalConnectionTds.AttemptOneLogin(ServerInfo serverInfo, String newPassword, Boolean ignoreSniOpenTimeout, Int64 timerExpire, SqlConnection owningObject)
           at System.Data.SqlClient.SqlInternalConnectionTds.LoginNoFailover(String host, String newPassword, Boolean redirectedUserInstance, SqlConnection owningObject, SqlConnectionString connectionOptions, Int64 timerStart)
           at System.Data.SqlClient.SqlInternalConnectionTds.OpenLoginEnlist(SqlConnection owningObject, SqlConnectionString connectionOptions, String newPassword, Boolean redirectedUserInstance)
           at System.Data.SqlClient.SqlInternalConnectionTds..ctor(DbConnectionPoolIdentity identity, SqlConnectionString connectionOptions, Object providerInfo, String newPassword, SqlConnection owningObject, Boolean redirectedUserInstance)
           at System.Data.SqlClient.SqlConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningConnection)
           at System.Data.ProviderBase.DbConnectionFactory.CreateNonPooledConnection(DbConnection owningConnection, DbConnectionPoolGroup poolGroup)
           at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
           at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
           at System.Data.SqlClient.SqlConnection.Open()
           at Microsoft.Crm.CrmDbConnection.Open()
           at Microsoft.Crm.Setup.Database.SharedDatabaseUtility.DeleteDBUser(String sqlServerName, String databaseName, String user, CrmDBConnectionType connectionType)
           at Microsoft.Crm.Setup.Server.RevokeConfigDBDatabaseAccessAction.Do(IDictionary parameters)
        13:49:07| Error| HResult=80004005 Install exception. System.Exception: Action Microsoft.Crm.Setup.Server.MsiInstallServerAction failed. ---> System.ComponentModel.Win32Exception: This update package could not be opened. Verify that the update package exists and that you can access it, or contact the application vendor to verify that this is a valid Windows Installer update package
           at Microsoft.Crm.Setup.Common.Utility.MsiUtility.InstallProduct(String packagePath, String commandLine)
           at Microsoft.Crm.Setup.Server.MsiInstallServerAction.Do(IDictionary parameters)
           at Microsoft.Crm.Setup.Common.Action.ExecuteAction(Action action, IDictionary parameters, Boolean undo)
           --- End of inner exception stack trace ---
           at Microsoft.Crm.Setup.Common.Action.ExecuteAction(Action action, IDictionary parameters, Boolean undo)
           at Microsoft.Crm.Setup.Common.Installer.Install(IDictionary stateSaver)
           at Microsoft.Crm.Setup.Common.ComposedInstaller.InternalInstall(IDictionary stateSaver)
           at Microsoft.Crm.Setup.Common.ComposedInstaller.Install(IDictionary stateSaver)
           at Microsoft.Crm.Setup.Server.ServerSetup.Install(IDictionary data)
           at Microsoft.Crm.Setup.Server.ServerSetup.Run()
        13:49:07| Info| Microsoft Dynamics CRM Server install Failed.
        13:49:07| Info| Microsoft Dynamics CRM Server Setup did not complete successfully. Action Microsoft.Crm.Setup.Server.MsiInstallServerAction failed. This update package could not be opened. Verify that the update package exists and that you can access it, or contact the application vendor to verify that this is a valid Windows Installer update package
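    In case it helps narrow things down, we were thinking of checking the SQL side separately with something like the following, run as the installing account (a rough sketch; MSCRM_CONFIG may not even exist yet at the point the installer fails):

        -- Which login actually connected?
        SELECT SUSER_SNAME() AS connected_login;

        -- Does that login map to a user in the config database?
        USE [MSCRM_CONFIG];
        EXEC sp_helpuser;

    The Win32Exception 1635 at the top ("This update package could not be opened") also makes us wonder whether the SQL failure is just a symptom of a broken installer package, so pointers on either front are welcome.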

    Read the article

  • Tomcat SPNEGO authentication against Active Directory not working.

    - by Michael
    I'm trying to authenticate against AD using the http://spnego.sourceforge.net component with Tomcat. I've created my SPNs ("setspn.exe -A HTTP/servername SVCTomcat" and "setspn.exe -A HTTP/servername.fqdn.net SVCTomcat"), created my krb5.conf and login.conf files, and set up the filter in web.xml, i.e.:

        <filter-name>SpnegoHttpFilter</filter-name>
        <filter-class>net.sourceforge.spnego.SpnegoHttpFilter</filter-class>
        <param-name>spnego.allow.unsecure.basic</param-name> <param-value>false</param-value>
        <param-name>spnego.login.client.module</param-name> <param-value>spnego-client</param-value>
        <param-name>spnego.krb5.conf</param-name> <param-value>krb5.conf</param-value>
        <param-name>spnego.login.conf</param-name> <param-value>login.conf</param-value>
        <param-name>spnego.preauth.username</param-name> <param-value>SVCTomcat</param-value>
        <param-name>spnego.preauth.password</param-name> <param-value>Pasword</param-value>
        <param-name>spnego.login.server.module</param-name> <param-value>spnego-server</param-value>
        <param-name>spnego.prompt.ntlm</param-name> <param-value>false</param-value>
        <param-name>spnego.logger.level</param-name> <param-value>2</param-value>

    Note: I've stripped the extraneous tags from this, so it's not the actual XML. When I go to a page protected by this filter, I get this in the catalina log file:

        25-Mar-2010 12:41:26 org.apache.catalina.startup.Catalina start
        INFO: Server startup in 4615 ms
        25-Mar-2010 12:41:47 net.sourceforge.spnego.SpnegoHttpFilter doFilter
        FINE: principal=SYSTEM@TESTDOMAIN

    And the hello_spnego.jsp example on the website just reports the name of the user Tomcat is running as (SYSTEM), not the user I'm connecting with. It seems the author stopped halfway through his debugging page, so I've no areas to look in other than to triple-check my config. Any ideas?
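    For reference, my krb5.conf follows the layout from the component's documentation and looks roughly like this (the realm and KDC names here are placeholders, not our real ones):

        [libdefaults]
            default_realm = TESTDOMAIN.LOCAL
            default_tkt_enctypes = aes128-cts rc4-hmac
            default_tgs_enctypes = aes128-cts rc4-hmac
            permitted_enctypes = aes128-cts rc4-hmac

        [realms]
            TESTDOMAIN.LOCAL = {
                kdc = dc1.testdomain.local
                default_domain = testdomain.local
            }

        [domain_realm]
            .testdomain.local = TESTDOMAIN.LOCAL

    I mention it because if the realm or KDC entries were wrong I'd expect a different failure than seeing the SYSTEM principal, but I'd be happy to be corrected on that.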

    Read the article

  • What is the difference between Anycast and GeoDNS / GeoIP wrt HA?

    - by Riyad
    Based on the Wikipedia description of Anycast, it includes both the distribution of a domain-name-to-many-IP mapping across many DNS servers and replying to clients with the most geographically close (or fastest) server. In the context of a globally distributed, highly available site like google.com (or any CDN service with many global edge locations), these sound like the two key features one would need. DNS services like Amazon's Route 53, EasyDNS, and DNSMadeEasy all advertise themselves as Anycast-enabled networks. Therefore my assumption is that each of these DNS services transparently offers me those two killer features: multi-IP-to-domain mapping AND routing clients to the closest node. However, each of these services seems to separate out these two functionalities, referring to the second one (routing clients to the closest node) as "GeoDNS", "GeoIP", or "Global Traffic Director", and charging extra for it. If a core tenet of an Anycast-capable system is to already do this, why is this functionality being earmarked as an extra feature? What is this "GeoDNS" feature doing that a standard Anycast DNS service won't do (according to the definition of Anycast from Wikipedia; I understand what is being advertised, just not why it isn't implied already)? I get extra confused when a DNS service like Route 53, which doesn't support this nebulous "GeoDNS" feature, lists functionality like: "Fast – Using a global anycast network of DNS servers around the world, Route 53 is designed to automatically route your users to the optimal location depending on network conditions. As a result, the service offers low query latency for your end users, as well as low update latency for your DNS record management needs." ... which sounds exactly like what GeoDNS is intended to do, even though geographically directing clients is something they explicitly don't yet support. Ultimately I am looking for the two following features from a DNS provider:
    1. Map multiple IP addresses to a single domain name (as google.com, amazon.com, etc. do).
    2. Utilize a DNS service that will respond to client requests for that domain with the IP address of the server nearest to the requester.
    As mentioned, it seems like this should all be part of an "Anycast" DNS service (which all of these are), but the features and marketing I see from them suggest otherwise, making me think I need to learn a bit more about how DNS works before making a deployment choice. Thanks in advance for any clarifications.
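    For what it's worth, the first feature is easy to observe from any client; here's a small Python sketch I've been using to see the multiple A records a name resolves to (nothing provider-specific, just the standard library):

        import socket

        # Ask the local resolver for every IPv4 address it returns for a name.
        # Running this from different networks is a crude way to spot GeoDNS-style
        # behavior: the answers (or their ordering) change with the vantage point.
        infos = socket.getaddrinfo("google.com", None,
                                   family=socket.AF_INET,
                                   type=socket.SOCK_STREAM)
        for family, socktype, proto, canonname, sockaddr in infos:
            print(sockaddr[0])

    It's the second feature, the "nearest server" routing, that I can't tell whether Anycast alone gives me.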

    Read the article

  • How to setup Hadoop cluster so that it accepts mapreduce jobs from remote computers?

    - by drasto
    There is a computer I use for Hadoop map/reduce testing. This computer runs 4 Linux virtual machines (using Oracle VirtualBox). Each of them has Cloudera's Hadoop distribution (CDH3u4) installed and serves as a node of the Hadoop cluster. One of those 4 nodes is the master node running the namenode and jobtracker; the others are slave nodes. Normally I use this cluster from the local network for testing. However, when I try to access it from another network, I cannot send any jobs to it. The computer running the Hadoop cluster has a public IP and can be reached over the internet for other services. For example, I am able to reach the HDFS (namenode) administration site and the map/reduce (jobtracker) administration site (on ports 50070 and 50030 respectively) from the remote network. It is also possible to use Hue. Ports 8020 and 8021 are both allowed. What is blocking my map/reduce job submissions from reaching the cluster? Is there some setting I must change first in order to be able to submit map/reduce jobs remotely? Here is my mapred-site.xml file:

        <configuration>
          <property>
            <name>mapred.job.tracker</name>
            <value>master:8021</value>
          </property>
          <!-- Enable Hue plugins -->
          <property>
            <name>mapred.jobtracker.plugins</name>
            <value>org.apache.hadoop.thriftfs.ThriftJobTrackerPlugin</value>
            <description>Comma-separated list of jobtracker plug-ins to be activated.</description>
          </property>
          <property>
            <name>jobtracker.thrift.address</name>
            <value>0.0.0.0:9290</value>
          </property>
        </configuration>

    And this is in the /etc/hosts file:

        192.168.1.15 master
        192.168.1.14 slave1
        192.168.1.13 slave2
        192.168.1.9  slave3
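    My own suspicion is that "master" only resolves inside the LAN (to 192.168.1.15, per /etc/hosts), so a remote client that copies this config gets a jobtracker address it can never reach. If that's right, I assume the client-side mapred-site.xml would need something like the following, with the placeholder host name below replaced by the cluster's public name or IP (a sketch, not tested yet):

        <configuration>
          <property>
            <name>mapred.job.tracker</name>
            <!-- must be a name/IP the remote client can actually reach,
                 not the cluster-internal "master" alias -->
            <value>public-hostname.example.com:8021</value>
          </property>
        </configuration>

    Would that alone be enough, or does the jobtracker also need to bind to the public interface for remote submissions to work?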

    Read the article

< Previous Page | 173 174 175 176 177 178 179 180 181 182 183 184  | Next Page >