Search Results

Search found 35275 results on 1411 pages for 'white list'.

  • SQLAuthority News – Statistics Used by the Query Optimizer in Microsoft SQL Server 2008 – Microsoft Whitepaper

    - by pinaldave
    I recently presented a session on Statistics and Best Practices at Virtual Tech Days on Nov 22, 2010. The session was very popular and I got many questions right after it. The most common question I received was where everybody can get further information. I am very happy that my session created some curiosity about one of the most important features of SQL Server. Statistics are the heart of SQL Server. Microsoft has published a white paper on how statistics are used by the Query Optimizer. Here is the abstract of that white paper from Microsoft.

    Statistics Used by the Query Optimizer in Microsoft SQL Server 2008
    Writers: Eric N. Hanson and Yavor Angelov

    Microsoft SQL Server 2008 collects statistical information about indexes and column data stored in the database. These statistics are used by the SQL Server query optimizer to choose the most efficient plan for retrieving or updating data. This paper describes what data is collected, where it is stored, and which commands create, update, and delete statistics. By default, SQL Server 2008 also creates and updates statistics automatically, when such an operation is considered to be useful. This paper also outlines how these defaults can be changed at different levels (column, table, and database). In addition, it presents how certain query language features, such as Transact-SQL variables, interact with the optimizer's use of statistics, and it provides guidance for using these features when writing queries so you can obtain good query performance.

    Link to white paper: Statistics Used by the Query Optimizer in Microsoft SQL Server 2008

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Pinal Dave, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL White Papers, SQLAuthority News, T SQL, Technology
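
    As a quick companion to the paper, the commands it walks through for creating, updating, inspecting, and dropping statistics look roughly like the following T-SQL sketch. The table, column, statistics, and database names are illustrative only, not taken from the paper.

        -- Minimal sketch: the statistics DDL the white paper discusses.
        -- Table, column, statistics, and database names here are made up.
        CREATE STATISTICS st_SalesOrder_CustomerID
            ON Sales.SalesOrderHeader (CustomerID)
            WITH FULLSCAN;

        -- Refresh a single statistics object, or everything on the table
        UPDATE STATISTICS Sales.SalesOrderHeader st_SalesOrder_CustomerID;
        UPDATE STATISTICS Sales.SalesOrderHeader;

        -- Inspect the histogram and density information the optimizer uses
        DBCC SHOW_STATISTICS ('Sales.SalesOrderHeader', st_SalesOrder_CustomerID);

        -- The automatic create/update behavior is a database-level setting
        ALTER DATABASE AdventureWorks SET AUTO_CREATE_STATISTICS ON;
        ALTER DATABASE AdventureWorks SET AUTO_UPDATE_STATISTICS ON;

        -- Remove a statistics object
        DROP STATISTICS Sales.SalesOrderHeader.st_SalesOrder_CustomerID;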

  • SQLAuthority News – Whitepaper – SQL Azure vs. SQL Server

    - by pinaldave
    SQL Server and SQL Azure are two Microsoft products that go almost hand in hand. There are plenty of misconceptions about SQL Azure. I have seen enough developers not planning for SQL Azure because they are not sure what exactly they are getting into. Some are confused, thinking Azure is not powerful enough. I disagree, and strongly urge all of you to read the following white paper written and published by Microsoft.

    SQL Azure vs. SQL Server, by Dinakar Nethi and Niraj Nagrani

    SQL Azure Database is a cloud-based relational database service from Microsoft. SQL Azure provides relational database functionality as a utility service. Cloud-based database solutions such as SQL Azure can provide many benefits, including rapid provisioning, cost-effective scalability, high availability, and reduced management overhead. This paper compares SQL Azure Database with SQL Server in terms of logical administration vs. physical administration, provisioning, Transact-SQL support, data storage, and SSIS, along with other features and capabilities.

    The contents of the white paper are as follows:

        Similarities and Differences
            Logical Administration vs. Physical Administration
            Provisioning
            Transact-SQL Support
            Features and Types
        Key Benefits of the Service
            Self-Managing
            High Availability
            Scalability
            Familiar Development Model
            Relational Data Model

    The above summary text is taken from the white paper itself.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, SQLAuthority News, T SQL, Technology Tagged: SQL Azure

  • SQLAuthority News – Download Whitepaper – A Case Study on “Hekaton” against RPM – SQL Server 2014 CTP1

    - by Pinal Dave
    In this new world of social media, apps, and mobile devices, we are all getting impatient. Automatic updates have spoiled a few of our habits. When a new feature is released, everybody wants to adopt it immediately and start using it. Though this is true in the world of apps and smartphones, it is still not possible in the developer's world. When new features arrive, before we start using them we need to spend quite a lot of time understanding and testing them. Once we are sold on a feature, we recommend it to our manager, and eventually the entire organization makes a decision on upgrading to use it.

    Similarly, when the new In-Memory OLTP feature was announced, pretty much every SQL Server DBA wanted to implement it on their server. Though implementing the feature is not hard, it is not that easy either. One has to do proper research about their own environment and workload before implementing this feature. Microsoft has recently released a case study on the In-Memory OLTP feature. Here is the abstract from the white paper itself:

    I/O latch can cause session delays that impact application performance. This white paper describes the procedures and common I/O latch issues when migrating to Hekaton in SQL Server 2014. It also includes challenges that occurred during the migration and the performance analysis at different stages.

    If you are going to implement an In-Memory OLTP database, this is a good case study to refer to. Download the white paper from here.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, SQLAuthority News, T SQL
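
    For context, opting a table into In-Memory OLTP in SQL Server 2014 looks roughly like the sketch below. The database name, file path, table, and bucket count are illustrative only, and CTP1 syntax may differ slightly from the released version.

        -- Illustrative sketch only: database, paths, and table are made up.
        -- In-Memory OLTP first needs a memory-optimized filegroup ...
        ALTER DATABASE SalesDemo
            ADD FILEGROUP SalesDemo_mod CONTAINS MEMORY_OPTIMIZED_DATA;
        ALTER DATABASE SalesDemo
            ADD FILE (NAME = 'SalesDemo_mod', FILENAME = 'C:\Data\SalesDemo_mod')
            TO FILEGROUP SalesDemo_mod;

        -- ... then individual tables can opt in to the In-Memory OLTP engine
        CREATE TABLE dbo.SessionState
        (
            SessionId   INT NOT NULL
                PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
            SessionData VARBINARY(4000),
            LastUpdated DATETIME2 NOT NULL
        )
        WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);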

  • SQLAuthority News – Download Whitepaper – SQL Server 2008 R2 Analysis Services Operations Guide

    - by pinaldave
    SQL Server Analysis Services (SSAS) has always been an interesting subject for research. Analysis Services cubes are a very powerful tool in the hands of the business intelligence (BI) developer. They provide an easy way to expose even large data models directly to business users. Microsoft has published a very informative white paper, the Analysis Services Operations Guide, authored by Thomas Kejser, John Sirmon, and Denny Lee.

    In this guide you will find information on how to test and run Microsoft SQL Server Analysis Services in SQL Server 2005, SQL Server 2008, and SQL Server 2008 R2 in a production environment. The focus of this guide is how you can test, monitor, diagnose, and remove production issues on even the largest scaled cubes. This paper also provides guidance on how to configure the server for the best possible performance. It is the goal of this guide to make your operations processes as painless as possible, and to have you run with the best possible performance without any additional development effort to your deployed cubes. In this guide, you will learn how to get the best out of your existing data model by making changes transparent to the data model and by making configuration changes that improve the user experience of the cube.

    Download SQL Server 2008 R2 Analysis Services Operations Guide

    Note: Abstract taken from the white paper.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, SQLAuthority News, T SQL, Technology

  • Convert List of one type to Array of another type using Dozer

    - by aheu
    I'm wondering how to convert a List of one type to an array of another type in Java using Dozer. The two types have the same property names and types. For example, consider these two classes:

        public class A {
            private String test = null;
            public String getTest() { return this.test; }
            public void setTest(String test) { this.test = test; }
        }

        public class B {
            private String test = null;
            public String getTest() { return this.test; }
            public void setTest(String test) { this.test = test; }
        }

    I've tried this with no luck:

        List<A> listOfA = getListofAObjects();
        Mapper mapper = DozerBeanMapperSingletonWrapper.getInstance();
        B[] bs = mapper.map(listOfA, B[].class);

    I've also tried using the CollectionUtils class:

        CollectionUtils.convertListToArray(listOfA, B.class)

    Neither works for me; can anyone tell me what I am doing wrong? The mapper.map call works fine if I create two wrapper classes, one containing a List<A> and the other a B[]. See below:

        public class C {
            private List<A> items = null;
            public List<A> getItems() { return this.items; }
            public void setItems(List<A> items) { this.items = items; }
        }

        public class D {
            private B[] items = null;
            public B[] getItems() { return this.items; }
            public void setItems(B[] items) { this.items = items; }
        }

    This works, oddly enough:

        List<A> listOfA = getListofAObjects();
        C c = new C();
        c.setItems(listOfA);
        Mapper mapper = DozerBeanMapperSingletonWrapper.getInstance();
        D d = mapper.map(c, D.class);
        B[] bs = d.getItems();

    How do I do what I want to do without using the wrapper classes (C and D)? There has got to be an easier way. Thanks!
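
    Dozer handles collections it finds as mapped fields, but for a top-level List a common workaround is simply to map each element yourself and collect the results into an array. The sketch below assumes the org.dozer packages from Dozer 5.x and reuses the question's A and B classes; it is an illustration, not the poster's code.

        // Minimal sketch: map each element of the list individually and build the
        // target array by hand, avoiding the wrapper classes C and D.
        // Assumes the Dozer 5.x Mapper API (org.dozer.*).
        import java.util.ArrayList;
        import java.util.List;
        import org.dozer.DozerBeanMapperSingletonWrapper;
        import org.dozer.Mapper;

        public class ListToArrayMapping {
            public static B[] mapListToArray(List<A> listOfA) {
                Mapper mapper = DozerBeanMapperSingletonWrapper.getInstance();
                List<B> mapped = new ArrayList<B>();
                for (A a : listOfA) {
                    mapped.add(mapper.map(a, B.class));   // per-element mapping
                }
                return mapped.toArray(new B[mapped.size()]);
            }
        }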

  • No matching function error when inserting into a list in C++

    - by Josh Curren
    I am getting an error when I try to insert an item into a list (in C++). The error is that there is no matching function for the call to insert(). I also tried push_front() but got the same error. Here is the error message:

        main.cpp:38: error: no matching function for call to ‘std::list<Salesperson, std::allocator<Salesperson> >::insert(Salesperson&)’
        /usr/lib/gcc/i686-pc-cygwin/4.3.4/include/c++/bits/list.tcc:99: note: candidates are: std::_List_iterator<_Tp> std::list<_Tp, _Alloc>::insert(std::_List_iterator<_Tp>, const _Tp&) [with _Tp = Salesperson, _Alloc = std::allocator<Salesperson>]
        /usr/lib/gcc/i686-pc-cygwin/4.3.4/include/c++/bits/stl_list.h:961: note: void std::list<_Tp, _Alloc>::insert(std::_List_iterator<_Tp>, size_t, const _Tp&) [with _Tp = Salesperson, _Alloc = std::allocator<Salesperson>]

    Here is the code:

        #include <stdlib.h>
        #include <iostream>
        #include <fstream>
        #include <string>
        #include <list>
        #include "Salesperson.h"
        #include "Salesperson.cpp"
        #include "OrderedList.h"
        #include "OrderedList.cpp"

        using namespace std;

        int main(int argc, char** argv) {
            cout << "\n------------ Asn 8 - Sales Report ------------" << endl;

            list<Salesperson> s;
            int id;
            string fName, lName;
            int numOfSales;
            string year;

            std::ifstream input("Sales.txt");
            while( !std::getline(input, year, ',').eof() ) {
                input >> id;
                input >> lName;
                input >> fName;
                input >> numOfSales;

                Salesperson sp = Salesperson( id, fName, lName );
                s.insert( sp );   // THIS IS LINE 38 **************************
                for( int i = 0; i < numOfSales; i++ ) {
                    double sale;
                    input >> sale;
                    sp.sales.insert( sale );
                }
            }
            cout << endl;
            return (EXIT_SUCCESS);
        }
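
    For reference, std::list::insert takes an iterator position as its first argument, which is why the single-argument call above does not match any overload. A minimal sketch of the two usual fixes (append with push_back, or pass an explicit position to insert) looks like this; it uses a plain int list rather than the Salesperson type from the question:

        // Minimal sketch: std::list has no single-argument insert(value) overload.
        // Either append with push_back, or pass an iterator position to insert().
        #include <iostream>
        #include <list>

        int main() {
            std::list<int> values;

            values.push_back(10);                 // append at the end
            values.insert(values.begin(), 5);     // insert before a position
            values.insert(values.end(), 20);      // equivalent to push_back

            for (int v : values)
                std::cout << v << ' ';            // prints: 5 10 20
            std::cout << '\n';
            return 0;
        }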

  • Optimized OCR black/white pixel algorithm

    - by eagle
    I am writing a simple OCR solution for a finite set of characters. That is, I know exactly what all 26 letters of the alphabet will look like. I am using C# and am able to easily determine whether a given pixel should be treated as black or white. I am generating a matrix of black/white pixels for every single character. So for example, the letter I (capital i) might look like the following:

        01110
        00100
        00100
        00100
        01110

    Note: all points, which I use later in this post, assume that the top left pixel is (0, 0) and the bottom right pixel is (4, 4). 1's represent black pixels, and 0's represent white pixels. I would create a corresponding matrix in C# like this:

        CreateLetter("I", new List<List<bool>>() {
            new List<bool>() { false, true,  true,  true,  false },
            new List<bool>() { false, false, true,  false, false },
            new List<bool>() { false, false, true,  false, false },
            new List<bool>() { false, false, true,  false, false },
            new List<bool>() { false, true,  true,  true,  false }
        });

    I know I could probably optimize this part by using a multi-dimensional array instead, but let's ignore that for now; this is for illustrative purposes. Every letter is exactly the same dimensions, 10px by 11px (10px by 11px is the actual dimension of a character in my real program; I simplified this to 5px by 5px in this posting since it is much easier to "draw" the letters using 0's and 1's on a smaller image). Now when I give it a 10px by 11px part of an image to analyze with OCR, it would need to run every single letter (26) against every single pixel (10 * 11 = 110), which would mean 2,860 (26 * 110) iterations (in the worst case) for every single character.

    I was thinking this could be optimized by defining the unique characteristics of every character. So, for example, let's assume that the set of characters only consists of 5 distinct letters: I, A, O, B, and L. These might look like the following:

        I     A     O     B     L
        01110 00100 00100 01100 01000
        00100 01010 01010 01010 01000
        00100 01110 01010 01100 01000
        00100 01010 01010 01010 01000
        01110 01010 00100 01100 01110

    After analyzing the unique characteristics of every character, I can significantly reduce the number of tests that need to be performed to test for a character. For example, for the "I" character, I could define its unique characteristic as having a black pixel at coordinate (3, 0), since no other character has that pixel as black. So instead of testing 110 pixels for a match on the "I" character, I reduced it to a 1-pixel test. This is what it might look like for all these characters:

        var LetterI = new OcrLetter() {
            Name = "I",
            BlackPixels = new List<Point>() { new Point(3, 0) }
        };
        var LetterA = new OcrLetter() {
            Name = "A",
            WhitePixels = new List<Point>() { new Point(2, 4) }
        };
        var LetterO = new OcrLetter() {
            Name = "O",
            BlackPixels = new List<Point>() { new Point(3, 2) },
            WhitePixels = new List<Point>() { new Point(2, 2) }
        };
        var LetterB = new OcrLetter() {
            Name = "B",
            BlackPixels = new List<Point>() { new Point(3, 1) },
            WhitePixels = new List<Point>() { new Point(3, 2) }
        };
        var LetterL = new OcrLetter() {
            Name = "L",
            BlackPixels = new List<Point>() { new Point(1, 1), new Point(3, 4) },
            WhitePixels = new List<Point>() { new Point(2, 2) }
        };

    This is challenging to do manually for 5 characters and gets much harder as more letters are added. You also want to guarantee that you have the minimum set of unique characteristics for a letter, since you want it to be optimized as much as possible.

    I want to create an algorithm that will identify the unique characteristics of all the letters and generate similar code to that above. I would then use this optimized black/white matrix to identify characters. How do I take the 26 letters that have all their black/white pixels filled in (e.g. the CreateLetter code block) and convert them to an optimized set of unique characteristics that define a letter (e.g. the new OcrLetter() code block)? And how would I guarantee that it is the most efficient set of unique characteristics (e.g. instead of defining 6 points as the unique characteristics, there might be a way to do it with 1 or 2 points, as the letter "I" in my example was able to)?

    An alternative solution I've come up with is using a hash table, which will reduce it from 2,860 iterations to 110 iterations, a 26x reduction. This is how it might work: I would populate it with data similar to the following:

        Letters["01110 00100 00100 00100 01110"] = "I";
        Letters["00100 01010 01110 01010 01010"] = "A";
        Letters["00100 01010 01010 01010 00100"] = "O";
        Letters["01100 01010 01100 01010 01100"] = "B";

    Now when I reach a location in the image to process, I convert it to a string such as "01110 00100 00100 00100 01110" and simply look it up in the hash table. This solution seems very simple; however, it still requires 110 iterations to generate this string for each letter. In big O notation, the algorithm is the same since O(110N) = O(2860N) = O(N) for N letters to process on the page. However, it is still improved by a constant factor of 26, a significant improvement (e.g. instead of taking 26 minutes, it would take 1 minute).

    Update: Most of the solutions provided so far have not addressed the issue of identifying the unique characteristics of a character and rather provide alternative solutions. I am still looking for this solution, which, as far as I can tell, is the only way to achieve the fastest OCR processing.

    I just came up with a partial solution: for each pixel in the grid, store the letters that have it as a black pixel. Using these letters:

        I     A     O     B     L
        01110 00100 00100 01100 01000
        00100 01010 01010 01010 01000
        00100 01110 01010 01100 01000
        00100 01010 01010 01010 01000
        01110 01010 00100 01100 01110

    You would have something like this:

        CreatePixel(new Point(0, 0), new List<Char>() { });
        CreatePixel(new Point(1, 0), new List<Char>() { 'I', 'B', 'L' });
        CreatePixel(new Point(2, 0), new List<Char>() { 'I', 'A', 'O', 'B' });
        CreatePixel(new Point(3, 0), new List<Char>() { 'I' });
        CreatePixel(new Point(4, 0), new List<Char>() { });
        CreatePixel(new Point(0, 1), new List<Char>() { });
        CreatePixel(new Point(1, 1), new List<Char>() { 'A', 'B', 'L' });
        CreatePixel(new Point(2, 1), new List<Char>() { 'I' });
        CreatePixel(new Point(3, 1), new List<Char>() { 'A', 'O', 'B' });
        // ...
        CreatePixel(new Point(2, 2), new List<Char>() { 'I', 'A', 'B' });
        CreatePixel(new Point(3, 2), new List<Char>() { 'A', 'O' });
        // ...
        CreatePixel(new Point(2, 4), new List<Char>() { 'I', 'O', 'B', 'L' });
        CreatePixel(new Point(3, 4), new List<Char>() { 'I', 'A', 'L' });
        CreatePixel(new Point(4, 4), new List<Char>() { });

    Now for every letter, in order to find the unique characteristics, you need to look at which buckets it belongs to, as well as the number of other characters in each bucket. So let's take the example of "I". We go to all the buckets it belongs to (1,0; 2,0; 3,0; ...; 3,4) and see that the one with the fewest other characters is (3,0). In fact, it only has 1 character, meaning it must be an "I" in this case, and we found our unique characteristic.

    You can also do the same for pixels that would be white. Notice that bucket (2,0) contains all the letters except for "L"; this means that it could be used as a white-pixel test. Similarly, (2,4) doesn't contain an "A". Buckets that either contain all the letters or none of the letters can be discarded immediately, since these pixels can't help define a unique characteristic (e.g. 1,1; 4,0; 0,1; 4,4).

    It gets trickier when you don't have a 1-pixel test for a letter, for example in the case of "O" and "B". Let's walk through the test for "O". It's contained in the following buckets:

        // Bucket  Count  Letters
        // 2,0     4      I, A, O, B
        // 3,1     3      A, O, B
        // 3,2     2      A, O
        // 2,4     4      I, O, B, L

    Additionally, we also have a few white-pixel tests that can help (I only listed those that are missing at most 2). The Missing Count was calculated as (5 - Bucket.Count).

        // Bucket  Missing Count  Missing Letters
        // 1,0     2              A, O
        // 1,1     2              I, O
        // 2,2     2              O, L
        // 3,4     2              O, B

    So now we can take the smallest black-pixel bucket (3,2) and see that when we test for (3,2) we know it is either an "A" or an "O". So we need an easy way to tell the difference between an "A" and an "O". We could either look for a black-pixel bucket that contains "O" but not "A" (e.g. 2,4) or a white-pixel bucket that contains an "O" but not an "A" (e.g. 1,1). Either of these could be used in combination with the (3,2) pixel to uniquely identify the letter "O" with only 2 tests.

    This seems like a simple algorithm when there are 5 characters, but how would I do this when there are 26 letters and a lot more pixels overlapping? For example, let's say that after the (3,2) pixel test, it found 10 different characters that contain the pixel (and this was the fewest of all the buckets). Now I need to find differences from 9 other characters instead of only 1 other character. How would I achieve my goal of getting the fewest checks possible, and ensure that I am not running extraneous tests?
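
    One way to automate the "find the unique characteristics" step is a greedy covering pass over the pixel buckets described above. The C# sketch below is an illustration, not the poster's code: it assumes every glyph is stored as a bool[,] of identical dimensions (the List<List<bool>> form could be converted first), and greedy selection is not guaranteed to produce the provably smallest test set, although it usually gets close.

        // Hedged sketch: for each letter, repeatedly pick the single pixel test
        // (black or white) that rules out the most remaining candidate letters,
        // until only that letter is left. Glyphs are bool[,] grids (true = black).
        using System.Collections.Generic;
        using System.Linq;

        static class SignatureFinder
        {
            public static Dictionary<char, List<(int X, int Y, bool Black)>> FindSignatures(
                Dictionary<char, bool[,]> glyphs)
            {
                int width = glyphs.Values.First().GetLength(0);
                int height = glyphs.Values.First().GetLength(1);
                var signatures = new Dictionary<char, List<(int X, int Y, bool Black)>>();

                foreach (char target in glyphs.Keys)
                {
                    var candidates = new HashSet<char>(glyphs.Keys);
                    candidates.Remove(target);
                    var tests = new List<(int X, int Y, bool Black)>();

                    while (candidates.Count > 0)
                    {
                        int bestX = 0, bestY = 0, bestEliminated = -1;

                        for (int x = 0; x < width; x++)
                        {
                            for (int y = 0; y < height; y++)
                            {
                                bool targetPixel = glyphs[target][x, y];
                                // candidates whose pixel differs from the target's are eliminated
                                int eliminated = candidates.Count(c => glyphs[c][x, y] != targetPixel);
                                if (eliminated > bestEliminated)
                                {
                                    bestEliminated = eliminated;
                                    bestX = x;
                                    bestY = y;
                                }
                            }
                        }

                        if (bestEliminated <= 0)
                            break;   // two identical glyphs cannot be told apart by any pixel

                        bool black = glyphs[target][bestX, bestY];
                        tests.Add((bestX, bestY, black));
                        candidates.RemoveWhere(c => glyphs[c][bestX, bestY] != black);
                    }

                    signatures[target] = tests;
                }

                return signatures;
            }
        }

    Run over the five sample glyphs, this finds a single-pixel test for "I" (one of the pixels only "I" has black) and falls back to two tests for letters like "O" that no single pixel separates from everything else.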

  • Hibernate Query for a List of Objects that matches a List of Objects' ids

    - by sal
    Given classes Foo and Bar, which have Hibernate mappings to tables Foo, A, B and C:

        public class Foo {
            Integer aid;
            Integer bid;
            Integer cid;
            ...;
        }

        public class Bar {
            A a;
            B b;
            C c;
            ...;
        }

    I build a List<Foo> fooList of an arbitrary size and I would like to use Hibernate to fetch a List<Bar>, where the resulting list will look something like this:

        Bar[1] = [X1,Y2,ZA,...]
        Bar[2] = [X1,Y2,ZB,...]
        Bar[3] = [X1,Y2,ZC,...]
        Bar[4] = [X1,Y3,ZD,...]
        Bar[5] = [X2,Y4,ZE,...]
        Bar[6] = [X2,Y4,ZF,...]
        Bar[7] = [X2,Y5,ZG,...]
        Bar[8] = ...

    where each Xi, Yi and Zi represents a unique object. I know I can iterate fooList, fetch each List<Bar>, and call barList.addAll(...) to build the result list, with something like this:

        barList.addAll(
            s.createQuery("from Bar bar where bar.aid = :aid and ... ")
                .setEntity("aid", foo.getAid())
                .setEntity("bid", foo.getBid())
                .setEntity("cid", foo.getCid())
                .list()
        );

    Is there any easier way, ideally one that makes better use of Hibernate and makes a minimal number of database calls? Am I missing something? Is Hibernate not the right tool for this?
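
    One hedged option is to build a single Criteria query whose WHERE clause ORs together one (aid AND bid AND cid) conjunction per Foo, so the whole fooList is fetched in one round trip. The sketch below assumes Bar maps a, b, and c as many-to-one associations whose identifiers can be referenced as "a.id", "b.id", "c.id"; for a very large fooList you would also want to batch the disjunction to stay under database expression limits.

        import java.util.List;
        import org.hibernate.Session;
        import org.hibernate.criterion.Disjunction;
        import org.hibernate.criterion.Restrictions;

        public class BarLookup {
            // Illustrative sketch, not the poster's code: one SELECT with a WHERE
            // clause of OR-ed (aid AND bid AND cid) groups, one group per Foo.
            @SuppressWarnings("unchecked")
            public static List<Bar> fetchBars(Session s, List<Foo> fooList) {
                Disjunction anyFoo = Restrictions.disjunction();
                for (Foo foo : fooList) {
                    anyFoo.add(Restrictions.conjunction()
                            .add(Restrictions.eq("a.id", foo.getAid()))
                            .add(Restrictions.eq("b.id", foo.getBid()))
                            .add(Restrictions.eq("c.id", foo.getCid())));
                }
                return s.createCriteria(Bar.class).add(anyFoo).list();
            }
        }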

  • How to bind a list to a drop-down list in a GridView

    - by user3721173
    I have a GridView that contains a drop-down list, and I have a list that I want to bind to that drop-down in the GridView.

        <asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="False"
            OnSelectedIndexChanged="GridView1_SelectedIndexChanged"
            OnRowDataBound="GridView1_RowDataBound">
            <Columns>
                <asp:TemplateField>
                    <ItemTemplate>
                        <asp:Label ID="Label2" runat="server"></asp:Label>
                        <asp:DropDownList ID="DropDownList3" runat="server" AppendDataBoundItems="True"
                            OnSelectedIndexChanged="DropDownList3_SelectedIndexChanged1">
                        </asp:DropDownList>
                    </ItemTemplate>
                </asp:TemplateField>
            </Columns>
        </asp:GridView>

    and

        protected void GridView1_RowDataBound(object sender, GridViewRowEventArgs e)
        {
            DropDownList dropdown = (DropDownList)e.Row.FindControl("DropDownList3");
            ClassDal obj = new ClassDal();
            List<phone> list = obj.GetAll();
            dropdown.DataTextField = "phone";
            dropdown.DataValueField = "id";
            dropdown.DataSource = list.ToList();
            dropdown.DataBind();
        }

    and

        namespace sample_table
        {
            public class ClassDal
            {
                public List<phone> GetAll()
                {
                    using (PracticeDBEntities1 context = new PracticeDBEntities1())
                    {
                        return context.phone.ToList();
                    }
                }
            }
        }

    But I receive this exception: "Object reference not set to an instance of an object" on the line:

        dropdown.DataTextField = "phone";
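
    The usual cause of that NullReferenceException is that RowDataBound also fires for the header, footer, and pager rows, where the ItemTemplate (and therefore DropDownList3) does not exist, so FindControl returns null. A hedged rewrite of the handler above with that guard added might look like this:

        // Sketch of the question's handler with a row-type guard added.
        protected void GridView1_RowDataBound(object sender, GridViewRowEventArgs e)
        {
            if (e.Row.RowType != DataControlRowType.DataRow)
                return;   // skip header/footer/pager rows, which have no ItemTemplate

            DropDownList dropdown = (DropDownList)e.Row.FindControl("DropDownList3");
            if (dropdown == null)
                return;

            ClassDal obj = new ClassDal();
            List<phone> list = obj.GetAll();
            dropdown.DataTextField = "phone";   // property of the phone entity shown to the user
            dropdown.DataValueField = "id";
            dropdown.DataSource = list;
            dropdown.DataBind();
        }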

  • Is "White-Board-Coding" inappropriate during interviews?

    - by Eoin Campbell
    This is a somewhat subjective question, but I'd love to hear feedback/opinions from either interviewers or interviewees on the topic. We split our technical interview into 4 parts: Write Code, Read & Analyse Code, Design Session, and Code on the Whiteboard. For the last part, we ask interviewees to write a small code snippet (4-5 lines) on the whiteboard and explain it as they go. Let me be clear: the purpose is not to catch people out. We're not looking for perfect syntax. Hell, it can even be pseudo-code, but the point is to give them a very simple problem and see if their brain can communicate the solution to us. By simple problems I mean "Reverse a string", "FizzBuzz", etc. EDIT: Just with regard to the comment about pseudo-code, we always ask for an explicit language first. We're a .NET C# house; we've only said "pseudo-code" where someone has been blanking or really struggling with the code. My question is: "Is it inappropriate / unreasonable to expect a programmer to write a code snippet on a whiteboard during an interview?"

  • Absence Management White Papers to Assist with your Implementations

    - by Carolyn Cozart
    Absence Management Setup – Additional Resources

    PeopleSoft is committed to helping our customers by sharing our knowledge and expertise in our applications. We have prepared a collection of documents (white papers) containing examples, tips, and techniques to help you make important decisions during your Absence Management implementation. These documents can all be found on My Oracle Support.

    Absence Management Entitlement and Take Setup: This document (Document ID 1493866.1) provides an overview of how to set up the main components of Absence Management, such as Absence Entitlement and Take elements, as well as other supporting elements relevant to your Absence Management implementation.

    Absence Management System Elements: This document (Document ID 1493879.1) provides an overview of the system elements related to Absence Management. System elements are building blocks used during the design and construction of your absence rules. Knowing how they work and when to use them should help you expedite the implementation of your absence policy rules in your company.

    Absence Management Self Service Setup: This document (Document ID 1493867.1) provides an overview of, and guidance on, some of the important areas when setting up Absence Self Service. Throughout this document we provide examples of different configurations supported in Self Service.

  • Why is my shadowmap all white?

    - by Berend
    I was trying out a shadow map, but the shadow map comes out all white. I think there is some problem with my homogeneous component. Can anybody help me? The rest of my code is written in XNA. Here is the HLSL code I used:

        float4x4 xWorld;
        float4x4 xView;
        float4x4 xProjection;

        struct VertexToPixel
        {
            float4 Position  : POSITION;
            float4 ScreenPos : TEXCOORD1;
            float  Depth     : TEXCOORD2;
        };

        struct PixelToFrame
        {
            float4 Color : COLOR0;
        };

        //------- Technique: ShadowMap --------
        VertexToPixel MyVertexShader(float4 inPos : POSITION0, float3 inNormal : NORMAL0)
        {
            VertexToPixel Output = (VertexToPixel)0;

            float4x4 preViewProjection = mul(xView, xProjection);
            float4x4 preWorldViewProjection = mul(xWorld, preViewProjection);

            Output.Position = mul(inPos, mul(xWorld, preViewProjection));
            Output.Depth = Output.Position.z / Output.Position.w;
            Output.ScreenPos = Output.Position;
            return Output;
        }

        float4 MyPixelShader(VertexToPixel PSIn) : COLOR0
        {
            PixelToFrame Output = (PixelToFrame)0;
            Output.Color = PSIn.ScreenPos.z / PSIn.ScreenPos.w;
            return Output.Color;
        }

        technique ShadowMap
        {
            pass Pass0
            {
                VertexShader = compile vs_2_0 MyVertexShader();
                PixelShader = compile ps_2_0 MyPixelShader();
            }
        }
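
    Worth noting: a perspective z/w depth is heavily skewed toward 1.0, so a perfectly valid shadow map often looks almost uniformly white when drawn to the screen. One hedged way to check whether the map is actually broken is to visualize a linearized view-space depth instead; the sketch below is a debug variant only, and xFarPlane is a hypothetical new uniform you would have to set from the XNA side.

        // Hypothetical debug shaders: write linear view-space depth scaled by the
        // far plane so the gradient is visible, instead of the non-linear z/w.
        float xFarPlane;   // assumed new uniform, set from XNA

        VertexToPixel MyDebugVertexShader(float4 inPos : POSITION0, float3 inNormal : NORMAL0)
        {
            VertexToPixel Output = (VertexToPixel)0;

            Output.Position = mul(inPos, mul(xWorld, mul(xView, xProjection)));
            Output.ScreenPos = Output.Position;

            // view-space depth; negated assuming XNA's default right-handed view,
            // where visible geometry has negative view-space z
            float4 viewPos = mul(inPos, mul(xWorld, xView));
            Output.Depth = -viewPos.z / xFarPlane;
            return Output;
        }

        float4 MyDebugPixelShader(VertexToPixel PSIn) : COLOR0
        {
            return float4(PSIn.Depth, PSIn.Depth, PSIn.Depth, 1.0f);
        }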

  • Ubuntu Server 11.10 boot, white terminal with garbled black text

    - by SpeedCrazy
    I just installed Ubuntu Server 11.10 and the install went fine. This system is running on an Intel Pentium II board with onboard graphics. However, when I try to boot into Ubuntu I get a white terminal with garbled black text. I have tried various GRUB 'fixes', as googling the issue suggested it was a resolution- or GRUB-related problem. I cannot SSH in, so the issue does affect Linux as well. I have had no luck with anything thus far and am at my wits' end. This was my first Ubuntu excursion, as my friend told me it was better for servers than CentOS because it was easier... Not so much... Does anyone have any ideas as to what the issue could be? When answering, bear in mind that I am an Ubuntu noob and a Linux novice.

    As of 1/26/12 I have tried adding the console=ttyl line to /etc/default/grub and running update-grub. This results in the line in the boot parameters that normally reads:

        linux /vmlinuz-3.0.0-12-generic-pae root=/dev/mapper/dev-root ro vt.handoff=7

    now reading:

        linux /vmlinuz-3.0.0-12-generic-pae root=/dev/mapper/dev-root ro console=ttyl vt.handoff=7

    This does not work. Is there any way to have console=ttyl inserted on a line by itself? I am at my wits' end. Thanks for all your help, Speed

  • Black or White Border/Shadow around PNGs in SDL/OPENGL

    - by Dylan
    I'm having the same issue as this: Why do my sprites have a dark shadow/line/frame surrounding the texture? However, when I do the fix suggested there (changing GL_SRC_ALPHA to GL_ONE), it just replaces the black border with a white border on the images, and messes with my background color and some polygons I'm drawing (not all of them, weirdly) by making them much lighter. Any ideas? Here's some of my relevant code.

    Init:

        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();

        glEnable(GL_DEPTH_TEST);
        glEnable(GL_MULTISAMPLE);
        glEnable(GL_TEXTURE_2D);
        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

        glAlphaFunc(GL_GREATER, 0.01);
        glEnable(GL_ALPHA_TEST);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glEnable(GL_BLEND);

    When each texture is loaded:

        glGenTextures(1, &textureID);
        glBindTexture(GL_TEXTURE_2D, textureID);
        gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGBA, surface->w, surface->h,
                          GL_BGRA, GL_UNSIGNED_BYTE, surface->pixels);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
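
    One common cause of this kind of halo is the RGB of fully transparent texels bleeding in through GL_LINEAR and mipmap filtering. A frequently used remedy is premultiplied alpha: multiply each texel's color by its alpha at load time and blend with GL_ONE / GL_ONE_MINUS_SRC_ALPHA. The C sketch below is an illustration under the assumption of a 32-bit BGRA SDL surface like the one passed to gluBuild2DMipmaps above; surface locking and pitch padding are ignored for brevity.

        /* Hedged sketch: premultiply a 32-bit BGRA surface by its alpha channel
         * before uploading it, then blend with glBlendFunc(GL_ONE,
         * GL_ONE_MINUS_SRC_ALPHA). Locking and surface->pitch handling omitted. */
        #include <SDL/SDL.h>
        #include <GL/gl.h>

        static void premultiply_alpha(SDL_Surface *surface)
        {
            Uint8 *pixels = (Uint8 *)surface->pixels;
            int count = surface->w * surface->h;
            for (int i = 0; i < count; i++) {
                Uint8 a = pixels[i * 4 + 3];          /* alpha byte of a BGRA texel */
                pixels[i * 4 + 0] = (Uint8)(pixels[i * 4 + 0] * a / 255);
                pixels[i * 4 + 1] = (Uint8)(pixels[i * 4 + 1] * a / 255);
                pixels[i * 4 + 2] = (Uint8)(pixels[i * 4 + 2] * a / 255);
            }
        }

        /* After premultiplying, switch the blend function once at init time:   */
        /* glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);                          */

    With the texture data premultiplied, opaque fragments blend exactly as they did with GL_SRC_ALPHA, so the washed-out polygons seen when only the blend function was changed should not reappear.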

  • Ubuntu won't start - blank screen with flashing white cursor

    - by loomy
    My laptop is dual-booted with Windows 8 on one partition and Ubuntu 12.04.3 on the other. I've searched for my issue already, but nothing I've found so far has solved the problem. Since last week, when I try to boot Ubuntu from GRUB, I am taken to a purple screen (as usual), but then to a black screen with a blinking white _ cursor. I've tried leaving it, but nothing else happens. When I hold Shift and edit the GRUB entry to change 'quiet splash' to 'text', the black screen instead asks for my login and password. When I put them in, it tells me the date of my last logon, and then waits for further commands. Being very, very new to Ubuntu, I have no idea at all what to try at this point. I tried to launch failsafeX, but while that was starting it said "unable to run server /usr/bin/X: No such file or directory", then shortly after returned to the recovery mode menu. Pressing Ctrl+Alt+Delete goes through an Ubuntu loading screen, and then the laptop restarts. Any suggestions will be very much appreciated, and apologies if this is a common issue that has been answered a million times before.

  • Announcing a functional best practices White Paper for SIM and RMS integration

    - by Oracle Retail Documentation Team
    Oracle Retail has published a document on My Oracle Support (https://support.oracle.com) that provides guidance on how to adopt best practices that facilitate the integration between the Oracle Retail Merchandising System (RMS) and Oracle Retail Store Inventory Management (SIM).

    Doc ID: 1424596.1

    This paper highlights some specific functional best practices when integrating Oracle Retail Merchandising System (RMS) and Oracle Retail Store Inventory Management (SIM). The list in this paper is not comprehensive. Topics include:

        Inventory adjustments
        Returns to vendor (RTV)
        Transfer shipping
        Receipts
        Receipt unit adjustments
        Stock order reconciliation
        Stock counts
        Transformable items

  • Are there any useful tools to mirror a mailman mailing list as a forum?

    - by mar10
    We have a mailman mailing list; however, as we all know, it is not very user friendly when it comes to searching the archives. I am looking at a way to keep mailman working as it does now while having a forum linked to it for a friendlier user interface. Is there a forum application that lets you mirror the mailman service, so that posts to mailman are synced into the forum and posts to the forum are synced back to mailman?

  • This is my first time asking here: I wanted to create a linked list that stays sorted as I insert. Thanks!

    - by user2738718
        package practise;

        public class Node {
            // node contains data and next
            public int data;
            public Node next;

            // constructor
            public Node(int data, Node next) {
                this.data = data;
                this.next = next;
            }

            // list insert method
            public void insert(Node list, int s) {
                // case 1: only one element in the list
                if (list.next == null && list.data > s) {
                    Node T = new Node(s, list);
                } else if (list.next == null && list.data < s) {
                    Node T = new Node(s, null);
                    list.next = T;
                }

                // case 2: more than 1 element in the list
                // 3 elements present; I set prev to null and curr to list, then performed a while loop
                if (list.next.next != null) {
                    Node curr = list;
                    Node prev = null;
                    while (curr != null) {
                        prev = curr;
                        curr = curr.next;
                        if (curr.data > s && prev.data < s) {
                            Node T = new Node(s, curr);
                            prev.next = T;
                        }
                    }
                    // case 3: at the end, check the data
                    if (prev.data < s) {
                        Node T = new Node(s, null);
                        prev.next = T;
                    }
                }
            }
        }

    This is a homework problem. I created the insert method so I can check each value and place it in the correct position, so my list stays sorted. This is how far I got; please correct me if I am wrong. I keep inserting nodes in the main method with Node root = new Node(); and root.insert() to add.
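
    For comparison, a sorted insert is often easier to get right as a helper that returns the (possibly new) head of the list, which removes most of the special cases above. The sketch below is illustrative only and reuses the question's Node(data, next) constructor:

        // Hedged sketch: sorted insert that returns the new head of the list.
        public class SortedListHelper {
            public static Node sortedInsert(Node head, int s) {
                // empty list, or new smallest element: the new node becomes the head
                if (head == null || s <= head.data) {
                    return new Node(s, head);
                }
                // walk until the next node is null or not smaller than s
                Node curr = head;
                while (curr.next != null && curr.next.data < s) {
                    curr = curr.next;
                }
                curr.next = new Node(s, curr.next);   // splice the new node in
                return head;
            }
        }

    Callers would then write root = SortedListHelper.sortedInsert(root, value); so the head reference is always kept up to date.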

  • Mailing List Solution

    - by Shoaibi
    I have more than 1M users that I need to send newsletters to. I have tried PHPList, but it has failed me: it stalls every 30K emails. I need a faster and more reliable solution.

  • Wordpress Showing White Text [closed]

    - by Sakamoto Kazuma
    So I've heard about all the WordPress issues with plugins making the edit box show only white text. I've heard that this is a known issue and that they're working to fix it. I think the issue I'm having is a bit different. I can see plain, clean, readable, editable black text on most of the computers within my dept. However, one user is seeing the white text while using IE6. I've disabled and re-enabled all of his IE plugins, including Gears and Adobe. Still seeing the issue with just that one computer. I have run Windows Update, cleared the cache and all cookies, and cleaned up the registry. Still seeing white text. I checked user preferences, but noticed that any user editing posts from that computer will see white text. Is there a fix or workaround?

  • Procmail Mailing List (With Access Control)

    - by bradlis7
    This seems like it should be fairly easy to do, but I've run into a few problems. I've added a cron job to parse all users whose UID is greater than 5000:

        * * * * * root /usr/bin/test /etc/passwd -nt ~allusers/.forward \
            && /bin/egrep '([5-9]|[0-9]{2})[0-9]{3}' /etc/passwd | /bin/grep -v 65534 \
            | /bin/cut -d ':' -f 1 > ~allusers/.forward

    Then I created a .procmailrc file:

        VERBOSE=yes
        LOGFILE=/var/log/procmailrc

        # Allow only certain users to send
        :0
        * ^From.*[email protected].*
        {}

        :0E
        /dev/null

    But the .forward file is evidently processed before the mail even gets to procmail. If I move the .forward file to another filename, can I use it in procmail to send an email to the users listed in that file?

  • List<> Capacity is larger than the number of items added

    - by Pete
        List<string> ali = new List<string>();
        ali.Clear();
        ali.Add("apple");
        ali.Add("orange");
        ali.Add("banana");
        ali.Add("cherry");
        ali.Add("mango");
        ali.Add("plum");
        ali.Add("jackfruit");

        Console.WriteLine("the List has {0} items in it.", ali.Capacity.ToString());

    When I run this, the console displays: "the List has 8 items in it." I don't understand why it's showing a capacity of 8 when I only added 7 items.
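
    The short explanation is that Capacity is the size of the list's internal backing array, not the number of elements stored; Count is what the message intends. A small sketch (the exact growth sequence is an implementation detail, but it is typically 0, 4, 8, 16, ...):

        // Count vs. Capacity: Count is the number of elements, Capacity is the
        // size of the backing array, which grows in jumps as items are added.
        using System;
        using System.Collections.Generic;

        class CountVsCapacity
        {
            static void Main()
            {
                var ali = new List<string> { "apple", "orange", "banana", "cherry",
                                             "mango", "plum", "jackfruit" };

                Console.WriteLine("the List has {0} items in it.", ali.Count);       // 7
                Console.WriteLine("the backing array can hold {0}.", ali.Capacity);  // typically 8
            }
        }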

  • List<> AddRange throwing ArgumentException

    - by Tim
    Hi all, I have a particular method that is occasionally crashing with an ArgumentException:

        Destination array was not long enough. Check destIndex and length, and the array's lower bounds.:
        at System.Array.Copy(Array sourceArray, Int32 sourceIndex, Array destinationArray, Int32 destinationIndex, Int32 length, Boolean reliable)
        at System.Collections.Generic.List`1.CopyTo(T[] array, Int32 arrayIndex)
        at System.Collections.Generic.List`1.InsertRange(Int32 index, IEnumerable`1 collection)
        at System.Collections.Generic.List`1.AddRange(IEnumerable`1 collection)

    The code that is causing this crash looks something like this:

        List<MyType> objects = new List<MyType>(100);
        objects = FindObjects(someParam);
        objects.AddRange(FindObjects(someOtherParam));

    According to MSDN, List<T>.AddRange() should automatically resize itself as needed:

        If the new Count (the current Count plus the size of the collection) will be greater than Capacity, the capacity of the List<T> is increased by automatically reallocating the internal array to accommodate the new elements, and the existing elements are copied to the new array before the new elements are added.

    Can someone think of a circumstance in which AddRange could throw this type of exception?
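
    One circumstance worth ruling out is concurrent access: List<T> is not thread-safe, and AddRange snapshots the count, grows the backing array, then copies, so another thread mutating the same list in between can leave the destination array too short and surface exactly this ArgumentException. A hedged sketch of the usual mitigation (the wrapper type and names are illustrative, not from the question):

        // Serialize all mutations of a shared list behind one lock so Count and
        // the backing array cannot change in the middle of an AddRange copy.
        using System.Collections.Generic;

        class SafeAccumulator<T>
        {
            private readonly object _sync = new object();
            private readonly List<T> _items = new List<T>();

            public void AddRange(IEnumerable<T> range)
            {
                lock (_sync)
                {
                    _items.AddRange(range);
                }
            }

            public List<T> Snapshot()
            {
                lock (_sync)
                {
                    return new List<T>(_items);   // copy out under the same lock
                }
            }
        }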

  • Advanced Python list comprehension

    - by Yuval A
    Given two lists:

        chars = ['ab', 'bc', 'ca']
        words = ['abc', 'bca', 'dac', 'dbc', 'cba']

    how can you use list comprehensions to generate a filtered list of words by the following condition: given that each word is of length n and chars is of length n as well, the filtered list should include only words whose i-th character appears in the i-th string of chars. In this case, we should get ['abc', 'bca'] as a result. (If this looks familiar to anyone, this was one of the questions in a previous Google Code Jam.)
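
    For reference, one comprehension that satisfies the stated condition (a sketch, not necessarily the contest's official answer) pairs each character with the corresponding string via zip and keeps the word only if every pair matches:

        # Keep a word only if every i-th character appears in the i-th string of chars.
        chars = ['ab', 'bc', 'ca']
        words = ['abc', 'bca', 'dac', 'dbc', 'cba']

        filtered = [w for w in words if all(c in s for c, s in zip(w, chars))]
        print(filtered)   # ['abc', 'bca']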
