Search Results

Search found 6753 results on 271 pages for 'forward declaration'.

Page 73 of 271

  • problem in accessing the path involving pagination using display tag

    - by sarah
    Hi all, I am using the display tag for pagination and display of data in table format. The code looks like this:

        <display:column title="Select" style="width: 90px;">
          <input type="checkbox" name="optionSelected" value="<c:out value='${userList.loginName}'/>"/>
        </display:column>
        <display:column property="loginName" sortable="false" title="UserName" paramId="loginName"
                        style="width: 150px; text-align:center" href="./editUser.do?method=editUser"/>

    The page size is one, so one record is displayed per page, and the edit page is displayed on click of the login name. But when I go to the second page I get an error saying the path /views/editUser was not found. How exactly should I define the path? The struts-config is:

        <!-- action for edit user -->
        <action path="/editUser" name="editUserForm" type="com.actions.UserManagementAction"
                parameter="method" input="/EditUser.jsp" scope="request">
          <forward name="success" path="/views/EditUser.jsp" />
          <forward name="failure" path="/views/failure.jsp" />
        </action>

    Please tell me how I should define the access path for the display tag: on click of the edit link the first page works, but not the second.
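
    A likely cause is the relative href: "./editUser.do" is resolved against whatever URL the pagination links produce, so it breaks once the page parameter changes the current path. A sketch of one common fix, using a context-rooted URL instead (an assumption about the setup, not a tested configuration):

        <display:column property="loginName" sortable="false" title="UserName" paramId="loginName"
                        style="width: 150px; text-align:center"
                        href="${pageContext.request.contextPath}/editUser.do?method=editUser"/>

    An absolute (context-rooted) href stays stable no matter which page of the table generated the link.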

  • Java JSP/Servlet: controller servlet throwing the famous stack overflow

    - by NoozNooz42
    I've read several docs and I don't get it: I know I'm doing something wrong, but I don't understand what. I've got a website that is entirely dynamically generated: there's hardly any static content at all. So, trying to understand JSP/Servlets, I've written my own "front controller" that intercepts every single request. It looks like this:

        <servlet-mapping>
          <servlet-name>defaultservlet</servlet-name>
          <url-pattern>/*</url-pattern>
        </servlet-mapping>

    Basically I want any user request, like:

        example.org
        example.org/bar
        example.org/foo.html

    to all go through a default servlet which I've written. The servlet then examines the URI, finds which .jsp the request must be dispatched to, and then, after having set all the attributes correctly, does a:

        RequestDispatcher dispatcher = getServletContext().getRequestDispatcher("/WEB-INF/jsp/index.jsp");
        dispatcher.forward(req, resp);

    When I'm using a url-pattern (in web.xml) like, say, *.html, everything works fine. But when I change it to /* (to really intercept everything), I enter an endless loop and it ends up with a... StackOverflow :) When the request is dispatched, is the URI ".../WEB-INF/jsp/index.jsp" itself matched by the /* pattern that I set in web.xml? What should I do if I want to intercept everything using a /* url-pattern and yet still be able to dispatch/forward? I'm not asking about specs/Javadocs here: I'm really confused about the bigger picture and I'd need some explanation as to what could be going on. Am I not supposed to intercept really everything? If I can intercept everything, what should I be aware of regarding forwarding/dispatching?
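
    Yes, the forwarded path goes through servlet mapping again, and the /* path mapping wins over the container's implicit *.jsp extension mapping, so the controller keeps forwarding to itself. A sketch of one known workaround on Tomcat ("jsp" is the name Tomcat gives its built-in JSP servlet; other containers may name it differently, so treat this as an assumption to verify):

        @Override
        protected void service(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            // Second pass: the forward re-entered this servlet with the JSP path,
            // so hand it to the container's JSP servlet by name (named dispatchers
            // bypass URL mapping entirely).
            if (req.getServletPath().endsWith(".jsp")) {
                getServletContext().getNamedDispatcher("jsp").forward(req, resp);
                return;
            }
            // First pass: normal controller logic, then forward as before.
            req.setAttribute("model", buildModel(req)); // buildModel is a placeholder
            req.getRequestDispatcher("/WEB-INF/jsp/index.jsp").forward(req, resp);
        }

    The alternative is to map the controller to a prefix like /app/* so forwarded JSP paths fall outside its pattern.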

  • Google Web Toolkit Asynchronous Call from a Service Implementation

    - by Thor Thurn
    I'm writing a simple Google Web Toolkit service which acts as a proxy, which will basically exist to allow the client to make a POST to a different server. The client essentially uses this service to request an HTTP call. The service has only one asynchronous method call, called ajax(), which should just forward the server response. My code for implementing the call looks like this:

        class ProxyServiceImpl extends RemoteServiceServlet implements ProxyService {
          @Override
          public Response ajax(String data) {
            RequestBuilder rb = /* make a request builder */
            RequestCallback rc = new RequestCallback() {
              @Override
              public void onResponseReceived(Response response) {
                /* Forward this response back to the client as the return
                   value of the ajax method... somehow... */
              }
            };
            rb.sendRequest(data, requestCallback);
            return /* The response above... except I can't */;
          }
        }

    You can see the basic form of my problem, of course. The ajax() method is used asynchronously, but GWT decides to be smart and hide that from the dumb old developer, so they can just write normal Java code without callbacks. GWT services basically just do magic instead of accepting a callback parameter. The trouble arises, then, because GWT is hiding the callback object from me. I'm trying to make my own asynchronous call from the service implementation, but I can't, because GWT services assume that you behave synchronously in service implementations. How can I work around this and make an asynchronous call from my service method implementation?
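
    One thing worth noting: RequestBuilder is part of GWT's client-side library, while a RemoteServiceServlet runs on the server, where ordinary blocking I/O is fine. A sketch of doing the call synchronously with plain java.net instead (the target URL is a placeholder, and the return type is changed to String because GWT's Response is a client-side type):

        import java.io.*;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.nio.charset.StandardCharsets;

        public String ajax(String data) throws IOException {
            HttpURLConnection conn =
                (HttpURLConnection) new URL("http://other-server.example/api").openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            try (OutputStream out = conn.getOutputStream()) {
                out.write(data.getBytes(StandardCharsets.UTF_8)); // forward the client's payload
            }
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
                StringBuilder sb = new StringBuilder();
                for (String line; (line = in.readLine()) != null; ) sb.append(line).append('\n');
                return sb.toString(); // handed back to the GWT client as the service result
            }
        }

    The servlet can simply block until the remote server answers; asynchrony only matters on the browser side.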

  • Essence of BiMap in Google collections

    - by littleEinstein
    I am still quite puzzled by the BiMap in Google Collections/Guava. It is claimed that "The two bimaps are backed by the same data; any changes to one will appear in the other." I browsed through the source code, and I found the use of delegate in ForwardingMap. But in the actual subclass StandardBiMap, I do see the data being put into both the forward and the reverse map. So what is the essence, and why does it claim to have saved space by keeping only one copy of the data? Is it just that the actual objects are one set, but two distinct sets of references to these objects are still needed, one set maintained in the forward map and the other in the reverse map?

        private V putInBothMaps(K key, V value, boolean force) {
          boolean containedKey = containsKey(key);
          if (containedKey && Objects.equal(value, get(key))) {
            return value;
          }
          if (force) {
            inverse().remove(value);
          } else if (containsValue(value)) {
            throw new IllegalArgumentException("value already present: " + value);
          }
          V oldValue = super.put(key, value);
          updateInverseMap(key, containedKey, oldValue, value);
          return oldValue;
        }
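
    That reading is essentially right: the key and value objects exist once; only the entry references are duplicated. A minimal sketch of the idea (an illustration, not Guava's actual implementation):

        import java.util.HashMap;
        import java.util.Map;

        final class SimpleBiMap<K, V> {
            private final Map<K, V> forward  = new HashMap<>();
            private final Map<V, K> backward = new HashMap<>();

            void put(K key, V value) {
                forward.put(key, value);   // stores references to the two objects
                backward.put(value, key);  // same two objects, indexed the other way
            }

            V get(K key)      { return forward.get(key); }
            K getKey(V value) { return backward.get(value); }
        }

    The space saved is relative to maintaining two independent copies of the data; the per-entry bookkeeping of two hash tables is still paid.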

  • Does Github.com have to create a merge commit when you merge from a fork?

    - by Nishant
    I cloned the master and started doing my work. Due to permissions, I push the branch to my fork. I then send a pull request to master, and someone with permission does the merge. I notice that Github.com creates a merge commit snapshot, which to me looks like just a diff of the entire changes. That isn't strictly necessary, but it is helpful in the sense that I can just look at the merge commit to see the entire diff. I can see the same SHA as on my own branch, hence it looks like the merge is an extra commit which probably isn't necessary, since it could be a fast forward?

        master            - a
        myfork (computer) - a->b->c
        myfork (github)   - a->b->c

    The pull request from myfork to master (which it says it can automatically merge) shows the entire diff, and when I merge it, it shows up as master - a->b->c->d. The d is a merge commit, which I think is not really required because it could be a fast forward? Can someone explain why this happens? I think this would be the same scenario if I rebased onto master after master had moved ahead, but that has not happened: master is still at a when I merge.
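
    GitHub's merge button deliberately records a merge commit even when a fast forward is possible (the equivalent of --no-ff), so the pull request has a single commit to point at. A sketch of the difference on the command line:

        # A fast-forward merge leaves no extra commit: master just moves to c.
        git checkout master
        git merge --ff-only myfork

        # What the merge button effectively does: always create merge commit d.
        git merge --no-ff myfork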

  • VB.NET switching from ADO.NET to LINQ

    - by Cj Anderson
    I'm VERY new to LINQ. I have an application I wrote in VB.NET 2.0. It works great, but I'd like to switch this application to LINQ. I use ADO.NET to load XML into a DataTable. The XML file has about 90,000 records in it. I then use DataTable.Select to perform searches against that DataTable. The search control is a free-form textbox, so if the user types in terms it searches instantly, and any further terms typed in continue to restrict the results: you can type in "Bob", or "Bob Barker", or "Bob Barker Price is Right". The more criteria typed in, the narrower the result. I bind the results to a GridView. Moving forward, what all do I need to do? From a high level, I assume I need to:

    1) Go to Project Properties -- Advanced Compiler Settings and change the target framework to 3.5 from 2.0.
    2) Add the reference to System.Xml.Linq and add the Imports statements to the classes.

    I'm not sure what the best approach is after that. I assume I use XDocument.Load, and then my search subroutine runs against the XDocument. Do I just do the standard LINQ query for this sort of repeated search? Like so:

        var people = from phonebook in doc.Root.Elements("phonebook")
                     where phonebook.Element("userid") = "whatever"
                     select phonebook;

    Any tips on how to best implement this?
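
    Roughly, yes, although in VB.NET the query syntax differs a little from the C#-style snippet above. A sketch of the multi-term filtering in VB (the element name, file name, and grid binding are assumptions based on the question):

        ' Load once, then reuse the document for every keystroke:
        Dim doc = XDocument.Load("phonebook.xml")

        Dim terms = searchText.Split(" "c)
        Dim people = From person In doc.Root.Elements("person")
                     Where terms.All(Function(t) person.Value.Contains(t))
                     Select person

        grid.DataSource = people.ToList()

    Each additional term just adds another Contains test, which matches the "keep narrowing" behavior described.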

  • How can I emulate Vim's * search in GNU Emacs?

    - by rq
    In Vim the * key in normal mode searches for the word under the cursor. In GNU Emacs the closest native equivalent would be:

        C-s C-w

    But that isn't quite the same. It opens up the incremental-search minibuffer and copies from the cursor in the current buffer to the end of the word. In Vim you'd search for the whole word, even if you are in the middle of the word when you press *. I've cooked up a bit of elisp to do something similar:

        (defun find-word-under-cursor (arg)
          (interactive "p")
          (if (looking-at "\\<")
              ()
            (re-search-backward "\\<" (point-min)))
          (isearch-forward))

    That trots backwards to the start of the word before firing up isearch. I've bound it to C-+, which is easy to type on my keyboard and similar to *, so when I type C-+ C-w it copies from the start of the word to the search minibuffer. However, this still isn't perfect. Ideally it would regexp-search for "\<word\>" so as not to show partial matches (searching for the word "bar" shouldn't match "foobar", just "bar" on its own). I tried using search-forward-regexp and concat'ing "\\<" and "\\>" around the word, but this doesn't wrap in the file, doesn't highlight matches, and is generally pretty lame. An isearch-* function seems the best bet, but these don't behave well when scripted. Any ideas? Can anyone offer any improvements to the bit of elisp? Or is there some other way that I've overlooked?
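
    For what it's worth, newer Emacsen ship this behavior out of the box: isearch-forward-symbol-at-point (bound to M-s . since Emacs 24.4) starts an isearch for the symbol at point with symbol boundaries, much like Vim's *. A one-line sketch to put it on the same key:

        ;; assumes Emacs 24.4 or later
        (global-set-key (kbd "C-+") #'isearch-forward-symbol-at-point)

    On older Emacsen the usual trick is to seed a regexp isearch with (concat "\\_<" (regexp-quote word) "\\_>"), which keeps the wrapping and match-highlighting behavior that a plain re-search-forward loses.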

  • Copy recordset data into multiple sheets to avoid problem of maximum rows limit in Excel VBA

    - by Sam
    I am developing a reporting application in Excel/VBA 2003. The VBA code sends a search query to a database and gets the data through a recordset, which is then copied to one of the Excel sheets. The retrieved data looks like this:

        ProductID | DateProcessed | State
        ----------|---------------|--------------------
        1         | 1/1/2010      | Picked Up
        1         | 1/1/2010      | Forward To Approver
        1         | 1/2/2010      | Approver Picked Up
        1         | 1/3/2010      | Approval Completed
        2         | 1/1/2010      | Picked Up
        3         | 1/2/2010      | Picked Up
        3         | 1/2/2010      | Forward To Approver

    The problem is that the data retrieved from the search query is so large that it goes above the Excel row limit (65536 rows in Excel 2003). So I want to split this data into two Excel sheets. While splitting the data I want to ensure that the data for the same product remains in one sheet. For example, if the last record in the above result set is the 65537th record, then I also want to move all records for product 3 into the new sheet. So sheet 1 will contain records for product IDs 1 and 2 with total records = 65534, and sheet 2 will contain records for product ID 3 with total records = 2. How can I achieve this in VBA? If it is not possible, is there any alternative solution? Thanks in advance!
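
    One way is to buffer one product's rows at a time and only flush the buffer to the current sheet if it fits, starting a new sheet otherwise. A sketch (field names and the sort order by ProductID are taken from the question; error handling is omitted):

        Sub CopyRecordsetSplit(rs As ADODB.Recordset)
            Const MAX_ROWS As Long = 65536
            Dim ws As Worksheet, nextRow As Long
            Dim buf As Collection, curID As Variant, i As Long

            Set ws = Worksheets.Add
            nextRow = 1
            Do While Not rs.EOF
                ' Collect all rows for the current product
                curID = rs!ProductID
                Set buf = New Collection
                Do While Not rs.EOF
                    If rs!ProductID <> curID Then Exit Do
                    buf.Add Array(rs!ProductID, rs!DateProcessed, rs!State)
                    rs.MoveNext
                Loop
                ' Start a new sheet if this product's block would overflow
                If nextRow + buf.Count - 1 > MAX_ROWS Then
                    Set ws = Worksheets.Add
                    nextRow = 1
                End If
                For i = 1 To buf.Count
                    ws.Cells(nextRow, 1) = buf(i)(0)
                    ws.Cells(nextRow, 2) = buf(i)(1)
                    ws.Cells(nextRow, 3) = buf(i)(2)
                    nextRow = nextRow + 1
                Next i
            Loop
        End Sub

    This keeps each ProductID group intact, at the cost of holding one group in memory at a time.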

  • git rebase onto remote updates

    - by Blake Chambers
    I work with a small team that uses git for source code management. Recently we have been using topic branches to keep track of features, then merging them into master locally and pushing to a central git repository on a remote server. This works great when no changes have been made in master: I create my topic branch, commit it, merge it into master, then push. Hooray. However, if someone has pushed to origin before I do, my commits are not fast-forward, and thus a merge commit ensues. This also happens when a topic branch needs to merge with master locally to ensure my changes work with the code as of now. So we end up with merge commits everywhere and a git log rivaling a friendship bracelet. So rebasing is the obvious choice. What I would like is to:

    - create topic branches holding several commits
    - checkout master and pull (fast-forward, because I haven't committed to master)
    - rebase the topic branches onto the new head of master
    - rebase master against the topics (so the topics start at master's head), bringing master up to my topic head

    My way of doing this currently is listed below:

        git checkout master
        git rebase master topic_1
        git rebase topic_1 topic_2
        git checkout master
        git rebase topic_2
        git branch -d topic_1 topic_2

    Is there a faster way to do this?
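
    A small simplification, sketched under the assumption that topic_2 was branched from topic_1: the final "git rebase topic_2" on master is really just a fast forward, which a merge expresses more directly and fails loudly if that ever stops being true:

        git checkout master && git pull
        git rebase master topic_1
        git rebase topic_1 topic_2
        git checkout master
        git merge --ff-only topic_2    # same end state as "git rebase topic_2"
        git branch -d topic_1 topic_2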

  • Forwarding keypresses in GTK

    - by dguaraglia
    I'm writing a bit of code for a Gedit plugin. I'm using Python and the interface (obviously) is GTK. The issue I'm having is quite simple: I have a search box (a gtk.Entry) and right below it a results box (a gtk.TreeView). Right after you type something in the search box you are presented with a bunch of results, and I would like the user to be able to press the Up/Down keys to select one, press Enter to choose it, and be done. Thing is, I can't seem to find a way to forward the Up/Down keypress to the TreeView. Currently I have this piece of code:

        def __onSearchKeyPress(self, widget, event):
            """ Forward up and down keys to the tree. """
            if event.keyval in [gtk.keysyms.Up, gtk.keysyms.Down]:
                print "pressed up or down"
                e = gtk.gdk.Event(gtk.gdk.KEY_PRESS)
                e.keyval = event.keyval
                e.window = self.browser.window
                e.send_event = True
                self.browser.emit("key-press-event", e)
                return True

    I can clearly see I'm receiving the right kind of event, but the event I'm sending gets ignored by the TreeView. Any ideas? Thanks in advance, people.
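
    An alternative that avoids synthesizing events altogether is to move the TreeView's cursor directly from the Entry's handler. A sketch (self.browser is assumed to be the gtk.TreeView from the question):

        def __onSearchKeyPress(self, widget, event):
            """Move the tree's selection on Up/Down instead of forwarding the event."""
            if event.keyval in (gtk.keysyms.Up, gtk.keysyms.Down):
                path, _col = self.browser.get_cursor()
                row = path[0] if path else 0
                row += 1 if event.keyval == gtk.keysyms.Down else -1
                nrows = len(self.browser.get_model())
                self.browser.set_cursor((max(0, min(row, nrows - 1)),))
                return True   # stop the Entry from also handling the key
            return False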

  • How to intercept touch events globally?

    - by mystify
    I have a view which is sometimes covered by some other views. However, if the user slides a finger across the screen, I want to slide that underlying view across the screen too. I could start making custom views for all those covering subviews and forward all kinds of touch events, but that's somewhat cumbersome. Maybe there's some kind of notification or another way that a UIView or UIControl subclass can be aware of touch events happening right now, no matter where they are. In short: I need a UIView or UIControl subclass which knows about any touch events happening on the entire screen. Or, if that's not possible, at least about any touch events happening above itself in the same underlying superview. Another description: there are 20 views, all residing inside the same superview. The first view is covered by the 19 others. But if the user slides across the screen, that first view must slide too, so it must be aware of touch events. Is there any better solution than making all 19 views forward touch events? (Yes, all 19 views respond to touch events in this example.)
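
    One well-known technique is to observe every touch at the window level by overriding sendEvent, then broadcast to whoever cares. A sketch in Swift (the notification name and class name are made up for the example):

        import UIKit

        class TouchSpyWindow: UIWindow {
            override func sendEvent(_ event: UIEvent) {
                if event.type == .touches {
                    // Every touch in the app passes through here before delivery.
                    NotificationCenter.default.post(name: .globalTouch, object: event)
                }
                super.sendEvent(event)   // normal hit-testing and delivery continue
            }
        }

        extension Notification.Name {
            static let globalTouch = Notification.Name("globalTouch")  // assumed name
        }

    Once the app's window is an instance of this class, the covered view just subscribes to the notification and updates its position; no per-subview forwarding is needed.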

  • Servlet that starts a thread only once for every visitor

    - by user858749
    Hey, I want to implement a Java servlet that starts a thread only once for every single user. Even on refresh it should not start again. My last approach brought me some trouble, so no code ^^. Any suggestions for the layout of the servlet?

        public class LoaderServlet extends HttpServlet {
            // The thread to load the needed information
            private LoaderThread loader;
            // The last.fm account
            private String lfmaccount;

            public LoaderServlet() {
                super();
                lfmaccount = "";
            }

            @Override
            protected void doPost(HttpServletRequest request, HttpServletResponse response)
                    throws ServletException, IOException {
                if (loader != null) {
                    response.setContentType("text/plain");
                    response.setHeader("Cache-Control", "no-cache");
                    PrintWriter out = response.getWriter();
                    out.write(loader.getStatus());
                    out.flush();
                    out.close();
                } else {
                    loader = new LoaderThread(lfmaccount);
                    loader.start();
                    request.getRequestDispatcher("WEB-INF/pages/loader.jsp").forward(request, response);
                }
            }

            @Override
            protected void doGet(HttpServletRequest request, HttpServletResponse response)
                    throws ServletException, IOException {
                if (lfmaccount.isEmpty()) {
                    lfmaccount = request.getSession().getAttribute("lfmUser").toString();
                }
                request.getRequestDispatcher("WEB-INF/pages/loader.jsp").forward(request, response);
            }
        }

    The JSP uses Ajax to regularly post to the servlet and get the status. The thread just runs for about 3 minutes, crawling some last.fm data.
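
    One structural point worth keeping in mind: the container creates a single LoaderServlet instance shared by all users, so instance fields like loader and lfmaccount are effectively global. A sketch of keeping the thread per-user by storing it in the HttpSession instead:

        HttpSession session = request.getSession();
        LoaderThread loader = (LoaderThread) session.getAttribute("loader");
        if (loader == null) {                       // first request for this user
            loader = new LoaderThread(session.getAttribute("lfmUser").toString());
            loader.start();
            session.setAttribute("loader", loader); // survives refreshes of the page
        }
        // Subsequent requests (including refreshes) reuse the same thread:
        response.getWriter().write(loader.getStatus());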

  • Mysterious HttpSession and session-config dependency

    - by OneMoreVladimir
    Good day. I'm developing a Java web app with Servlets/JSP using Tomcat 7.0. During a request from the client I put an object into the session and use a forward. After the forward, processing the same request, the object can be retrieved if the secure parameter is false; otherwise it is not stored in the session.

        <session-config>
          <session-timeout>15</session-timeout>
          <cookie-config>
            <http-only>true</http-only>
            <secure>true</secure>
          </cookie-config>
          <tracking-mode>COOKIE</tracking-mode>
        </session-config>

    I've figured out that "...cookies can be created with the 'secure' flag, which ensures that the browser will never transmit the specified cookie over non-SSL...". I've configured Tomcat to use SSL, but that hasn't helped. Changing the tracking mode to SSL hasn't helped either. How do session-config and the HttpSession object correlate in this case? What could be the problem?

  • Can Drupal Taxonomy module be used to categorize court records and briefs?

    - by DKinzer
    I'm currently working on a project that involves moving a database of documents for court records and briefs over to a Drupal environment. One of the problems we are faced with is how to index these documents. In our court district, records and briefs all have a docket number which is assigned to a case. The interesting thing is that when multiple cases merge, the docket numbers associated with the cases become synonymous:

        Case 1: documents have Docket No. A
        Case 2: documents have Docket No. B
        If Cases 1 and 2 merge, then Docket No. A = Docket No. B

    My first inclination is to create a Docket vocabulary and have the terms of this taxonomy be the docket numbers. I am hoping to take advantage of the fact that terms can be synonymous. I understand that there are several functions in the Taxonomy module that I may be able to take advantage of, including:

        taxonomy_get_synonyms
        taxonomy_get_related

    But I'm having problems convincing my colleagues that this is the way to go, and frankly I'm not certain it's the right solution either. If anyone has had a similar issue and can offer some guidance as to how to move forward, I would greatly appreciate it. Thanks! D

    I've asked a related question (which I would also need to answer in order to move forward with this solution): http://stackoverflow.com/questions/2656247/can-drupal-terms-in-different-taxonomies-be-synonymous
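
    For what it's worth, a sketch of how the lookup might go in the Drupal 6 API if the synonym route is taken (the surrounding document query is an assumption about the content model):

        <?php
        // Given the term for one docket number, collect every docket number
        // that now refers to the same (merged) case.
        $dockets = taxonomy_get_synonyms($term->tid);
        $dockets[] = $term->name;
        // ...then filter document nodes against the whole set of numbers.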

  • Write transparent HTTP Proxy script in PHP

    - by Leo Izen
    Is there an easy forwarding/transparent PHP proxy script that I can host on my web server? These are my conditions:

    - I'm using free web hosting, so I have pretty much no control over my machine. Otherwise I could use Perl's HTTP::Proxy module. This means no root password. It does run PHP, though.
    - I already have a server running on port 80. What I mean is I would like to put a PHP script as index.php on my server that will forward all requests.
    - I don't want a script like PHProxy or Glype where I go to the site and then enter a URL. I want a server, so I can enter proxy.example.com:80 in Firefox's or IE's or whatever's proxy settings and it will forward all requests to the server.
    - Preferably (though not fatal if not possible) I would like it to pass on the USER_AGENT environment variable (that's the browser) instead of setting itself as the USER_AGENT.
    - I can't start a new daemon; my server won't allow it.

    Is there a script that will do this? If so, which?
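
    A very rough sketch of the GET-only core of such a script, under two assumptions worth verifying first: when a browser talks to a configured proxy, the request line carries the absolute URL (which PHP exposes via REQUEST_URI on typical Apache hosts), and the host allows outbound HTTP from PHP (allow_url_fopen). POST bodies, header passthrough, and error handling are all omitted:

        <?php
        // index.php acting as a naive forwarding proxy for GET requests
        $url = $_SERVER['REQUEST_URI'];   // absolute URL in proxy-style requests
        $ctx = stream_context_create(array('http' => array(
            'user_agent' => $_SERVER['HTTP_USER_AGENT'],  // pass the browser's UA through
        )));
        echo file_get_contents($url, false, $ctx);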

  • SQL Server 2008, Books Online, and old documentation...

    - by Chris J
    [I have no idea if stackoverflow really is the right place for this, but I don't know how many devs on here run into MSI issues with SQL Server; I suggest SuperUser or ServerFault if folk think it's better on either of those.]

    About a year ago, when we were looking at moving our codebase forward and migrating to SQL Server 2008, I pulled down a copy of Books Online from the MSDN. I reviewed it, did background research, fed results upstream, grabbed Express and tinkered with that. Then we got the nod to move forward (hurrah!) this past couple of weeks. So, armed with Developer Edition and running through the install, I've since found out I've zapped the Books Online MSI, no-one's got a copy of it, and Microsoft only have a later version (Oct 2009) available, so I'm damned if I can update my SQL Server fully and properly... {mutter grumble}. Does anyone know if old versions of Books Online are available for download anywhere? Poking around the Microsoft download centre I can't find it, and neither is my google-fu finding it. For reference, I'm looking for SQLServer2008_BOL_August2008_ENU.msi ... This may just be a case of good ol' manually deleting the files and (trying to) clean up the registry :-(

  • JavaScript Fails to load content correctly on start

    - by Gaz_Edge
    I have inherited an image gallery constructed using JavaScript. I believe it uses a variation of jQuery, but it seems to have been heavily edited. The gallery loads the first image on page load. Each image has comments below it. There are forward and backward arrows that move between images. The problem I have is that if there is a comment on image one, it is not loaded on page start; the only way to get it to display is to click forward to the next image, then click back to image one. The comment is then shown. All the comments are loaded into a script in the header under:

        var sent_comments = [{COMMENTS}]

    The comment HTML code on loading is as follows:

        <div id="comments_container">
          <div id="comments">
          </div>
        </div>

    Once I have clicked to image two and then back to image one, the code is:

        <div id="comments_container">
          <div id="comments">
            SOME COMMENT
          </div>
        </div>

    The previous-image button is:

        <a class="prev save_state" href="#">
          <img src="/next.png?" alt="Next">
        </a>

    I'm new to JavaScript, so I'm not really sure what I will need to do to ensure the comment is loaded for the first image when the page loads. Code here: http://static.albumexposure.net/assets/presentation-9f47b054d96f9ec2459e89619a5fe9b4.js http://static.albumexposure.net/assets/presentations/book-699d82a2beb39df264e54c5d2578e0e4.js
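
    The symptom suggests the comment is only rendered inside the next/previous click handlers and never on initial load. A guess at the shape of the fix (jQuery and the sent_comments array come from the question; the exact element structure is the gallery's own, so treat this as a sketch):

        $(document).ready(function () {
            // Render image one's comment immediately, instead of waiting
            // for the first navigation click to trigger it.
            if (sent_comments.length > 0) {
                $('#comments').html(sent_comments[0]);
            }
        });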

  • Internal typedef and circular dependency

    - by bcr
    I have two classes whose functions take typedef'd pointers to each other as return values and parameters. I.e.:

        class Segment;

        class Location : public Fwk::NamedInterface {
        public:
            // ===== Class Typedefs =====
            typedef Fwk::Ptr<Location const> PtrConst;
            typedef Fwk::Ptr<Location> Ptr;
            // ===== Class Typedefs End =====
            void segmentIs(Segment::Ptr seg);
            /* ... */
        };

    and

        class Location;

        class Segment : public Fwk::NamedInterface {
        public:
            // ===== Class Typedefs =====
            typedef Fwk::Ptr<Segment const> PtrConst;
            typedef Fwk::Ptr<Segment> Ptr;
            // ===== Class Typedefs End =====
            void locationIs(Location::Ptr seg);
            /* ... */
        };

    This understandably generated compiler errors, which the forward declarations of the respective classes don't fix. How can I forward-declare the Ptr and PtrConst typedefs while keeping these typedefs internal to the class (i.e., I would like to write Location::Ptr to refer to the Location pointer type)? Thanks, folks!
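
    A nested typedef can't be reached through a forward declaration: naming Segment::Ptr requires the complete Segment class. The usual workaround, sketched below, is to spell out the smart-pointer template in the header (assuming Fwk::Ptr, like most reference-counted pointers, can be declared over an incomplete type):

        class Segment;  // forward declaration is enough for Fwk::Ptr<Segment>

        class Location : public Fwk::NamedInterface {
        public:
            typedef Fwk::Ptr<Location const> PtrConst;
            typedef Fwk::Ptr<Location> Ptr;
            void segmentIs(Fwk::Ptr<Segment> seg);  // not Segment::Ptr here
            /* ... */
        };

    Inside the .cpp files, where both headers are included, Segment::Ptr and Location::Ptr can be used freely.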

  • One controller with multiple models? Am I doing this correctly?

    - by user363243
    My web app, up until this point, has been fairly straightforward. I have Users, Contacts, Appointments, and a few other things to manage. All of these are easy: it's just one model per section, so I just did a scaffold for each and then modified the scaffolded code to fit my needs. Pretty easy... Unfortunately, I am having a problem with this next section, because I want the Financials section of my app to be more in-depth than the other sections, which I simply scaffolded. For example, when the user clicks the Contacts link on the navigation bar, it just shows a list of contacts; pretty straightforward and in line with the scaffold. However, when the user clicks the Financials link on the navigation bar, I want to show the bank accounts on the left of the page and a few of the transactions on the right. So the Financials tab will basically work with data from two models: transactions and bank_accounts. I think I should make the models (Transaction & BankAccount) and then make a controller called Financials; then I can query the models from the Financials controller and display the pages in app/views/financials/. Am I correct in this app layout? I have never worked with more than the basics of scaffolding, so I want to ensure I get this right! Thank you!
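
    That layout is a common Rails pattern: a controller doesn't have to map one-to-one to a model. A sketch (model names are taken from the question; the Rails 2-era finder syntax is an assumption about the app's version):

        # app/controllers/financials_controller.rb
        class FinancialsController < ApplicationController
          def index
            @bank_accounts = BankAccount.all                                # left column
            @transactions  = Transaction.all(:order => 'created_at DESC',
                                             :limit => 10)                  # right column
          end
        end

    app/views/financials/index.html.erb then renders both instance variables side by side.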

  • Parsing concatenated, non-delimited XML messages from TCP-stream using C#

    - by thaller
    I am trying to parse XML messages which are sent to my C# application over TCP. Unfortunately, the protocol cannot be changed; the XML messages are not delimited and no length prefix is used. Moreover, the character encoding is not fixed, but each message starts with an XML declaration (<?xml ...?>). The question is: how can I read one XML message at a time, using C#?

    Up to now, I have tried to read the data from the TCP stream into a byte array and use it through a MemoryStream. The problem is that the buffer might contain more than one XML message, or the first message may be incomplete. In these cases I get an exception when trying to parse it with XmlReader.Read or XmlDocument.Load, but unfortunately the XmlException does not really allow me to distinguish the problem (except by parsing the localized error string). I tried using XmlReader.Read and counting the number of Element and EndElement nodes. That way I know when I am finished reading the first, entire XML message. However, there are several problems. If the buffer does not yet contain the entire message, how can I distinguish that XmlException from an actually invalid, non-well-formed message? In other words, if an exception is thrown before reading the first root EndElement, how can I decide whether to abort the connection with an error, or to collect more bytes from the TCP stream? If no exception occurs, the XmlReader is positioned at the start of the root EndElement. Casting the XmlReader to IXmlLineInfo gives me the current LineNumber and LinePosition; however, it is not straightforward to get the byte position where the EndElement really ends. In order to do that, I would have to convert the byte array into a string (with the encoding specified in the XML declaration), seek to LineNumber/LinePosition, and convert that back to the byte offset. I tried to do that with StreamReader.ReadLine, but the stream reader gives no public access to the current byte position. All this seems very inelegant and non-robust. I wonder if you have ideas for a better solution. Thank you.

    EDIT: I looked around and think that the situation is as follows (I might be wrong, corrections are welcome):

    - I found no way for the XmlReader to continue parsing a second XML message (at least not if the second message has an XmlDeclaration). XmlTextReader.ResetState could do something similar, but for that I would have to assume the same encoding for all messages. Therefore I could not connect the XmlReader directly to the TCP stream.
    - After closing the XmlReader, the buffer is not positioned at the reader's last position. So it is not possible to close the reader and use a new one to continue with the next message. I guess the reason for this is that the reader could not successfully seek on every possible input stream.
    - When the XmlReader throws an exception, it cannot be determined whether it happened because of a premature EOF or because of non-well-formed XML. XmlReader.EOF is not set in case of an exception. As a workaround I derived my own memory stream, which returns the very last byte as a single byte. This way I know that the XmlReader was really interested in the last byte, and the following exception is likely due to a truncated message. (This is kinda sloppy, in that it might not detect every non-well-formed message. However, after appending more bytes to the buffer, sooner or later the error will be detected.)
    - I can cast my XmlReader to the IXmlLineInfo interface, which gives access to the LineNumber and LinePosition of the current node. So after reading the first message I remember these positions and use them to truncate the buffer. Here comes the really sloppy part, because I have to use the character encoding to get the byte position. I am sure you could find test cases for the code below where it breaks (e.g. internal elements with mixed encoding). But up to now it has worked for all my tests.

    The parser class follows here -- may it be useful (I know it's very far from perfect...):

        class XmlParser {
          private byte[] buffer = new byte[0];

          public int Length { get { return buffer.Length; } }

          // Append new binary data to the internal data buffer...
          public XmlParser Append(byte[] buffer2) {
            if (buffer2 != null && buffer2.Length > 0) {
              // I know, it's not an efficient way to do this.
              // The EofMemoryStream should handle a List<byte[]> ...
              byte[] new_buffer = new byte[buffer.Length + buffer2.Length];
              buffer.CopyTo(new_buffer, 0);
              buffer2.CopyTo(new_buffer, buffer.Length);
              buffer = new_buffer;
            }
            return this;
          }

          // MemoryStream which returns the last byte of the buffer individually,
          // so that we know that the buffering XmlReader really looked at the last
          // byte of the stream. Moreover there is an EOF marker.
          private class EofMemoryStream : Stream {
            public bool EOF { get; private set; }
            private MemoryStream mem_;

            public override bool CanSeek { get { return false; } }
            public override bool CanWrite { get { return false; } }
            public override bool CanRead { get { return true; } }
            public override long Length { get { return mem_.Length; } }
            public override long Position {
              get { return mem_.Position; }
              set { throw new NotSupportedException(); }
            }

            public override void Flush() { mem_.Flush(); }
            public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); }
            public override void SetLength(long value) { throw new NotSupportedException(); }
            public override void Write(byte[] buffer, int offset, int count) { throw new NotSupportedException(); }

            public override int Read(byte[] buffer, int offset, int count) {
              count = Math.Min(count, Math.Max(1, (int)(Length - Position - 1)));
              int nread = mem_.Read(buffer, offset, count);
              if (nread == 0) {
                EOF = true;
              }
              return nread;
            }

            public EofMemoryStream(byte[] buffer) {
              mem_ = new MemoryStream(buffer, false);
              EOF = false;
            }

            protected override void Dispose(bool disposing) {
              mem_.Dispose();
            }
          }

          // Parses the first xml message from the stream.
          // If the first message is not yet complete, it returns null.
          // If the buffer contains non-wellformed xml, it ~should~ throw an exception.
          // After reading an xml message, it pops the data from the byte array.
          public Message deserialize() {
            if (buffer.Length == 0) {
              return null;
            }
            Message message = null;
            Encoding encoding = Message.default_encoding;
            using (EofMemoryStream sbuffer = new EofMemoryStream(buffer)) {
              XmlDocument xmlDocument = null;
              XmlReaderSettings settings = new XmlReaderSettings();
              int LineNumber = -1;
              int LinePosition = -1;
              bool truncate_buffer = false;
              using (XmlReader xmlReader = XmlReader.Create(sbuffer, settings)) {
                try {
                  // Read to the first node, skipping over whitespace and comments.
                  // Don't use MoveToContent here, because it would skip the
                  // XmlDeclaration too...
                  while (xmlReader.Read() && (xmlReader.NodeType == XmlNodeType.Whitespace ||
                                              xmlReader.NodeType == XmlNodeType.Comment)) { }
                  // Check for an XML declaration.
                  // If the message has an XmlDeclaration, extract the encoding.
                  switch (xmlReader.NodeType) {
                    case XmlNodeType.XmlDeclaration:
                      while (xmlReader.MoveToNextAttribute()) {
                        if (xmlReader.Name == "encoding") {
                          encoding = Encoding.GetEncoding(xmlReader.Value);
                        }
                      }
                      xmlReader.MoveToContent();
                      xmlReader.Read();
                      break;
                  }
                  // Move to the first element.
                  xmlReader.MoveToContent();
                  // Read the entire document.
                  xmlDocument = new XmlDocument();
                  xmlDocument.Load(xmlReader.ReadSubtree());
                } catch (XmlException e) {
                  // The parsing of the xml failed. If the XmlReader did
                  // not yet look at the last byte, it is assumed that the
                  // XML is incomplete and null is returned; otherwise the
                  // exception is re-thrown.
                  if (sbuffer.EOF) {
                    return null;
                  }
                  throw e;
                }
                {
                  // Try to deserialize an internal data structure using XmlSerializer.
                  Type type = null;
                  try {
                    type = Type.GetType("my.namespace." + xmlDocument.DocumentElement.Name);
                  } catch (Exception e) {
                    // No specialized data container for this class found...
                  }
                  if (type == null) {
                    message = new Message();
                  } else {
                    // TODO: reuse the serializer...
                    System.Xml.Serialization.XmlSerializer ser =
                        new System.Xml.Serialization.XmlSerializer(type);
                    message = (Message)ser.Deserialize(new XmlNodeReader(xmlDocument));
                  }
                  message.doc = xmlDocument;
                }
                // At this point, the first XML message was successfully parsed.
                // Remember the line position of the current end element.
                IXmlLineInfo xmlLineInfo = xmlReader as IXmlLineInfo;
                if (xmlLineInfo != null && xmlLineInfo.HasLineInfo()) {
                  LineNumber = xmlLineInfo.LineNumber;
                  LinePosition = xmlLineInfo.LinePosition;
                }
                // Try to read the rest of the buffer.
                // If an exception is thrown, another xml message follows.
                // This way the xml parser can tell us that the message is finished here.
                // This would be preferred, as truncating the buffer using the line info is sloppy.
                try {
                  while (xmlReader.Read()) { }
                } catch {
                  // A second message follows. Needs the truncation workaround.
                  truncate_buffer = true;
                }
              }
              if (truncate_buffer) {
                if (LineNumber < 0) {
                  throw new Exception("LineNumber not given. Cannot truncate xml buffer");
                }
                // Convert the buffer to a string using the encoding found before
                // (or the default encoding).
                string s = encoding.GetString(buffer);
                // Seek to the line.
                int char_index = 0;
                while (--LineNumber > 0) {
                  // Recognize \r, \n and \r\n as newlines...
                  char_index = s.IndexOfAny(new char[] { '\r', '\n' }, char_index);
                  // char_index should not be -1 because LineNumber > 0; otherwise a
                  // range exception is thrown, which is appropriate.
                  char_index++;
                  if (s[char_index - 1] == '\r' && s.Length > char_index && s[char_index] == '\n') {
                    char_index++;
                  }
                }
                char_index += LinePosition - 1;
                var rgx = new System.Text.RegularExpressions.Regex(
                    xmlDocument.DocumentElement.Name + "[ \r\n\t]*\\>");
                System.Text.RegularExpressions.Match match = rgx.Match(s, char_index);
                if (!match.Success || match.Index != char_index) {
                  throw new Exception("could not find EndElement to truncate the xml buffer.");
                }
                char_index += match.Value.Length;
                // Convert the character offset back to the byte offset (for the given encoding).
                int line1_boffset = encoding.GetByteCount(s.Substring(0, char_index));
                // Remove the bytes from the buffer.
                buffer = buffer.Skip(line1_boffset).ToArray();
              } else {
                buffer = new byte[0];
              }
            }
            return message;
          }
        }

  • Accelerated C++, problem 5-6 (copying values from inside a vector to the front)

    - by Darel
    Hello, I'm working through the exercises in Accelerated C++ and I'm stuck on question 5-6. Here's the problem description (somewhat abbreviated; I've removed extraneous info):

        5-6. Write the extract_fails function so that it copies the records for the passing
        students to the beginning of students, and then uses the resize function to remove
        the extra elements from the end of students.

    (students is a vector of Student_info structures; a Student_info structure contains an individual student's name and grades.) More specifically, I'm having trouble getting the vector insert function to properly copy the passing student structures to the start of the vector students. Here's the extract_fails function as I have it so far (note it doesn't resize the vector yet, as directed by the problem description; that should be trivial once I get past my current issue):

        // Extract the students who failed from the "students" vector.
        void extract_fails(vector<Student_info>& students)
        {
            typedef vector<Student_info>::size_type str_sz;
            typedef vector<Student_info>::iterator iter;

            iter it = students.begin();
            str_sz i = 0, count = 0;

            while (it != students.end()) {
                // fgrade tests whether or not the student failed
                if (!fgrade(*it)) {
                    // if the student passed, copy to the front of the vector
                    students.insert(students.begin(), it, it);
                    // track the number of passing students (so we can properly resize the array)
                    count++;
                }
                cout << it->name << endl; // output to verify that each student is iterated to
                it++;
            }
        }

    The code compiles and runs, but the students vector isn't adding any student structures to its front. My program's output shows that the students vector is unchanged. Here's my complete source code, followed by a sample input file (I redirect input from the console by typing " < grades" after the compiled program name at the command prompt):

        #include <iostream>
        #include <string>
        #include <algorithm> // to get the declaration of `sort'
        #include <stdexcept> // to get the declaration of `domain_error'
        #include <vector>    // to get the declaration of `vector'

        // driver program for grade partitioning examples
        using std::cin;
        using std::cout;
        using std::endl;
        using std::string;
        using std::domain_error;
        using std::sort;
        using std::vector;
        using std::max;
        using std::istream;

        struct Student_info {
            std::string name;
            double midterm, final;
            std::vector<double> homework;
        };

        bool compare(const Student_info&, const Student_info&);
        std::istream& read(std::istream&, Student_info&);
        std::istream& read_hw(std::istream&, std::vector<double>&);
        double median(std::vector<double>);
        double grade(double, double, double);
        double grade(double, double, const std::vector<double>&);
        double grade(const Student_info&);
        bool fgrade(const Student_info&);
        void extract_fails(vector<Student_info>& v);

        int main()
        {
            vector<Student_info> vs;
            Student_info s;
            string::size_type maxlen = 0;

            while (read(cin, s)) {
                maxlen = max(maxlen, s.name.size());
                vs.push_back(s);
            }
            sort(vs.begin(), vs.end(), compare);

            extract_fails(vs);

            // display the new, modified vector - it should be larger than
            // the input vector, due to some student structures being
            // added to the front of the vector.
            cout << "count: " << vs.size() << endl << endl;
            vector<Student_info>::iterator it = vs.begin();
            while (it != vs.end())
                cout << it++->name << endl;
            return 0;
        }

        // Extract the students who failed from the "students" vector.
        void extract_fails(vector<Student_info>& students)
        {
            typedef vector<Student_info>::size_type str_sz;
            typedef vector<Student_info>::iterator iter;

            iter it = students.begin();
            str_sz i = 0, count = 0;

            while (it != students.end()) {
                // fgrade tests whether or not the student failed
                if (!fgrade(*it)) {
                    // if the student passed, copy to the front of the vector
                    students.insert(students.begin(), it, it);
                    // track the number of passing students (so we can properly resize the array)
                    count++;
                }
                cout << it->name << endl; // output to verify that each student is iterated to
                it++;
            }
        }

        bool compare(const Student_info& x, const Student_info& y)
        {
            return x.name < y.name;
        }

        istream& read(istream& is, Student_info& s)
        {
            // read and store the student's name and midterm and final exam grades
            is >> s.name >> s.midterm >> s.final;
            read_hw(is, s.homework); // read and store all the student's homework grades
            return is;
        }

        // read homework grades from an input stream into a `vector<double>'
        istream& read_hw(istream& in, vector<double>& hw)
        {
            if (in) {
                // get rid of previous contents
                hw.clear();
                // read homework grades
                double x;
                while (in >> x)
                    hw.push_back(x);
                // clear the stream so that input will work for the next student
                in.clear();
            }
            return in;
        }

        // compute the median of a `vector<double>'
        // note that calling this function copies the entire argument `vector'
        double median(vector<double> vec)
        {
            typedef vector<double>::size_type vec_sz;
            vec_sz size = vec.size();
            if (size == 0)
                throw domain_error("median of an empty vector");
            sort(vec.begin(), vec.end());
            vec_sz mid = size/2;
            return size % 2 == 0 ? (vec[mid] + vec[mid-1]) / 2 : vec[mid];
        }

        // compute a student's overall grade from midterm and final exam grades and homework grade
        double grade(double midterm, double final, double homework)
        {
            return 0.2 * midterm + 0.4 * final + 0.4 * homework;
        }

        // compute a student's overall grade from midterm and final exam grades
        // and vector of homework grades.
        // this function does not copy its argument, because `median' does so for us.
        double grade(double midterm, double final, const vector<double>& hw)
        {
            if (hw.size() == 0)
                throw domain_error("student has done no homework");
            return grade(midterm, final, median(hw));
        }

        double grade(const Student_info& s)
        {
            return grade(s.midterm, s.final, s.homework);
        }

        // predicate to determine whether a student failed
        bool fgrade(const Student_info& s)
        {
            return grade(s) < 60;
        }

    Sample input file:

        Moo 100 100 100 100 100 100 100 100
        Fail1 45 55 65 80 90 70 65 60
        Moore 75 85 77 59 0 85 75 89
        Norman 57 78 73 66 78 70 88 89
        Olson 89 86 70 90 55 73 80 84
        Peerson 47 70 82 73 50 87 73 71
        Baker 67 72 73 40 0 78 55 70
        Davis 77 70 82 65 70 77 83 81
        Edwards 77 72 73 80 90 93 75 90
        Fail2 55 55 65 50 55 60 65 60

    Thanks to anyone who takes the time to look at this!
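
    The empty insert looks like the culprit: students.insert(students.begin(), it, it) copies the half-open range [it, it), which contains nothing. A sketch of a single-element version (using indexes rather than iterators, since inserting into a vector invalidates iterators into it):

        void extract_fails(vector<Student_info>& students)
        {
            vector<Student_info>::size_type i = 0, count = 0;

            while (i != students.size()) {
                if (!fgrade(students[i])) {
                    students.insert(students.begin(), students[i]); // copy one record
                    ++count;
                    i += 2;   // skip past the shifted original of the record just copied
                } else
                    ++i;
            }
            students.resize(count); // keep only the passing records at the front
        }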

  • Laissez les bon temps rouler! (Microsoft BI Conference 2010)

    - by smisner
    "Laissez les bons temps rouler" is a Cajun phrase that I heard frequently when I lived in New Orleans in the mid-1990s. It means "Let the good times roll!" and encapsulates a feeling of happy expectation. As I met with many of my peers and new acquaintances at the Microsoft BI Conference last week, this phrase kept running through my mind as people spoke about their plans in their respective businesses, the benefits and opportunities that the recent releases in the BI stack are providing, and their expectations about the future of the BI stack. Notwithstanding some jabs here and there to point out the platform is neither perfect now nor will be anytime soon (along with admissions that the competitors are also not perfect), and notwithstanding several missteps by the event organizers (which I don't care to enumerate), the overarching mood at the conference was positive. It was a refreshing change from the doom and gloom hovering over several conferences that I attended in 2009. Although many people expect economic hardships to continue over the coming year or so, everyone I know in the BI field is busier than ever and expects to stay busy for quite a while. Self-Service BI Self-service was definitely a theme of the BI conference. In the keynote, Ted Kummert opened with a look back to a fairy tale vision of self-service BI that he told in 2008. At that time, the fairy tale future was a time when "every end user was able to use BI technologies within their job in order to move forward more effectively" and transitioned to the present time in which SQL Server 2008 R2, Office 2010, and SharePoint 2010 are available to deliver managed self-service BI. This set of technologies is presumably poised to address the needs of the 80% of users that Kummert said do not use BI today. He proceeded to outline a series of activities that users ought to be able to do themselves--from simple changes to a report like formatting or an addtional data visualization to integration of an additional data source. The keynote then continued with a series of demonstrations of both current and future technology in support of self-service BI. Some highlights that interested me: PowerPivot, of course, is the flagship product for self-service BI in the Microsoft BI stack. In the TechEd keynote, which was open to the BI conference attendees, Amir Netz (twitter) impressed the audience by demonstrating interactivity with a workbook containing 100 million rows. He upped the ante at the BI keynote with his demonstration of a future-state PowerPivot workbook containing over 2 billion records. It's important to note that this volume of data is being processed by a server engine, and not in the PowerPivot client engine. (Yes, I think it's impressive, but none of my clients are typically wrangling with 2 billion records at a time. Maybe they're thinking too small. This ability to work quickly with large data sets has greater implications for BI solutions than for self-service BI, in my opinion.) Amir also demonstrated KPIs for the future PowerPivot, which appeared to be easier to implement than in any other Microsoft product that supports KPIs, apart from simple KPIs in SharePoint. (My initial reaction is that we have one more place to build KPIs. Great. It's confusing enough. I haven't seen how well those KPIs integrate with other BI tools, which will be important for adoption.) One more PowerPivot feature that Amir showed was a graphical display of the lineage for calculations. 
(This is hugely practical, especially if you build up calculations incrementally. You can more easily follow the logic from calculation to calculation. Furthermore, if you need to make a change to one calculation, you can assess the impact on other calculations.) Another product demonstration will be available within the next 30 days--Pivot for Reporting Services. If you haven't seen this technology yet, check it out at www.getpivot.com. (It definitely has a wow factor, but I'm skeptical about its practicality. However, I'm looking forward to trying it out with data that I understand.) Michael Tejedor (twitter) demonstrated a feature that I think is really interesting and not emphasized nearly enough--overshadowed by PowerPivot, no doubt. That feature is the Microsoft Business Intelligence Indexing Connector, which enables search of the content of Excel workbooks and Reporting Services reports. (This capability existed in MOSS 2007, but was more cumbersome to implement. The search results in SharePoint 2010 are not only cooler, but more useful by describing whether the content is found in a table or a chart, for example.) This may yet be the dawning of the age of self-service BI - a phrase I've heard repeated from time to time over the last decade - but I think BI professionals are likely to stay busy for a long while, and need not start looking for a new line of work. Kummert repeatedly referenced strategic BI solutions in contrast to self-service BI to emphasize that self-service BI is not a replacement for the services that BI professionals provide. After all, self-service BI does not appear magically on user desktops (or whatever device they want to use). A supporting infrastructure is necessary, and grows in complexity in proportion to the need to simplify BI for users. It's one thing to hear the party line touted by Microsoft employees at the BI keynote, but it's another to hear from the people who are responsible for implementing and supporting it within an organization. Rob Collie (blog | twitter), Kasper de Jonge (blog | twitter), Vidas Matelis (site | twitter), and I were invited to join Andrew Brust (blog | twitter) as he led a Birds of a Feather session at TechEd entitled "PowerPivot: Is It the BI Deal-Changer for Developers and IT Pros?" I would single out the prevailing concern in this session as the issue of control. On one side of this issue were those who were concerned that they would lose control once PowerPivot is implemented. On the other side were those who believed that data should be freely accessible to users in PowerPivot, and even acknowledgment that users would get the data they want even if it meant they would have to manually enter into a workbook to have it ready for analysis. For another viewpoint on how PowerPivot played out at the conference, see Rob Collie's observations. Collaborative BI I have been intrigued by the notion of collaborative BI for a very long time. Before I discovered BI, I was a Lotus Notes developer and later a manager of developers, working in a software company that enabled collaboration in the legal industry. Not only did I help create collaborative systems for our clients, I created a complete project management from the ground up to collaboratively manage our custom development work. In that case, collaboration involved my team, my client contacts, and me. I was also able to produce my own BI from that system as well, but didn't know that's what I was doing at the time. 
Only in recent years has SharePoint begun to catch up with the capabilities that I had with Lotus Notes more than a decade ago. Eventually, I had the opportunity at that job to formally investigate BI as another product offering for our software, and the rest - as they say - is history. I built my first data warehouse with Scott Cameron (who has also ventured into the authoring world by writing Analysis Services 2008 Step by Step and was at the BI Conference last week where I got to reminisce with him for a bit) and that began a career that I never imagined at the time. Fast forward to 2010, and I'm still lauding the virtues of collaborative BI, if only the tools will catch up to my vision! Thus, I was anxious to see what Donald Farmer (blog | twitter) and Rita Sallam of Gartner had to say on the subject in their session "Collaborative Decision Making." As I suspected, the tools aren't quite there yet, but the vendors are moving in the right direction. One thing I liked about this session was a non-Microsoft perspective of the state of the industry with regard to collaborative BI. In addition, this session included a better demonstration of SharePoint collaborative BI capabilities than appeared in the BI keynote. Check out the video in the link to the session to see the demonstration. One of the use cases that was demonstrated was linking from information to a person, because, as Donald put it, "People don't trust data, they trust people." The Microsoft BI Stack in General A question I hear all the time from students when I'm teaching is how to know what tools to use when there is overlap between products in the BI stack. I've never taken the time to codify my thoughts on the subject, but saw that my friend Dan Bulos provided good insight on this topic from a variety of perspectives in his session, "So Many BI Tools, So Little Time." I thought one of his best points was that ideally you should be able to design in your tool of choice, and then deploy to your tool of choice. Unfortunately, the ideal is yet to become real across the platform. The closest we come is with the RDL in Reporting Services which can be produced from two different tools (Report Builder or Business Intelligence Development Studio's Report Designer), manually, or by a third-party or custom application. I have touted the idea for years (and publicly said so about 5 years ago) that eventually more products would be RDL producers or consumers, but we aren't there yet. Maybe in another 5 years. Another interesting session that covered the BI stack against a backdrop of competitive products was delivered by Andrew Brust. Andrew did a marvelous job of consolidating a lot of information in a way that clearly communicated how various vendors' offerings compared to the Microsoft BI stack. He also made a particularly compelling argument about how the existence of an ecosystem around the Microsoft BI stack provided innovation and opportunities lacking for other vendors. Check out his presentation, "How Does the Microsoft BI Stack...Stack Up?" Expo Hall I had planned to spend more time in the Expo Hall to see who was doing new things with the BI stack, but didn't manage to get very far. Each time I set out on an exploratory mission, I got caught up in some fascinating conversations with one or more of my peers. I find interacting with people that I meet at conferences just as important as attending sessions to learn something new. There were a couple of items that really caught me eye, however, that I'll share here. 
Pragmatic Works. Whether you develop SSIS packages, build SSAS cubes, or author SSRS reports (or all of the above), you really must take a look at BI Documenter. Brian Knight (twitter) walked me through the key features, and I must say I was impressed. Once you've seen what this product can do, you won't want to document your BI projects any other way. You can download a free single-user database edition, or choose from more feature-rich standard or professional editions. Microsoft Press ebooks. I also stopped by the O'Reilly Media booth to meet some folks that one of my acquisitions editors at Microsoft Press recommended. In case you haven't heard, Microsoft Press has partnered with O'Reilly Media for distribution and publishing. Apart from my interest in learning more about O'Reilly Media as an author, an advertisement in their booth caught me eye which I think is a really great move. When you buy Microsoft Press ebooks through the O'Reilly web site, you can receive it in any (or all) of the following formats where possible: PDF, epub, .mobi for Kindle and .apk for Android. You also have lifetime DRM-free access to the ebooks. As someone who is an avid collector of books, I fnd myself running out of room for storage. In addition, I travel a lot, and it's hard to lug my reference library with me. Today's e-reader options make the move to digital books a more viable way to grow my library. Having a variety of formats means I am not limited to a single device, and lifetime access means I don't have to worry about keeping track of where I've stored my files. Because the e-books are DRM-free, I can copy and paste when I'm compiling notes, and I can print pages when necessary. That's a winning combination in my mind! Overall, I was pleased with the BI conference. There were many more sessions that I couldn't attend, either because the room was full when I got there or there were multiple sessions running concurrently that I wanted to see. Fortunately, many of the sessions are accessible for viewing online at http://www.msteched.com/2010/NorthAmerica along with the TechEd sessions. You can spot the BI sessions by the yellow skyline on the title slide of the presentation as shown below. Share this post: email it! | bookmark it! | digg it! | reddit! | kick it! | live it!

  • [SOLVED] Install MythTV & 11.10 on Lenovo S12 (Intel Atom) with wireless

    - by keepitsimpleengineer
    This is how I installed Ubuntu 11.10 and MythTV client on my Lenovo S12 (Intel Atom) laptop and use it using WiFi (see additional notes at end). I did this because the upgrade from 11.04 bricked the laptop. Note that the partitions on the Lenovo standard disk were already in place for this installation. Also note that my LAN is setup for fixed IP addresses. Downloaded and burned 11.10 x86 Desktop Ubuntu CD Connected the power supply cord, LAN wire and the external DVD USB drive. Ran Windows XP and made sure performance level "Performance" was set and "Wireless" was enabled. Booted S12 from CD Disabled Networking from icon on upper left panel icon Edited Connections… "Wired connection 1" ? Set IP address, accepted default netmask and set gateway. Also set DNS server. Good idea to check "Connection Information" here to verify everything's O.K. Selected Install Ubuntu from the initial "Install" window Verified the three items were checked (required disk space available, plugged into a power source, & connected to the Internet) Selected Download updates while installing and third party software. Hit Continue… At wireless selected don't want to connect…WiFi…now. Continue… At Installation type, selected Something else. Continue… At partition tale, selected the ext4 Linux partition, set the mount point as "/", and marked for formatting. Here I selected the main disk (/sda) for installing the boot manager. Continue… Selected or verified my Time zone. Continue… Selected my keyboard layout. Continue… Filled in the who are you fields. Make sure password is required to sign in is checked. Continue… Chose a picture. Continue… I selected import no accounts. Continue… Wait as the Install creeps along. If your screen goes blank, tap the space bar ? apparently the screen saver/power plan does this. There are several progress bars. The longest was "Installing system", and it was the next to the last one. Installation Complete window appears, Restart Now… Wait as it stops, The screen blanks then the message "…remove…media…close tray…press enter" I just unplugged the USB DVD and hit enter… It was disheartening but the screen turned Ubuntu Purple-beige and nothing happened, so I help down the power key until it shut down, the pressed it again and the Grub Boot screen appeared. Select Ubuntu… 25.The screen went blank with the little flashing underscore cursor on it and the disk light would occasionally flash. I hit the enter key and eventuality Ubuntu started. After a somewhat long time the unity desktop appeared. 11.10, unlike earlier versions, retains the connection information. Check this by checking the network icon on the upper left applet panel. Here the touch-pad·mouse quit working and I had to reboot. It takes and extremely long time to boot, sometimes requiring several power off/ power on (cold boot). You can try to get the default network manager to work, but it might not, it didn't on mine for WiFi. Thanks to: Chris at URL here's what to do… disconnect your wired Internet connection. input your wireless information into network manager open a terminal (unity dash, top of icon totem, open, and make sure the ruler&pen icon on the bottom is selected, 2nd from left) type in "terminal". Might be a good idea to drag and drop the terminal icon to the terminal, it's easy to get rid of later. click to open a terminal, and type in: sudo rmmod acer_wmi && echo "blacklist acer_wmi" >> /etc/modprobe.d/blacklist.conf and hit enter. type in your password as asked. 
If you have correctly entered your WiFi information and you are near your AP, you should connect immediately. If not, see the URL above; you might need to replace network manager with "wicd", as I did with 11.04. Then update the new 11.10: in the upper left panel applet, the gear icon has a menu with a line about updating; it's the new way to invoke Update Manager. Your Lenovo S12 (Intel Atom) should now run the new Unity Ubuntu. Point your elbow at the ceiling and pat yourself on the back.

Installing the Mythbuntu client (0.24.x):

1. Open mythbuntu.org/repos (I urge you not to directly use Ubuntu Software Center for this).
2. Install Mythbuntu Repos: save the file (in ~/Downloads, the default), then run it. It will update your repositories so that you will get the proper installation sources; it will start Ubuntu Software Center to do this. Click Install… You will need your password.
3. A Debconf window will open; select by making sure the check mark is in the little box "Would you like to activate…". Forward…
4. Which version? At the time of writing the current "Stable" version was 0.24.1, so select 0.24.x… Forward…
5. Read the message, then Forward… Afterwards, delete the downloaded file.
6. Install synaptic (Unity dash, top of the icon totem; open it, make sure the ruler&pen icon on the bottom is selected, 2nd from left, and type in "synaptic"). Click on the synaptic icon; Ubuntu Software Center will open and allow you to install the synaptic package manager.
7. Open Synaptic the same way (type in "Synaptic" in the dash). It might be a good idea to drag and drop its icon to the launcher; it's easy to get rid of later. Read the intro, then close the intro window.
8. Type mythbuntu-control-centre into the Quick filter text box, then select it ("Mark for installation") by clicking the box next to its name. Marvel at the additional to-be-installed items, then select "Mark"…
9. At the top of the synaptic window click the "Apply" button. Marvel at the amount of stuff to be installed, then click "Apply". When finished, close the finished window and synaptic.
10. Open mythbuntu-control-centre (Unity dash again, type in "mythbuntu"). It might be a good idea to drag and drop the mythbuntu-control-centre icon to the launcher too.

You can now configure and install the frontend. Go down the icon totem on the right side of the window and click as needed:

- System roles → No Backend, Desktop Frontend, and Ubuntu Desktop. Apply… & Apply changes… & Password…
- MySQL Configuration → from the backend → Setup → General → Alt-N(ext) → Alt-N(ext) → Setting Access Setup PIN code: ~~~~. Input the security key and click "Test Connection"; if it passes, then Apply… & Apply… (Note: for some inexplicable reason, control centre hung on this, but when I restarted it, it was set properly.)
- Graphics drivers: when I did this, only the Broadcom wireless driver showed up. I closed without doing anything.
- Services: I enabled SSH & Samba. Apply… & Apply…
- Repositories: asked & answered.
- MythExport: pass; I believe it requires a backend on the same system.
- Proprietary Codec Support: check to enable. Apply… & Apply…
- System Updates: no action necessary; this will be part of the Ubuntu update mechanism.
- Themes and Artwork: for themes, I selected Enable/Update all. Apply… & Apply…
- Infrared & Startup behavior and Plugins: defer until you know more.

Close the software centre.
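If you prefer the command line to Synaptic, the package installation steps above can be done from a terminal instead. This is just an equivalent sketch, assuming the Mythbuntu repositories have already been activated and using only the package names named in the steps:

sudo apt-get update
sudo apt-get install synaptic mythbuntu-control-centre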
Running MythTV for the first time:

1. Open MythTV (Unity dash, top of the icon totem; make sure the ruler&pen icon on the bottom is selected, 2nd from left, and type in "mythTV"). It might be a good idea to drag and drop the MythTV icon to the launcher; it's easy to get rid of later.
2. Incorrect Group Membership: fix this by clicking "Yes"… Log out/end: do this by clicking "Yes"… For my Lenovo S12, I had to manually restart Ubuntu, still with the very long restart / no start / cold boot / reboot / pressing-the-shift-key routine.
3. Open MythTV again as above. It will open with "Select country & language"; do so. You then get a message: answer "No", hit "OK" and arrive at the Database Configuration 1/2 screen. You will need your backend password, found on the backend under Setup → General → Database Configuration 1/2 → Password. Enter it, then hit Alt-N to go to the next page.
4. Select "Use custom id…", then enter a custom ID; I use the machine's name. Hit Finish, and MythTV should start up with all default settings.
5. For the Lenovo S12, the first thing you want to do is set the playback profile to "Normal": from Setup → TV Settings → Playback → Alt-N(ext) → Alt-N(ext) → Playback Profiles (3/8), change Current Video Playback Profile to "Normal". You can fiddle with this setting later.
6. The second thing is to get the sound going: from Setup → General → Alt-N(ext) → Alt-N(ext) → Alt-N(ext) → Audio System. At the top of the screen is a button titled "Scan for audio devices"; move the highlight there and press the space bar. Then Tab down to Audio Output Device: and use the left/right arrows until "ALSA:hw:Card=Intel,DEV=0" is selected. Then Alt-N(ext) until "Finish". Now you should have sound (a couple of terminal sanity checks for this appear after the wireless notes below).

You should now have MythTV working nicely on the Lenovo S12.

Notes about wireless: running the Lenovo S12 on wireless is demanding on both power and the WiFi connection; best results will be obtained when running on power and a wired connection. I run my S12 on wireless, actually over two serial wireless links through two access points, something that is not easy to achieve:

Mythbuntu client-server (in den) ↔ wireless link 1 ↔ office LAN ↔ wireless link 2 ↔ Lenovo S12 (Ubuntu 11.10)

The office LAN is fixed-IP behind an Untangle firewall router. There is another MythTV client on an Ubuntu 10.10 computer in the office (which has always worked well).

Problem: the Mythbuntu/Win7 client hangs with frozen frames and a short segment of audio repeating.

Hardware: a Rosewill RNX-G300EX IEEE 802.11b/g PCI wireless card on the client-server, and 2 Linksys WRT54GL wireless broadband routers on the LAN for link 1 and link 2. WRT54GL firmware: DD-WRT v24-sp2 (07/22/09) voip, set up to act as an access point. Note: many people advised this was an unworkable scheme, and in probably most cases it will be.

Solution: set up DD-WRT with the following wireless settings…

- Basic → Channel: different fixed channels, at least 4 apart; I use 6 & 11.
- Basic → Sensitivity Range (ACK timing): 50.
- MAC filter → Use filter: Enable; select "Permit only clients listed to access…". This requires adding MAC addresses in "Edit MAC Filter List", and it causes the 54GLs to ignore any but the listed MAC addresses; the downside is no "guest" capability.
- Advanced → Basic rate: All.
- Advanced → CTS Protection Mode: Off.
- Advanced → Frame Burst: Enable.
- Advanced → Max associated clients: 4 for client link 2, 1 for client-server link 1.
- Advanced → AP isolation: Enable.
- Advanced → Preamble: Short.
- Advanced → Afterburner: On.
- Advanced → Wireless GUI access: Off.
- Advanced → WMM support: Off.
- Other settings: defaults for the supplied firmware.

Why I suspect this worked: the 54GL access points, with the firmware's default settings, are configured to handle a multiple-client, wide-area situation. With these mods I reconfigured them for a small-area, few-client situation; disabling Advanced WMM was probably the most important change. In addition, when the client MythTV is in use, all other users of its access point are turned off except for a Skype phone. Also, the client-server is set up to allow other connections through its LAN connection; these are used to connect the TV and disc players, which are not used when the client is in use.
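Two quick terminal sanity checks, neither part of the original post: one for the ALSA device chosen in the audio step above, one for the health of the wireless hops. "wlan0" and the access point address are assumptions; substitute whatever your system actually uses:

aplay -l                # lists ALSA cards/devices; the S12's Intel HDA should show as card "Intel"
iwconfig wlan0          # shows ESSID, bit rate, link quality and signal level for the hop
ping -c 20 192.168.1.1  # sustained ping to the access point; watch for loss or latency spikes during playback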

    Read the article

  • XNA 3D model collision is inaccurate

    - by Daniel Lopez
I am creating a classic 3D game that deals with asteroids: you have to shoot them and avoid being hit by them. I can generate the asteroids just fine, and the ship can shoot bullets just fine, but the asteroids always hit the ship even when they don't look anywhere near it. I know 2D collision very well but not 3D, so can someone please shed some light on my problem? Thanks in advance.

Code for ModelRenderer:

using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Audio;
using Microsoft.Xna.Framework.Content;
using Microsoft.Xna.Framework.GamerServices;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Input;
using Microsoft.Xna.Framework.Media;

namespace _3D_Asteroids
{
    class ModelRenderer
    {
        private float aspectratio;
        private Model model;
        private Vector3 camerapos;
        private Vector3 modelpos;
        private Matrix rotationy;
        float radiansy = 0;
        private bool isalive;

        public ModelRenderer(Model m, float AspectRatio, Vector3 initial_pos, Vector3 initialcamerapos)
        {
            isalive = true;
            model = m;
            if (model.Meshes.Count == 0)
            {
                throw new Exception("Invalid model because it contains zero meshes!");
            }
            modelpos = initial_pos;
            camerapos = initialcamerapos;
            aspectratio = AspectRatio;
        }

        public float RadiusOfSphere { get { return model.Meshes[0].BoundingSphere.Radius; } }
        public BoundingBox BoxBounds { get { return BoundingBox.CreateFromSphere(model.Meshes[0].BoundingSphere); } }
        public BoundingSphere SphereBounds { get { return model.Meshes[0].BoundingSphere; } }
        public Vector3 CameraPosition { set { camerapos = value; } get { return camerapos; } }
        public bool IsAlive { get { return isalive; } }
        public Vector3 ModelPosition { set { modelpos = value; } get { return modelpos; } }
        public Matrix RotationY { set { rotationy = value; } get { return rotationy; } }
        public float AspectRatio { set { aspectratio = value; } get { return aspectratio; } }

        public void RotateY(float radians)
        {
            radiansy += radians;
            rotationy = Matrix.CreateRotationY(radiansy);
        }

        public void Kill() { isalive = false; }

        public void Draw(float scale)
        {
            Matrix world;
            if (rotationy == new Matrix(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0))
            {
                world = Matrix.CreateScale(scale) * Matrix.CreateTranslation(modelpos);
            }
            else
            {
                world = rotationy * Matrix.CreateScale(scale) * Matrix.CreateTranslation(modelpos);
            }
            Matrix view = Matrix.CreateLookAt(camerapos, Vector3.Zero, Vector3.Up);
            Matrix projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.ToRadians(45.0f), this.AspectRatio, 1f, 100000f);
            foreach (ModelMesh mesh in model.Meshes)
            {
                foreach (BasicEffect effect in mesh.Effects)
                {
                    effect.World = world;
                    effect.View = view;
                    effect.Projection = projection;
                }
                mesh.Draw();
            }
        }

        public void Draw()
        {
            Matrix world;
            if (rotationy == new Matrix(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0))
            {
                world = Matrix.CreateTranslation(modelpos);
            }
            else
            {
                world = rotationy * Matrix.CreateTranslation(modelpos);
            }
            Matrix view = Matrix.CreateLookAt(camerapos, Vector3.Zero, Vector3.Up);
            Matrix projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.ToRadians(45.0f), this.AspectRatio, 1f, 100000f);
            foreach (ModelMesh mesh in model.Meshes)
            {
                foreach (BasicEffect effect in mesh.Effects)
                {
                    effect.World = world;
                    effect.View = view;
                    effect.Projection = projection;
                }
                mesh.Draw();
            }
        }
    }
}

Code for Game1:

using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Audio;
using Microsoft.Xna.Framework.Content;
using Microsoft.Xna.Framework.GamerServices;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Input;
using Microsoft.Xna.Framework.Media;

namespace _3D_Asteroids
{
    /// <summary>This is the main type for your game.</summary>
    public class Game1 : Microsoft.Xna.Framework.Game
    {
        GraphicsDeviceManager graphics;
        int score = 0, lives = 5;
        SpriteBatch spriteBatch;
        GameState gstate = GameState.OnMenuScreen;
        Menu menu = new Menu(Color.Yellow, Color.White);
        SpriteFont font;
        Texture2D background;
        ModelRenderer ship;
        Model b, a;
        List<ModelRenderer> bullets = new List<ModelRenderer>();
        List<ModelRenderer> asteriods = new List<ModelRenderer>();
        float time = 0.0f;
        int framecount = 0;
        SoundEffect effect;

        public Game1()
        {
            graphics = new GraphicsDeviceManager(this);
            graphics.PreferredBackBufferWidth = 1280;
            graphics.PreferredBackBufferHeight = 796;
            graphics.ApplyChanges();
            Content.RootDirectory = "Content";
        }

        /// <summary>
        /// Allows the game to perform any initialization it needs to before starting to run.
        /// </summary>
        protected override void Initialize()
        {
            // TODO: Add your initialization logic here
            base.Initialize();
        }

        /// <summary>
        /// LoadContent will be called once per game and is the place to load all of your content.
        /// </summary>
        protected override void LoadContent()
        {
            // Create a new SpriteBatch, which can be used to draw textures.
            spriteBatch = new SpriteBatch(GraphicsDevice);
            font = Content.Load<SpriteFont>("Fonts\\Lucida Console");
            background = Content.Load<Texture2D>("Textures\\B1_stars");
            Model p1 = Content.Load<Model>("Models\\p1_wedge");
            b = Content.Load<Model>("Models\\pea_proj");
            a = Content.Load<Model>("Models\\asteroid1");
            effect = Content.Load<SoundEffect>("Audio\\tx0_fire1");
            ship = new ModelRenderer(p1, GraphicsDevice.Viewport.AspectRatio, new Vector3(0, 0, 0), new Vector3(0, 0, 9000));
        }

        /// <summary>
        /// UnloadContent will be called once per game and is the place to unload all content.
        /// </summary>
        protected override void UnloadContent() { }

        /// <summary>
        /// Allows the game to run logic such as updating the world,
        /// checking for collisions, gathering input, and playing audio.
        /// </summary>
        /// <param name="gameTime">Provides a snapshot of timing values.</param>
        protected override void Update(GameTime gameTime)
        {
            KeyboardState state = Keyboard.GetState(PlayerIndex.One);
            switch (gstate)
            {
                case GameState.OnMenuScreen:
                {
                    if (state.IsKeyDown(Keys.Enter))
                    {
                        switch (menu.SelectedChoice)
                        {
                            case MenuChoices.Play: { gstate = GameState.GameStarted; break; }
                            case MenuChoices.Exit: { this.Exit(); break; }
                        }
                    }
                    if (state.IsKeyDown(Keys.Down)) { menu.MoveSelectedMenuChoiceDown(gameTime); }
                    else if (state.IsKeyDown(Keys.Up)) { menu.MoveSelectedMenuChoiceUp(gameTime); }
                    else { menu.KeysReleased(); }
                    break;
                }
                case GameState.GameStarted:
                {
                    foreach (ModelRenderer bullet in bullets)
                    {
                        if (bullet.ModelPosition.X < (ship.ModelPosition.X + 4000) &&
                            bullet.ModelPosition.Z < (ship.ModelPosition.X + 4000) &&
                            bullet.ModelPosition.X > (ship.ModelPosition.Z - 4000) &&
                            bullet.ModelPosition.Z > (ship.ModelPosition.Z - 4000))
                        {
                            bullet.ModelPosition += (bullet.RotationY.Forward * 120);
                        }
                        else if (collidedwithasteriod(bullet)) { bullet.Kill(); }
                        else { bullet.Kill(); }
                    }
                    foreach (ModelRenderer asteroid in asteriods)
                    {
                        if (ship.SphereBounds.Intersects(asteroid.BoxBounds))
                        {
                            lives -= 1;
                            asteroid.Kill(); // This always hits no matter where the ship goes.
                        }
                        else
                        {
                            asteroid.ModelPosition -= (asteroid.RotationY.Forward * 50);
                        }
                    }
                    for (int index = 0; index < asteriods.Count; index++)
                    {
                        if (asteriods[index].IsAlive == false) { asteriods.RemoveAt(index); }
                    }
                    for (int index = 0; index < bullets.Count; index++)
                    {
                        if (bullets[index].IsAlive == false) { bullets.RemoveAt(index); }
                    }
                    if (state.IsKeyDown(Keys.Left))
                    {
                        ship.RotateY(0.1f);
                        if (state.IsKeyDown(Keys.Space))
                        {
                            if (time < 17)
                            {
                                firebullet();
                                //effect.Play();
                            }
                        }
                        else { time = 0; }
                    }
                    else if (state.IsKeyDown(Keys.Right))
                    {
                        ship.RotateY(-0.1f);
                        if (state.IsKeyDown(Keys.Space))
                        {
                            if (time < 17)
                            {
                                firebullet();
                                //effect.Play();
                            }
                        }
                        else { time = 0; }
                    }
                    else if (state.IsKeyDown(Keys.Up))
                    {
                        ship.ModelPosition += (ship.RotationY.Forward * 50);
                        if (state.IsKeyDown(Keys.Space))
                        {
                            if (time < 17)
                            {
                                firebullet();
                                //effect.Play();
                            }
                        }
                        else { time = 0; }
                    }
                    else if (state.IsKeyDown(Keys.Space))
                    {
                        time += gameTime.ElapsedGameTime.Milliseconds;
                        if (time < 17)
                        {
                            firebullet();
                            //effect.Play();
                        }
                    }
                    else { time = 0.0f; }
                    if ((framecount % 60) == 0)
                    {
                        createasteroid();
                        framecount = 0;
                    }
                    framecount++;
                    break;
                }
            }
            base.Update(gameTime);
        }

        void firebullet()
        {
            if (bullets.Count < 3)
            {
                ModelRenderer bullet = new ModelRenderer(b, GraphicsDevice.Viewport.AspectRatio, ship.ModelPosition, new Vector3(0, 0, 9000));
                bullet.RotationY = ship.RotationY;
                bullets.Add(bullet);
            }
        }

        void createasteroid()
        {
            if (asteriods.Count < 2)
            {
                Random random = new Random();
                float z = random.Next(-13000, -11000);
                float x = random.Next(-9000, -8000);
                Random random2 = new Random();
                int degrees = random.Next(0, 45);
                float radians = MathHelper.ToRadians(degrees);
                ModelRenderer asteroid = new ModelRenderer(a, GraphicsDevice.Viewport.AspectRatio, new Vector3(x, 0, z), new Vector3(0, 0, 9000));
                asteroid.RotateY(radians);
                asteriods.Add(asteroid);
            }
        }

        /// <summary>This is called when the game should draw itself.</summary>
        /// <param name="gameTime">Provides a snapshot of timing values.</param>
        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);
            switch (gstate)
            {
                case GameState.OnMenuScreen:
                {
                    spriteBatch.Begin();
                    spriteBatch.Draw(background, Vector2.Zero, Color.White);
                    menu.DrawMenu(ref spriteBatch, font, new Vector2(GraphicsDevice.Viewport.Width / 2, GraphicsDevice.Viewport.Height / 2) - new Vector2(50f), 100f);
                    spriteBatch.End();
                    break;
                }
                case GameState.GameStarted:
                {
                    spriteBatch.Begin();
                    spriteBatch.Draw(background, Vector2.Zero, Color.White);
                    spriteBatch.DrawString(font, "Score: " + score.ToString() + "\nLives: " + lives.ToString(), Vector2.Zero, Color.White);
                    spriteBatch.End();
                    ship.Draw();
                    foreach (ModelRenderer bullet in bullets) { bullet.Draw(); }
                    foreach (ModelRenderer asteroid in asteriods) { asteroid.Draw(0.1f); }
                    break;
                }
            }
            base.Draw(gameTime);
        }

        bool collidedwithasteriod(ModelRenderer bullet)
        {
            foreach (ModelRenderer asteroid in asteriods)
            {
                if (bullet.SphereBounds.Intersects(asteroid.BoxBounds))
                {
                    score += 10;
                    asteroid.Kill();
                    return true;
                }
            }
            return false;
        }
    }
}
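A likely culprit, offered as a guess rather than the poster's confirmed fix: SphereBounds and BoxBounds return model.Meshes[0].BoundingSphere untransformed, i.e. in model space. Every ship and asteroid therefore reports a bounding volume sitting near the origin no matter where the model is drawn, so the ship/asteroid test fires constantly. A minimal sketch of world-space bounds, reusing the ModelRenderer fields above; the scale parameter is an addition and should match the factor passed to Draw (0.1f for the asteroids, 1f for everything else):

// Added to ModelRenderer: the mesh bounding sphere moved into world space.
// Approximation: the centre is scaled and translated but not rotated, which
// is fine for meshes modelled roughly around their own origin.
public BoundingSphere WorldSphereBounds(float scale)
{
    BoundingSphere s = model.Meshes[0].BoundingSphere;
    return new BoundingSphere(s.Center * scale + modelpos, s.Radius * scale);
}

The collision tests in Update would then compare these world-space spheres instead:

// Ship vs. asteroid (asteroids are drawn at scale 0.1f):
if (ship.WorldSphereBounds(1f).Intersects(asteroid.WorldSphereBounds(0.1f)))
{
    lives -= 1;
    asteroid.Kill();
}

Sphere-vs-sphere is also usually a tighter first test than BoundingBox.CreateFromSphere, which inflates the volume to the sphere's circumscribing box; for multi-mesh models, BoundingSphere.CreateMerged can combine the per-mesh spheres before transforming.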

    Read the article

< Previous Page | 69 70 71 72 73 74 75 76 77 78 79 80  | Next Page >