Search Results

  • Change column height as other column gets longer

    - by Infiniti Fizz
    Hi, I have tried a few things to solve this problem but I can't seem to get it working. The problem is that I have 2 columns as the main part of my website, right and left. On some pages there is a lot of text in the left column, so it becomes very long; the problem is that the right column doesn't elongate with the left column. Both columns have the same background colour, and a footer is displayed across the width of both columns after the columns finish. My first thought was to put both columns inside a div which would have the same background colour as them, so that if the left column became 1500px long in total and the right column stayed at around 600px (due to the elements inside it) then this wouldn't show, as the new, outer div would elongate along with the left column. But for some reason this didn't work. Could it be because the columns are floated? Does anyone have any other ideas? Here is the website (obviously not finished yet): Beansheaf Hotel. I have chosen a page where there is a lot of text in the left column so the problem is apparent. Thanks in advance, InfinitiFizz
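
    The floats are indeed the likely cause: floated children are taken out of normal flow, so a plain wrapper div collapses to zero height and its background never stretches. A minimal sketch of the usual fix (assuming both columns sit in a shared wrapper; the colour value is a placeholder):

        <div id="wrapper">
            <div id="left">long text...</div>
            <div id="right">shorter content...</div>
        </div>

        /* overflow: hidden makes the wrapper contain its floats,
           so its background reaches past the longer column */
        #wrapper { overflow: hidden; background: #f0e6d2; }
        #left  { float: left;  width: 600px; }
        #right { float: right; width: 300px; }

    A clearfix rule, or a clearing element before the wrapper's closing tag, achieves the same containment if overflow: hidden ends up clipping something it shouldn't.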

  • How to convert an NSString to an unsigned int in Cocoa?

    - by Dave Gallagher
    My application gets handed an NSString containing an unsigned int. NSString doesn't have an [myString unsignedIntegerValue]; method. I'd like to be able to take the value out of the string without mangling it, and then place it inside an NSNumber. I'm trying to do it like so: NSString *myUnsignedIntString = [self someMethodReturningAString]; NSInteger myInteger = [myUnsignedIntString integerValue]; NSNumber *myNSNumber = [NSNumber numberWithInteger:myInteger]; // ...put |myNumber| in an NSDictionary, time passes, pull it out later on... unsigned int myUnsignedInt = [myNSNumber unsignedIntValue]; Will the above potentially "cut off" the end of a large unsigned int since I had to convert it to NSInteger first? Or does it look OK to use? If it'll cut off the end of it, how about the following (a bit of a kludge I think)? NSString *myUnsignedIntString = [self someMethodReturningAString]; long long myLongLong = [myUnsignedIntString longLongValue]; NSNumber *myNSNumber = [NSNumber numberWithLongLong:myLongLong]; // ...put |myNumber| in an NSDictionary, time passes, pull it out later on... unsigned int myUnsignedInt = [myNSNumber unsignedIntValue]; Thanks for any help you can offer! :)
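
    One way to sidestep the truncation question entirely (a sketch, not the only approach) is to parse the string with strtoul, which never goes through a signed intermediate:

        NSString *myUnsignedIntString = [self someMethodReturningAString];
        // strtoul parses an unsigned decimal value straight from the C string
        unsigned long value = strtoul([myUnsignedIntString UTF8String], NULL, 10);
        NSNumber *myNSNumber = [NSNumber numberWithUnsignedLong:value];
        // ...put |myNSNumber| in an NSDictionary, pull it out later on...
        unsigned int myUnsignedInt = [myNSNumber unsignedIntValue];

    As for the original worry: on a 32-bit target NSInteger is a signed 32-bit int, so integerValue can indeed mangle values above INT_MAX, while the longLongValue workaround is safe because a signed 64-bit value holds the entire unsigned 32-bit range.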

  • How do I solve this IndexOutOfBoundsException in my server send/receive thread?

    - by Stefan Schouten
    I am creating a multiplayer game in Java with a server and multiple clients. Everything runs perfectly, until I press the Kick-button in the server to kick a client. Error at receive thread of server, after kicking the first person who joined out of three: java.lang.IndexOutOfBoundsException: Index: 2, Size: 2 at java.util.ArrayList.rangeCheck(ArrayList.java:604) at java.util.ArrayList.get(ArrayList.java:382) > at networktest.Server$3.run(Server.java:186) at java.lang.Thread.run(Thread.java:722) The line marked with > is the ois = new ObjectInputStream(...) call where I receive the datatype. The server kicks the first person perfectly, but removes the second one in the list too, with an error of java.lang.ClassCastException. server receive: private static Thread receive = new Thread() { @Override public void run() { ObjectInputStream ois; while (true) { for (int i = 0; i < list_sockets.size(); i++) { try { ois = new ObjectInputStream(list_sockets.get(i).getInputStream()); int receive_state = (Integer) ois.readObject(); // receive state ois = new ObjectInputStream(list_sockets.get(i).getInputStream()); byte datatype = (byte) ois.readObject(); // receive datatype if(datatype == 2){ ois = new ObjectInputStream(list_sockets.get(i).getInputStream()); ChatLine chatLine = (ChatLine) ois.readObject(); // receive ChatLine } else if (datatype == 0){ ois = new ObjectInputStream(list_sockets.get(i).getInputStream()); DataPackage dp = (DataPackage) ois.readObject(); // receive dp list_data.set(i, dp); } if (receive_state == 1) // Client Disconnected by User { disconnectClient(i); i--; } } catch (Exception ex) // Client Disconnected (Client Didn't Notify Server About Disconnecting) { System.err.println("Error @ receive:"); ex.printStackTrace(); disconnectClient(i); i--; } } try { this.sleep(3); } catch (InterruptedException ex) { Logger.getLogger(Server.class.getName()).log(Level.SEVERE, null, ex); } } } }; user send: Thread send = new Thread() { public void run() { ObjectOutputStream oos; byte datatype = 0; while (connected){ if (socket != null){ try { DataPackage dp = new DataPackage(); dp.x = Client.player.x; dp.y = Client.player.y; dp.username = username; dp.charType = charType; dp.walking = (byte)Client.player.walking; if (Client.outputChatLine.line != null) datatype = 2; else { datatype = 0; } oos = new ObjectOutputStream(socket.getOutputStream()); oos.writeObject(Integer.valueOf(Client.this.state)); // send state oos = new ObjectOutputStream(socket.getOutputStream()); oos.writeObject(Byte.valueOf(datatype)); // send datatype if (datatype == 2) { oos.reset(); oos.writeObject(Client.outputChatLine); Client.outputChatLine = new ChatLine(); } else { oos = new ObjectOutputStream(socket.getOutputStream()); oos.writeObject(dp); } if (Client.this.state == 1) { connected = false; socket = null; JOptionPane.showMessageDialog(null, "Client Disconnected", "Info", 1); System.exit(0); } } catch (Exception ex){} } try { this.sleep(2); } catch (InterruptedException ex) { Logger.getLogger(Client.class.getName()).log(Level.SEVERE, null, ex); } } } }; disconnect client method: public static void disconnectClient(int index) { try { list_clients_model.removeElementAt(index); list_client_states.remove(index); list_data.remove(index); list_sockets.remove(index); } catch (Exception ex) {} } Does anyone know how to solve this?
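
    The trace is the classic symptom of the kick (running on the UI/event thread) shrinking the lists while the receive loop is still iterating over the old size. A sketch of one defence, reusing the question's own names; both the receive loop and the kick handler must synchronize on the same lock, and iterating backwards means a removal never shifts an index that is still to be visited:

        synchronized (list_sockets) {
            // Backwards iteration: disconnectClient(i) cannot shift unvisited indices.
            for (int i = list_sockets.size() - 1; i >= 0; i--) {
                try {
                    // ... read state/datatype/payload from list_sockets.get(i) as before ...
                } catch (Exception ex) {
                    disconnectClient(i);
                }
            }
        }

    The ClassCastException on the second client fits the same picture: once the indices shift mid-iteration, a value written as one client's state is read back as another client's datatype or payload.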

  • ASP.NET MVC - trying to display images pulled from db

    - by Trey Carroll
    //Inherits="System.Web.Mvc.ViewPage<FilmFestWeb.Models.ListVideosViewModel>" <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server"> <h2>ListVideos</h2> <% foreach(BusinessObjects.Video vid in Model.VideoList){%> <div class="videoBox"> <%= Html.Encode(vid.Name) %> <img src="<%= vid.ThumbnailImage %>" /> </div> <%} %> </asp:Content> //ListVideosViewModel public class ListVideosViewModel { public IList<Video> VideoList { get; set; } } //Video public class Video { public long VideoId { get; set; } public long TeamId { get; set; } public string Name { get; set; } public string Tags { get; set; } public string TeamMembers { get; set; } public string TranscriptFileName { get; set; } public string VideoFileName { get; set; } public int TotalNumRatings { get; set; } public int CumulativeTotalScore { get; set; } public string VideoUri { get; set; } public Image ThumbnailImage { get; set; } } I am getting the "red x" that I usually associate with image file not found. I have verified that my database table shows after the stored proc that uploads the image executes. Any insight or advice would be greatly appreciated.

  • Simple typemap example in swig java

    - by celil
    I am trying to wrap a native C++ library using swig, and I am stuck at trying to convert time_t in C, to long in Java. I have successfully used swig with python, but so far I am unable to get the above typemap to work in Java. In python it looks like this %typemap(in) time_t { if (PyLong_Check($input)) $1 = (time_t) PyLong_AsLong($input); else if (PyInt_Check($input)) $1 = (time_t) PyInt_AsLong($input); else if (PyFloat_Check($input)) $1 = (time_t) PyFloat_AsDouble($input); else { PyErr_SetString(PyExc_TypeError,"Expected a large number"); return NULL; } } %typemap(out) time_t { $result = PyLong_FromLong((long)$1); } I guess the in map from Java to C would be: %typemap(in) time_t { $1 = (time_t) $input; } How would I complete the out map from C to Java? %typemap(out) time_t ??? Would I need typemaps like the ones below? %typemap(jni) %typemap(jtype) %typemap(jstype) I need this in order to wrap C functions like this: time_t manipulate_time (time_t dt);
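
    Yes, for Java the JNI-level types have to be mapped as well. A sketch of a complete set treating time_t as a Java long, following the usual SWIG/Java idiom (treat it as a starting point rather than a definitive recipe):

        %typemap(jni)    time_t "jlong"   /* type in the JNI C wrapper */
        %typemap(jtype)  time_t "long"    /* type in the intermediary Java class */
        %typemap(jstype) time_t "long"    /* type in the public Java proxy API */

        %typemap(in)  time_t { $1 = (time_t)$input; }   /* Java -> C */
        %typemap(out) time_t { $result = (jlong)$1; }   /* C -> Java */

        %typemap(javain)  time_t "$javainput"
        %typemap(javaout) time_t { return $jnicall; }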

  • Java: attribute order in .jsp getting reversed

    - by NoozNooz42
    Every single time I've read about the meta tags, the attributes were in this order for the description: <meta name="description" content="..." /> First name, then content. It's also like that in the Google Webmaster documentation. Basically, it's like that everywhere. Now in a .jsp (in XML notation) I've got the following: <meta name="description" content="${metadesc}"/> So it's name first, then content. Yet on the generated webpage, I get this: <meta content="...(200 chars or so here making it a very long line)..." name="description"/> Somehow the attributes have been reversed. Because the content follows the official W3C and Google recommendations, the content is a bit less than 200 characters long, which makes it a major pain to "visually verify" that the name attribute is correctly there (I've got to scroll). Anyway... Why are these attributes not appearing in the order defined in the .jsp? Can I force them to appear in the same order as I wrote them in my .jsp? I realize the resulting tag may be valid... But I can also imagine a lot of very creative ways to have valid tags which users would be very upset about. Does it make any sense to reverse these attributes? EDIT wow, just wow... If I reverse the attributes in my .jsp (that is, writing them in the "wrong" order), then they appear as I want them to appear in the generated web page. (Tomcat 6.0.26 btw)

  • Python print statement prints nothing with a carriage return

    - by Jonathan Sternberg
    I'm trying to write a simple tool that reads files from disc, does some image processing, and returns the result of the algorithm. Since the program can sometimes take a while, I like to have a progress bar so I know where it is in the program. And since I don't like to clutter up my command line and I'm on a Unix platform, I wanted to use the '\r' character to print the progress bar on only one line. But when I have this code here, it prints nothing. # Files is a list with the filenames for i, f in enumerate(files): print '\r%d / %d' % (i, len(files)), # Code that takes a long time I have also tried: print '\r', i, '/', len(files), Now just to make sure this worked in python, I tried this: heartbeat = 1 while True: print '\rHello, world', heartbeat, heartbeat += 1 This code works perfectly. What's going on? My understanding of carriage returns on Linux was that it would just move the cursor to the beginning of the line, and then I could overwrite old text that was written previously, as long as I don't print a newline anywhere. This doesn't seem to be happening though. Also, is there a better way to display a progress bar in a command line than what I'm currently trying to do?
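
    stdout is buffered, and without a newline nothing forces the buffer out; the tight heartbeat loop only appears to work because it fills the buffer quickly, while the slow loop never does. The fix is an explicit flush after each update. A minimal sketch:

        import sys

        for i, f in enumerate(files):
            sys.stdout.write('\r%d / %d' % (i, len(files)))
            sys.stdout.flush()  # push the text out now; no newline means no auto-flush
            # ... long-running image processing on f ...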

  • [C#][XNA 3.1] How can I host two different XNA windows inside one Windows Form?

    - by secutos
    I am making a Map Editor for a 2D tile-based game. I would like to host two XNA controls inside the Windows Form - the first to render the map; the second to render the tileset. I used the code here to make the XNA control host inside the Windows Form. This all works very well - as long as there is only one XNA control inside the Windows Form. But I need two - one for the map; the second for the tileset. How can I run two XNA controls inside the Windows Form? While googling, I came across the terms "swap chain" and "multiple viewports", but I can't understand them and would appreciate support. Just as a side note, I know the XNA control example was designed so that even if you ran 100 XNA controls, they would all share the same GraphicsDevice - essentially, all 100 XNA controls would share the same screen. I tried modifying the code to instantiate a new GraphicsDevice for each XNA control, but the rest of the code doesn't work. The code is a bit long to post, so I won't post it unless someone needs it to be able to help me. Thanks in advance.

  • Hibernate Auto-Increment Setup

    - by dharga
    How do I define an entity for the following table. I've got something that isn't working and I just want to see what I'm supposed to do. USE [BAMPI_TP_dev] GO SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO SET ANSI_PADDING ON GO CREATE TABLE [dbo].[MemberSelectedOptions]( [OptionId] [int] NOT NULL, [SeqNo] [smallint] IDENTITY(1,1) NOT NULL, [OptionStatusCd] [char](1) NULL ) ON [PRIMARY] GO SET ANSI_PADDING OFF This is what I have already that isn't working. @Entity @Table(schema="dbo", name="MemberSelectedOptions") public class MemberSelectedOption extends BampiEntity implements Serializable { @Embeddable public static class MSOPK implements Serializable { private static final long serialVersionUID = 1L; @Column(name="OptionId") int optionId; @GeneratedValue(strategy=GenerationType.IDENTITY) @Column(name="SeqNo", unique=true, nullable=false) BigDecimal seqNo; //Getters and setters here... } private static final long serialVersionUID = 1L; @EmbeddedId MSOPK pk = new MSOPK(); @Column(name="OptionStatusCd") String optionStatusCd; //More Getters and setters here... } I get the following ST. [5/25/10 15:49:40:221 EDT] 0000003d JDBCException E org.slf4j.impl.JCLLoggerAdapter error Cannot insert explicit value for identity column in table 'MemberSelectedOptions' when IDENTITY_INSERT is set to OFF. [5/25/10 15:49:40:221 EDT] 0000003d AbstractFlush E org.slf4j.impl.JCLLoggerAdapter error Could not synchronize database state with session org.hibernate.exception.SQLGrammarException: could not insert: [com.bob.proj.ws.model.MemberSelectedOption] at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:90) at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66) at org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:2285) at org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:2678) at org.hibernate.action.EntityInsertAction.execute(EntityInsertAction.java:79) at org.hibernate.engine.ActionQueue.execute(ActionQueue.java:279) at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:263) at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:167) at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321) at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:50) at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1028) at org.hibernate.impl.SessionImpl.managedFlush(SessionImpl.java:366) at org.hibernate.transaction.JDBCTransaction.commit(JDBCTransaction.java:137) at com.bcbst.bamp.ws.dao.MemberSelectedOptionDAOImpl.saveMemberSelectedOption(MemberSelectedOptionDAOImpl.java:143) at com.bcbst.bamp.ws.common.AlertReminder.saveMemberSelectedOptions(AlertReminder.java:76) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
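
    Hibernate does not apply generator strategies to columns inside a composite key, so the @GeneratedValue inside the @EmbeddedId is ignored and the INSERT carries an explicit SeqNo, which SQL Server rejects for an IDENTITY column, exactly as the stack trace says. A sketch of one workaround, assuming the IDENTITY column SeqNo can serve as the primary key on its own:

        @Entity
        @Table(name = "MemberSelectedOptions")
        public class MemberSelectedOption extends BampiEntity implements Serializable {
            @Id
            @GeneratedValue(strategy = GenerationType.IDENTITY)
            @Column(name = "SeqNo", unique = true, nullable = false)
            private Short seqNo;           // let SQL Server's IDENTITY generate this

            @Column(name = "OptionId")
            private int optionId;          // plain column, no longer part of the key

            @Column(name = "OptionStatusCd")
            private String optionStatusCd;

            // getters and setters...
        }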

  • latex padding / margin hell

    - by darren
    Hi everyone, I have been wrestling with a LaTeX table for far too long. I need a table that has centered headers, and body cells that contain text that may wrap around. Because of the wrap-around requirement, I'm using p{xxx} instead of l for specifying cell widths. The problem this causes is that cell contents are not left justified, so they look like spaced-out junk. To fix this problem I'm using \flushleft for each cell. This does left justify the contents, but puts in a ton of white space above and below the contents of the cell. Is there a way to stop \flushleft (or \center for that matter) from adding copious amounts of vertical whitespace? thanks \begin{landscape} \centering % using p{xxx} here to wrap long text instead of overflowing it \begin{longtable}{ | p{4cm} || p{3cm} | p{3cm} | p{3cm} | p{3cm} | p{3cm} |} \hline & % these are table headings. the \center is causing a ton of whitespace as well \begin{center} \textbf{HTC HD2} \end{center} & \begin{center} \textbf{Motorola Milestone} \end{center} & \begin{center} \textbf{Nokia N900} \end{center} & \begin{center} \textbf{RIM Blackberry Bold 9700} \end{center} & \begin{center} \textbf{Apple iPhone 3GS} \end{center} \\ \hline \hline % using flushleft here to left-justify, but again it is causing a ton of white space above and below cell contents. \begin{flushleft}OS / Platform \end{flushleft}& \begin{flushleft}Windows Mobile 6.5 \end{flushleft}& \begin{flushleft}Google Android 2.1 \end{flushleft}& \begin{flushleft}Maemo \end{flushleft}& \begin{flushleft}Blackberry OS 5.0 \end{flushleft}& \begin{flushleft}iPhone OS 3.1 \end{flushleft} \\ \hline
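
    flushleft and center are display environments, so each one inserts vertical display skips inside the cell. The whitespace-free equivalents are the one-shot commands \raggedright and \centering, which can be folded into the column specification with the array package. A sketch (assuming \usepackage{array} is acceptable in this document):

        \usepackage{array}  % enables the >{...} column-spec syntax

        % wrapping, left-justified cells with no extra vertical space:
        \begin{longtable}{ | >{\raggedright\arraybackslash}p{4cm} ||
                             >{\raggedright\arraybackslash}p{3cm} | ... |}

        % centered headings without the center environment:
        \multicolumn{1}{|c||}{\textbf{HTC HD2}} & \multicolumn{1}{c|}{\textbf{Motorola Milestone}} & ...

    \arraybackslash restores the meaning of \\ as the row terminator after \raggedright has redefined it.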

  • knife server create - finding lists of flavors

    - by JohnMetta
    I'm new to Chef and I think I'm missing something in reading the docs. I want to create servers using knife server create (options) but can't seem to find fully complete documentation on the options. Specifically, how do I find a mapping of server flavors to whatever knife is looking for? Given the official wiki entry for "Launch Cloud Instances with Knife," the following is an example server creation on Rackspace: knife rackspace server create 'role[webserver]' --server-name server01 --image 49 --flavor 2 Likewise, on the Knife Man Page, there are commands for EC2 server images (using --d --distro DISTRO) and for Slicehost servers (using -f --flavor FLAVOR) However, what none of the documentation I've found describes is how to translate what I want to build on Rackspace ("I want Ubuntu 10.04 LTS") into the integer that knife is seeking. It strikes me that, given the lack of a description in the documentation for how to find the flavor, this should be obvious. Thus, I think I'm missing something.
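
    If memory serves, the knife-rackspace plugin can print the valid IDs itself; something like the following (treat the exact subcommands as an assumption to verify against knife --help for your plugin version):

        # list the flavor IDs (RAM/disk sizes) the provider accepts
        knife rackspace flavor list

        # list the image IDs, e.g. the one matching "Ubuntu 10.04 LTS (lucid)"
        knife rackspace image list

    The integers in --image 49 --flavor 2 are simply rows from these two listings.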

  • Code bacteria: evolving mathematical behavior

    - by Stefano Borini
    It would not be my intention to put a link on my blog, but I don't have any other method to clarify what I really mean. The article is quite long, and it's in three parts (1,2,3), but if you are curious, it's worth the reading. A long time ago (5 years, at least) I programmed a python program which generated "mathematical bacteria". These bacteria are python objects with a simple opcode-based genetic code. You can feed them with a number and they return a number, according to the execution of their code. I generate their genetic codes at random, and apply an environmental selection to those objects producing a result similar to a predefined expected value. Then I let them duplicate, introduce mutations, and evolve them. The result is quite interesting, as their genetic code basically learns how to solve simple equations, even for values different from the training dataset. Now, this thing is just a toy. I had time to waste and I wanted to satisfy my curiosity. However, I assume that something, in terms of research, has been done... I hope I am just reinventing the wheel here. Are you aware of more serious attempts at creating in-silico bacteria like the one I programmed? Please note that this is not really "genetic algorithms". Genetic algorithms is when you use evolution/selection to improve a vector of parameters against a given scoring function. This is kind of different. I optimize the code, not the parameters, against a given scoring function.

  • Criteria API - How to get records based on collection count?

    - by Cosmo
    Hello Guys! I have a Question class in ActiveRecord with following fields: [ActiveRecord("`Question`")] public class Question : ObcykaniDb<Question> { private long id; private IList<Question> relatedQuestions; [PrimaryKey("`Id`")] private long Id { get { return this.id; } set { this.id = value; } } [HasAndBelongsToMany(typeof(Question), ColumnRef = "ChildId", ColumnKey = "ParentId", Table = "RelatedQuestion")] private IList<Question> RelatedQuestions { get { return this.relatedQuestions; } set { this.relatedQuestions = value; } } } How do I write a DetachedCriteria query to get all Questions that have at least 5 related questions (count) in the RelatedQuestions collection? For now this gives me strange results: DetachedCriteria dCriteria = DetachedCriteria.For<Question>() .CreateCriteria("RelatedQuestions") .SetProjection(Projections.Count("Id")) .Add(Restrictions.EqProperty(Projections.Id(), "alias.Id")); DetachedCriteria dc = DetachedCriteria.For<Question>("alias").Add(Subqueries.Le(5, dCriteria)); IList<Question> results = Question.FindAll(dc); Any ideas what I'm doing wrong?
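
    A sketch of one way to express "questions with at least five related questions" as a correlated count subquery (property and alias names follow the mapping above; untested, so treat it as a starting point):

        DetachedCriteria countRelated = DetachedCriteria.For<Question>("sub")
            .CreateAlias("sub.RelatedQuestions", "rq")
            .Add(Restrictions.EqProperty("sub.Id", "outer.Id")) // correlate with the outer query
            .SetProjection(Projections.Count("rq.Id"));

        DetachedCriteria atLeastFive = DetachedCriteria.For<Question>("outer")
            .Add(Subqueries.Le(5, countRelated)); // 5 <= COUNT(rq.Id)

        IList<Question> results = Question.FindAll(atLeastFive);

    The original query compares Projections.Id() of the joined row with alias.Id, which correlates question ids with related-question ids rather than counting per outer question; that would explain the strange results.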

  • Issue with XSL Criteria

    - by Rachel
    I am using the below piece of XSL to select the id of the text nodes whose content has a given index. This index value in the input will be relative to a specified node whose id value is known. The criterion to select the text node is: the text node content should contain an index, say 'i', relative to a node, say 'n', whose id value I know. 'i' and 'id of n' are received as index and nodeName from the input param, as seen in the xsl. Node 'd1e5' has the text content whose index ranges from 1 to 33. When I give an index value greater than 33 I want the below criterion to fail, but it does not, [sum((preceding::text(), .)[normalize-space()][. >> //*[@id=$nodeName]]/string-length(.)) ge $index] Input xml: <?xml version="1.0" encoding="UTF-8"?> <html xmlns="http://www.w3.org/1999/xhtml" id="d1e1"> <head id="d1e3"> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /> <title id="d1e5">Every document must have a title</title> </head> <body id="d1e9"> <h1 id="d1e11" align="center">Very Important Heading</h1> <p id="d1e13">Since this is just a sample, I won't put much text here.</p> </body> </html> XSL code used: <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:xsd="http://www.w3.org/2001/XMLSchema" exclude-result-prefixes="xsd" version="2.0"> <xsl:param name="insert-file" as="node()+"> <insert-data><data index="1" nodeName="d1e5"></data><data index="34" nodeName="d1e5"></data></insert-data> </xsl:param> <xsl:param name="nodeName" as="xsd:string" /> <xsl:variable name="main-root" as="document-node()" select="/"/> <xsl:variable name="insert-data" as="element(data)*"> <xsl:for-each select="$insert-file/insert-data/data"> <xsl:sort select="xsd:integer(@index)"/> <xsl:variable name="index" select="xsd:integer(@index)" /> <xsl:variable name="nodeName" select="@nodeName" /> <data text-id="{generate-id($main-root/descendant::text()[sum((preceding::text(), .)[normalize-space()][. >> //*[@id=$nodeName]]/string-length(.)) ge $index][1])}"> </data> </xsl:for-each> </xsl:variable> <xsl:template match="/"> <Output> <xsl:copy-of select="$insert-data" /> </Output> </xsl:template> </xsl:stylesheet> Actual output: <?xml version="1.0" encoding="UTF-8"?> <Output> <data text-id="d1t8"/> <data text-id="d1t14"/> </Output> Expected output: <?xml version="1.0" encoding="UTF-8"?> <Output> <data text-id="d1t8"/> </Output> This solution works fine if the index lies between 1 and 33. Any index value greater than 33 causes incorrect text nodes to get selected. I could not understand why the text node 'd1t14' is getting selected. Please share your thoughts.

  • Need to call original function from detoured function

    - by peachykeen
    I'm using Detours to hook into an executable's message function, but I need to run my own code and then call the original code. From what I've seen in the Detours docs, it definitely sounds like that should happen automatically. The original function prints a message to the screen, but as soon as I attach a detour it starts running my code and stops printing. The original function code is roughly: void CGuiObject::AppendMsgToBuffer(classA, unsigned long, unsigned long, int, classB); My function is: void CGuiObject_AppendMsgToBuffer( [same params, with names] ); I know the memory position the original function resides in, so using: DWORD OrigPos = 0x0040592C; DetourAttach( (void*)OrigPos, CGuiObject_AppendMsgToBuffer); gets me into the function. This code works almost perfectly: my function is called with the proper parameters. However, execution leaves my function and the original code is not called. I've tried jmping back in, but that crashes the program (I'm assuming the code Detours moved to fit the hook is responsible for the crash). Edit: I've managed to fix the first issue, the failure to return to the original code. By calling the OrigPos value as a function, I'm able to go to the "trampoline" function and from there on to the original code. However, somewhere along the way the registers are changing, and that is causing the program to crash with a segfault as soon as I get back into the original code.
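
    Detours hands the original code back through the trampoline: DetourAttach rewrites the function pointer you pass in so that, after the transaction commits, it points at the trampoline. Calling through that pointer from the hook (instead of jmping at the raw address) is the supported path. The register corruption is very likely a calling-convention mismatch: AppendMsgToBuffer is a member function (__thiscall, this in ECX), which a free function can only mimic via the __fastcall trick. A sketch, with the parameter types as placeholders:

        // __thiscall cannot be named on a free-function pointer in older MSVC;
        // the standard trick is __fastcall with a dummy EDX parameter, which leaves
        // 'this' in ECX and the remaining arguments on the stack, like __thiscall.
        typedef void (__fastcall *AppendMsg_t)(void* self, void* edxDummy, void* a,
                                               unsigned long b, unsigned long c,
                                               int d, void* e);
        static AppendMsg_t TrueAppendMsg = (AppendMsg_t)0x0040592C;

        static void __fastcall HookAppendMsg(void* self, void* edxDummy, void* a,
                                             unsigned long b, unsigned long c,
                                             int d, void* e)
        {
            // ... custom code here ...
            TrueAppendMsg(self, edxDummy, a, b, c, d, e); // trampoline -> original code
        }

        // After the commit, TrueAppendMsg no longer points at 0x0040592C
        // but at the Detours trampoline:
        DetourTransactionBegin();
        DetourUpdateThread(GetCurrentThread());
        DetourAttach(&(PVOID&)TrueAppendMsg, HookAppendMsg);
        DetourTransactionCommit();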

  • Excel VBA Macro for Pivot Table with Dynamic Data Range

    - by John Ziebro
    CODE IS WORKING! THANKS FOR THE HELP! I am attempting to create a dynamic pivot table that will work on data that varies in the number of rows. Currently, I have 28,300 rows, but this may change daily. Example of data format as follows: Case Number Branch Driver 1342 NYC Bob 4532 PHL Jim 7391 CIN John 8251 SAN John 7211 SAN Mary 9121 CLE John 7424 CIN John Example of finished table: Driver NYC PHL CIN SAN CLE Bob 1 0 0 0 0 Jim 0 1 0 0 0 John 0 0 2 1 1 Mary 0 0 0 1 0 Code as follows: Sub CreateSummaryReportUsingPivot() ' Use a Pivot Table to create a static summary report ' with model going down the rows and regions across Dim WSD As Worksheet Dim PTCache As PivotCache Dim PT As PivotTable Dim PRange As Range Dim FinalRow As Long Dim FinalCol As Long Set WSD = Worksheets("PivotTable") 'Name active worksheet as "PivotTable" ActiveSheet.Name = "PivotTable" ' Delete any prior pivot tables For Each PT In WSD.PivotTables PT.TableRange2.Clear Next PT ' Define input area and set up a Pivot Cache FinalRow = WSD.Cells(Application.Rows.Count, 1).End(xlUp).Row FinalCol = WSD.Cells(1, Application.Columns.Count). _ End(xlToLeft).Column Set PRange = WSD.Cells(1, 1).Resize(FinalRow, FinalCol) Set PTCache = ActiveWorkbook.PivotCaches.Add(SourceType:= _ xlDatabase, SourceData:=PRange) ' Create the Pivot Table from the Pivot Cache Set PT = PTCache.CreatePivotTable(TableDestination:=WSD. _ Cells(2, FinalCol + 2), TableName:="PivotTable1") ' Turn off updating while building the table PT.ManualUpdate = True ' Set up the row fields PT.AddFields RowFields:="Driver", ColumnFields:="Branch" ' Set up the data fields With PT.PivotFields("Case Number") .Orientation = xlDataField .Function = xlCount .Position = 1 End With With PT .ColumnGrand = False .RowGrand = False .NullString = "0" End With ' Calc the pivot table PT.ManualUpdate = False PT.ManualUpdate = True End Sub

  • What is a Windows scripting language that: does not rely on .NET and offers the most OOP support and has simplest deployment?

    - by jJack
    What is a Windows scripting language that: does not rely on .NET and offers the most OOP support and has simplest deployment? It doesn't necessarily need to be a scripting language; it can be in the form of a compiled executable, however it needs to be self contained--only ONE file, no DLLs, and it cannot be declared to "include" other files. I cannot rely on the user having any .NET installed and it needs to be able to run on Windows 7 64 bit. By "most OOP support", I basically mean anything that has better OOP support than VBScript. A little context: Everything I have done thus far is in VBScript and writes a bunch of data into an .html file, which in the end is to be viewed by Internet Explorer. It also zips up a bunch of directories and files. It heavily relies on accessing the registry, file-system, and WMI (I can probably do without accessing WMI though, as long as I have good registry access). I can bring myself to code in any language so long as it meets my ridonkulous requirements stated above. I look forward to some good answers from those more experienced than I.

  • BN_hex2bn magically segfaults in openSSL

    - by xunil154
    Greetings, this is my first post on stackoverflow, and I'm sorry if it's a bit long. I'm trying to build a handshake protocol for my own project and am having issues with the server converting the client's RSA public key to a Bignum. It works in my client code, but the server segfaults when attempting to convert the hex value of the client's public RSA to a bignum. I have already checked that there is no garbage before or after the RSA data, and have looked online, but I'm stuck. header segment: typedef struct KEYS { RSA *serv; char* serv_pub; int pub_size; RSA *clnt; } KEYS; KEYS keys; Initializing function: // Generates and validates the servers key /* code for generating server RSA left out, it's working */ //Set client exponent keys.clnt = 0; keys.clnt = RSA_new(); BN_dec2bn(&keys.clnt->e, RSA_E_S); // RSA_E_S contains the public exponent Problem code (in Network::server_handshake): // *Received an encrypted message from the network and decrypt into 'buffer' (1024 bytes long)* cout << "Assigning clients RSA" << endl; // I have verified that 'buffer' contains the proper key if (BN_hex2bn(&keys.clnt->n, buffer) < 0) { Error("ERROR reading server RSA"); } cout << "clients RSA has been assigned" << endl; The program segfaults at BN_hex2bn(&keys.clnt->n, buffer) with the error (valgrind output) Invalid read of size 8 at 0x50DBF9F: BN_hex2bn (in /usr/lib/libcrypto.so.0.9.8) by 0x40F23E: Network::server_handshake() (Network.cpp:177) by 0x40EF42: Network::startNet() (Network.cpp:126) by 0x403C38: main (server.cpp:51) Address 0x20 is not stack'd, malloc'd or (recently) free'd Process terminating with default action of signal 11 (SIGSEGV) Access not within mapped region at address 0x20 at 0x50DBF9F: BN_hex2bn (in /usr/lib/libcrypto.so.0.9.8) And I don't know why it is; I'm using the exact same code in the client program, and it works just fine. Any input is greatly appreciated!
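
    Two things stand out, sketched below. First, BN_hex2bn returns the number of hex digits converted and 0 on failure, so the < 0 test can never fire. Second, a read of address 0x20 is the signature of dereferencing a small offset from a NULL pointer, i.e. keys.clnt is still NULL on the server, suggesting the RSA_new() initialization ran in the client build but never on this path:

        /* server-side guard before converting the client's key (sketch) */
        if (keys.clnt == NULL) {
            keys.clnt = RSA_new();   /* confirm the init function really runs here */
            if (keys.clnt == NULL)
                Error("ERROR allocating client RSA");
        }
        if (BN_hex2bn(&keys.clnt->n, buffer) == 0)  /* 0 means failure, not < 0 */
            Error("ERROR reading client RSA");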

  • Unable to add separator in list view

    - by Suru
    This is my code: @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.email_list_main); emailResults = new ArrayList<GetEmailFromDatabase>(); //int[] colors = {0,0xFFFF0000,0}; //getListView().setDivider(new GradientDrawable(Orientation.RIGHT_LEFT, colors)); //getListView().setDividerHeight(2); emailListFeedAdapter = new EmailListFeedAdapter(this, R.layout.email_listview_row, emailResults); setListAdapter(this.emailListFeedAdapter); getResults(); if(emailResults != null && emailResults.size() > -1){ emailListFeedAdapter.notifyDataSetChanged(); for(int i=0;i< emailResults.size();i++){ try { (Here I am getting the email sent date:) emailListFeedAdapter.add( emailResults.get(i)); datetime_text1 = emailResults.get(i).getDate(); formatter1 = new SimpleDateFormat(); formatter1 = DateFormat.getDateInstance((DateFormat.MEDIUM)); Calendar currentDate1 = Calendar.getInstance(); Item_Date1 = formatter1.parse(datetime_text1); current_Date1 = formatter1.format(currentDate1.getTime()); current_System_Date1 = formatter1.parse(current_Date1); currentDate1.add(Calendar.DATE, -1); yesterdaydate = formatter1.format(currentDate1.getTime()); yeaterday_Date = formatter1.parse(yesterdaydate); currentDate1.add(Calendar.DATE, -2); threeDaysback = formatter1.format(currentDate1.getTime()); Three_Days_Back = formatter1.parse(threeDaysback); (Here I am comparing the current date with the list view item date, and here is my problem: the dates match but execution never enters the if condition. I have tried so many ways but nothing worked; the code for the separator is below.) if(Item_Date.compareTo(current_System_Date)==0){ if(index1){ emailListFeedAdapter.addSeparatorItem("SEPARATOR"); //i--; index1=false; } } else if(yeaterday_Date.compareTo(Item_Date1)==0){ if(index2){ emailListFeedAdapter.addSeparatorItem("SEPARATOR"); //i--; index2 = false; } } else if(Item_Date1.compareTo(Three_Days_Back)==0){ if(index3){ emailListFeedAdapter.addSeparatorItem("SEPARATOR"); //i--; index3 = false; } } } catch (ParseException e) { // TODO Auto-generated catch block e.printStackTrace(); } } } } In EmailListFeedAdapter: private TreeSet<Integer> mSeparatorsSet = new TreeSet<Integer>(); public void addSeparatorItem(final String item) { //itemss.add(emailResults.get(0)); // save separator position mSeparatorsSet.add(itemss.size() - 1); notifyDataSetChanged(); } @Override public int getItemViewType(int position) { return mSeparatorsSet.contains(position) ? TYPE_SEPARATOR : TYPE_ITEM; } holder = new ViewHolder(); switch (type) { case TYPE_ITEM: emailView= inflater.inflate(R.layout.email_listview_row, null); break; case TYPE_SEPARATOR: emailView= inflater.inflate(R.layout.item2, null); holder.textView = (TextView)emailView.findViewById(R.id.textSeparator); emailView.setTag(holder); holder.textView.setText("SEPARATOR"); break; } Here is the ViewHolder class: public static class ViewHolder { public TextView textView; } If anybody knows, please tell me where I am going wrong. Thanks
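
    One thing worth checking first: the first comparison uses Item_Date and current_System_Date, while the parsing code fills Item_Date1 and current_System_Date1; if the unsuffixed pair is stale or never assigned, that branch can never match. Beyond that, a day-only comparison helper avoids Date.compareTo and any lingering time-of-day differences. A sketch (hypothetical helper name):

        // true when both dates fall on the same calendar day (time of day ignored)
        private static boolean sameDay(Date a, Date b) {
            SimpleDateFormat dayOnly = new SimpleDateFormat("yyyyMMdd");
            return dayOnly.format(a).equals(dayOnly.format(b));
        }

        // usage inside the loop:
        if (sameDay(Item_Date1, current_System_Date1)) {
            // add the "today" separator once
        }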

  • What is the difference between GC.GetTotalMemory(false) and GC.GetTotalMemory(true)?

    - by somaraj
    Hi, Could some one tell me the difference between GC.GetTotalMemory(false) and GC.GetTotalMemory(true); I have a small program and when i compared the results the first loop gives an put put < loop count 0 Diff = 32 for GC.GetTotalMemory(true); and < loop count 0 Diff = 0 for GC.GetTotalMemory(false); but shouldnt it be the otherway ? Smilarly rest of the loops prints some numbers ,which are different for both case. what does this number indicate .why is it changing as the loop increase. struct Address { public string Streat; } class Details { public string Name ; public Address address = new Address(); } class emp :IDisposable { public Details objb = new Details(); bool disposed = false; #region IDisposable Members public void Dispose() { Disposing(true); } void Disposing(bool disposing) { if (!disposed) disposed = disposing; objb = null; GC.SuppressFinalize(this); } #endregion } class Program { static void Main(string[] args) { long size1 = GC.GetTotalMemory(false); emp empobj = null; for (int i = 0; i < 200;i++ ) { // using (empobj = new emp()) //------- (1) { empobj = new emp(); //------- (2) empobj.objb.Name = "ssssssssssssssssss"; empobj.objb.address.Streat = "asdfasdfasdfasdf"; } long size2 = GC.GetTotalMemory(false); Console.WriteLine( "loop count " +i + " Diff = " +(size2-size1)); } } } }

  • Anonymous union definition/declaration in a macro GNU vs VS2008

    - by Alan_m
    I am attempting to alter an IAR specific header file for a lpc2138 so it can compile with Visual Studio 2008 (to enable compatible unit testing). My problem involves converting register definitions to be hardware independent (not at a memory address) The "IAR-safe macro" is: #define __IO_REG32_BIT(NAME, ADDRESS, ATTRIBUTE, BIT_STRUCT) \ volatile __no_init ATTRIBUTE union \ { \ unsigned long NAME; \ BIT_STRUCT NAME ## _bit; \ } @ ADDRESS //declaration //(where __gpio0_bits is a structure that names //each of the 32 bits as P0_0, P0_1, etc) __IO_REG32_BIT(IO0PIN,0xE0028000,__READ_WRITE,__gpio0_bits); //usage IO0PIN = 0x0xAA55AA55; IO0PIN_bit.P0_5 = 0; This is my comparable "hardware independent" code: #define __IO_REG32_BIT(NAME, BIT_STRUCT)\ volatile union \ { \ unsigned long NAME; \ BIT_STRUCT NAME##_bit; \ } NAME; //declaration __IO_REG32_BIT(IO0PIN,__gpio0_bits); //usage IO0PIN.IO0PIN = 0xAA55AA55; IO0PIN.IO0PIN_bit.P0_5 = 1; This compiles and works but quite obviously my "hardware independent" usage does not match the "IAR-safe" usage. How do I alter my macro so I can use IO0PIN the same way I do in IAR? I feel this is a simple anonymous union matter but multiple attempts and variants have proven unsuccessful. Maybe the IAR GNU compiler supports anonymous unions and vs2008 does not. Thank you.
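
    One way to keep the IAR-style spelling at every call site is to keep the named union instance and restore the two names with per-register object-like macros; a macro's self-reference is not re-expanded by the preprocessor, so the aliases are safe. A sketch for the hardware-independent build (names follow the question):

        /* hardware-independent variant: one named union instance per register */
        #define __IO_REG32_BIT(NAME, BIT_STRUCT) \
            volatile union                       \
            {                                    \
                unsigned long NAME;              \
                BIT_STRUCT NAME##_bit;           \
            } NAME##_reg

        __IO_REG32_BIT(IO0PIN, __gpio0_bits);   /* declare before the aliases below */

        /* per-register aliases restore the original syntax */
        #define IO0PIN      (IO0PIN_reg.IO0PIN)
        #define IO0PIN_bit  (IO0PIN_reg.IO0PIN_bit)

        /* usage is now identical to the IAR build */
        IO0PIN = 0xAA55AA55;
        IO0PIN_bit.P0_5 = 0;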

  • C# TcpClient, getting back the entire response from a telnet device

    - by Dan Bailiff
    I'm writing a configuration tool for a device that can communicate via telnet. The tool sends a command via TcpClient.GetStream().Write(...), and then checks for the device response via TcpClient.GetStream().ReadByte(). This works fine in unit tests or when I'm stepping through code slowly. If I run the config tool such that it performs multiple commands consecutively, then the behavior of the read is inconsistent. By inconsistent I mean sometimes the data is missing, incomplete or partially garbled. So even though the device performed the operation and responded, my code to check for the response was unreliable. I "fixed" this problem by adding a Thread.Sleep to make sure the read method waits long enough for the device to complete its job and send back the whole response (in this case 1.5 seconds was a safe amount). I feel like this is wrong, blocking everything for a fixed time, and I wonder if there is a better way to get the read method to wait only long enough to get the whole response from the device. private string Read() { if (!tcpClient.Connected) throw (new Exception("Read() failed: Telnet connection not available.")); StringBuilder sb = new StringBuilder(); do { ParseTelnet(ref sb); System.Threading.Thread.Sleep(1500); } while (tcpClient.Available > 0); return sb.ToString(); } private void ParseTelnet(ref StringBuilder sb) { while (tcpClient.Available > 0) { int input = tcpClient.GetStream().ReadByte(); switch (input) { // parse the input // ... do other things in special cases default: sb.Append((char)input); break; } } }

  • Covariance and Contravariance in C#

    - by edalorzo
    I will start by saying that I am Java developer learning to program in C#. As such I do comparisons of what I know with what I am learning. I have been playing with C# generics for a few hours now, and I have been able to reproduce the same things I know in Java in C#, with the exception of a couple of examples using covariance and contravariance. The book I am reading is not very good in the subject. I will certainly seek more info on the web, but while I do that, perhaps you can help me find a C# implementation for the following Java code. An example is worth a thousand words, and I was hoping that by looking a good code sample I will be able to assimilate this more rapidly. Covariance In Java I can do something like this: public static double sum(List<? extends Number> numbers) { double summation = 0.0; for(Number number : numbers){ summation += number.doubleValue(); } return summation; } I can use this code as follows: List<Integer> myInts = asList(1,2,3,4,5); List<Double> myDoubles = asList(3.14, 5.5, 78.9); List<Long> myLongs = asList(1L, 2L, 3L); double result = 0.0; result = sum(myInts); result = sum(myDoubles) result = sum(myLongs); Now I did discover that C# supports covariance/contravariance only on interfaces and as long as they have been explicitly declared to do so (out). I think I was not able to reproduce this case, because I could not find a common ancestor of all numbers, but I believe that I could have used IEnumerable to implement such thing if a common ancestor exists. Since IEnumerable is a covariant type. Right? Any thoughts on how to implement the list above? Just point me into the right direction. Is there any common ancestor of all numeric types? Contravariance The contravariance example I tried was the following. In Java I can do this to copy one list into another. public static void copy(List<? extends Number> source, List<? super Number> destiny){ for(Number number : source) { destiny.add(number); } } Then I could use it with contravariant types as follows: List<Object> anything = new ArrayList<Object>(); List<Integer> myInts = asList(1,2,3,4,5); copy(myInts, anything); My basic problem, trying to implement this in C# is that I could not find an interface that was both covariant and contravariant at the same time, as it is case of List in my example above. Maybe it can be done with two different interface in C#. Any thoughts on how to implement this? Thank you very much to everyone for any answers you can contribute. I am pretty sure I will learn a lot from any example you can provide.

  • Entity with Guid ID is not inserted by NHibernate

    - by DanK
    I am experimenting with NHibernate (version 2.1.0.4000) with Fluent NHibernate Automapping. My test set of entities persists fine with default integer IDs I am now trying to use Guid IDs with the entities. Unfortunately changing the Id property to a Guid seems to stop NHibernate inserting objects. Here is the entity class: public class User { public virtual int Id { get; private set; } public virtual string FirstName { get; set; } public virtual string LastName { get; set; } public virtual string Email { get; set; } public virtual string Password { get; set; } public virtual List<UserGroup> Groups { get; set; } } And here is the Fluent NHibernate configuration I am using: SessionFactory = Fluently.Configure() //.Database(SQLiteConfiguration.Standard.InMemory) .Database(MsSqlConfiguration.MsSql2008.ConnectionString(@"Data Source=.\SQLEXPRESS;Initial Catalog=NHibernateTest;Uid=NHibernateTest;Password=password").ShowSql()) .Mappings(m => m.AutoMappings.Add( AutoMap.AssemblyOf<TestEntities.User>() .UseOverridesFromAssemblyOf<UserGroupMappingOverride>())) .ExposeConfiguration(x => { x.SetProperty("current_session_context_class","web"); }) .ExposeConfiguration(Cfg => _configuration = Cfg) .BuildSessionFactory(); Here is the log output when using an integer ID: 16:23:14.287 [4] DEBUG NHibernate.Event.Default.DefaultSaveOrUpdateEventListener - saving transient instance 16:23:14.291 [4] DEBUG NHibernate.Event.Default.AbstractSaveEventListener - saving [TestEntities.User#<null>] 16:23:14.299 [4] DEBUG NHibernate.Event.Default.AbstractSaveEventListener - executing insertions 16:23:14.309 [4] DEBUG NHibernate.Event.Default.AbstractSaveEventListener - executing identity-insert immediately 16:23:14.313 [4] DEBUG NHibernate.Persister.Entity.AbstractEntityPersister - Inserting entity: TestEntities.User (native id) 16:23:14.321 [4] DEBUG NHibernate.AdoNet.AbstractBatcher - Opened new IDbCommand, open IDbCommands: 1 16:23:14.321 [4] DEBUG NHibernate.AdoNet.AbstractBatcher - Building an IDbCommand object for the SqlString: INSERT INTO [User] (FirstName, LastName, Email, Password) VALUES (?, ?, ?, ?); select SCOPE_IDENTITY() 16:23:14.322 [4] DEBUG NHibernate.Persister.Entity.AbstractEntityPersister - Dehydrating entity: [TestEntities.User#<null>] 16:23:14.323 [4] DEBUG NHibernate.Type.StringType - binding null to parameter: 0 16:23:14.323 [4] DEBUG NHibernate.Type.StringType - binding null to parameter: 1 16:23:14.323 [4] DEBUG NHibernate.Type.StringType - binding 'ertr' to parameter: 2 16:23:14.324 [4] DEBUG NHibernate.Type.StringType - binding 'tretret' to parameter: 3 16:23:14.329 [4] DEBUG NHibernate.SQL - INSERT INTO [User] (FirstName, LastName, Email, Password) VALUES (@p0, @p1, @p2, @p3); select SCOPE_IDENTITY();@p0 = NULL, @p1 = NULL, @p2 = 'ertr', @p3 = 'tretret' and here is the output when using a Guid: 16:50:14.008 [4] DEBUG NHibernate.Event.Default.DefaultSaveOrUpdateEventListener - saving transient instance 16:50:14.012 [4] DEBUG NHibernate.Event.Default.AbstractSaveEventListener - generated identifier: d74e1bd3-1c01-46c8-996c-9d370115780d, using strategy: NHibernate.Id.GuidCombGenerator 16:50:14.013 [4] DEBUG NHibernate.Event.Default.AbstractSaveEventListener - saving [TestEntities.User#d74e1bd3-1c01-46c8-996c-9d370115780d] This is where it silently fails, with no exception thrown or further log entries. It looks like it is generating the Guid ID correctly for the new object, but is just not getting any further than that. Is there something I need to do differently in order to use Guid IDs? 
Thanks, Dan.
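
    The log itself shows the difference: with a native/identity id the insert runs immediately ("executing identity-insert immediately") because the database must hand the id back, but with guid.comb the id is generated in memory, so NHibernate is free to defer the INSERT until the session flushes. If nothing ever flushes (no transaction commit and no explicit Flush()), no SQL is sent and it fails silently, exactly as described. A sketch of the usual unit-of-work shape:

        using (var session = SessionFactory.OpenSession())
        using (var tx = session.BeginTransaction())
        {
            session.Save(user);  // guid id is assigned here, but no INSERT is issued yet
            tx.Commit();         // the flush happens here; the INSERT finally runs
        }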

  • Internal "Tee" setup

    - by RadlyEel
    I have inherited some really old VC6.0 code that I am upgrading to VS2008 for building a 64-bit app. One required feature that was implemented long, long ago is overriding std::cout so its output goes simultaneously to a console window and to a file. The implementation depended on the then-current VC98 library implementation of ostream and, of course, is now irretrievably broken with VS2008. It would be reasonable to accumulate all the output until program termination time and then dump it to a file. I got part of the way home by using freopen(), setvbuf(), and ios::sync_with_stdio(), but to my dismay, the internal library does not treat its buffer as a ring buffer; instead when it flushes to the output device it restarts at the beginning, so every flush wipes out all my accumulated output. Converting to a more standard logging function is not desirable, as there are over 1600 usages of "std::cout << " scattered throughout almost 60 files. I have considered overriding ostream's operator<< function, but I'm not sure if that will cover me, since there are global operator<< functions that can't be overridden. (Or can they?) Any ideas on how to accomplish this?
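
    A portable replacement that depends on no library's ostream internals is a tee streambuf: swap it into std::cout once at startup, and all 1600 "std::cout <<" call sites keep working untouched. A minimal sketch:

        #include <fstream>
        #include <iostream>
        #include <streambuf>

        class TeeBuf : public std::streambuf {
            std::streambuf *sb1, *sb2;  // console buffer and file buffer
        public:
            TeeBuf(std::streambuf* a, std::streambuf* b) : sb1(a), sb2(b) {}
        protected:
            virtual int overflow(int c) {
                if (c != EOF) { sb1->sputc((char)c); sb2->sputc((char)c); }
                return c;
            }
            virtual int sync() {
                sb1->pubsync();
                return sb2->pubsync();
            }
        };

        // at program startup:
        static std::ofstream logFile("output.log");
        static TeeBuf tee(std::cout.rdbuf(), logFile.rdbuf());
        std::cout.rdbuf(&tee);  // cout now writes to the console and the file

    Because the file stream is written as the program runs, this also avoids buffering everything until termination.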
