Search Results

Search found 9186 results on 368 pages for 'sort'.


  • No feedback from Socket.SendAsync

    - by BowserKingKoopa
    I'm creating a socket and I'm trying to send data through it using SendAsync. My socket isn't connected to anything, so I expected to get an error of some sort. However, I get nothing: no indication that the send didn't work. If I use the synchronous Send method instead of the asynchronous SendAsync method, I get an Exception stating that the socket isn't connected to anything. That makes sense to me. When using SendAsync, the Completed event never fires and I get no indication that the send didn't work. So basically my question is: how can I tell when SendAsync fails?

        Socket socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        SocketAsyncEventArgs args = new SocketAsyncEventArgs();
        args.SetBuffer(new byte[0], 0, 0);
        args.Completed += delegate(object sender, SocketAsyncEventArgs e)
        {
            Debug.WriteLine("async send complete");
            Debug.WriteLine("SOCKET ERROR: " + e.SocketError);
        };
        bool completedSynchronously = socket.SendAsync(args);
        if (completedSynchronously)
        {
            Debug.WriteLine("sync send complete");
            Debug.WriteLine("socket error: " + args.SocketError);
        }


  • Collaborative filtering in MySQL?

    - by user281434
    Hi. I'm trying to develop a site that recommends items (e.g. books) to users based on their preferences. So far, I've read O'Reilly's "Collective Intelligence" and numerous other online articles. They all, however, seem to deal with single instances of recommendation, for example: if you like book A then you might like book B. What I'm trying to do is to create a set of 'preference-nodes' for each user on my site. Let's say a user likes books A, B and C. Then, when they add book D, I don't want the system to recommend other books based solely on other users' experience with book D. I want the system to look up similar 'preference-nodes' and recommend books based on that. Here's an example of 4 nodes:

        User1: 'book A' -> 'book B' -> 'book C'
        User2: 'book A' -> 'book B' -> 'book C' -> 'book D'
        User3: 'book X' -> 'book Y' -> 'book C' -> 'book Z'
        User4: 'book W' -> 'book Q' -> 'book C' -> 'book Z'

    So a recommendation system as described in the material I've read would recommend book Z to User1, because there are two people who recommend Z in conjunction with liking C (i.e. Z weighs more than D), even though a user with a similar 'preference-node', User2, would be more qualified to recommend book D because he has a more similar interest pattern. So do any of you have experience with this sort of thing? Are there things I should read, or do any open source systems for this exist? Thanks for your time!

    Small edit: I think last.fm's algorithm is doing exactly what I want my system to do: using people's preference trees to recommend music to them more personally, instead of just saying "you might like B because you liked A".
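
    One way to picture the 'preference-node' idea is to compare whole user histories rather than single items. Below is a minimal, illustrative sketch in Python (not MySQL) that scores users by Jaccard similarity of their book sets and recommends from the closest match; the user and book names are just the example data from above:

        # Sketch: recommend from the most similar whole profile, not per-item co-occurrence.
        profiles = {
            "User1": {"A", "B", "C"},
            "User2": {"A", "B", "C", "D"},
            "User3": {"X", "Y", "C", "Z"},
            "User4": {"W", "Q", "C", "Z"},
        }

        def jaccard(a, b):
            # Similarity of two preference sets: overlap divided by combined size.
            return len(a & b) / len(a | b)

        def recommend(user, profiles):
            mine = profiles[user]
            others = [(jaccard(mine, books), name) for name, books in profiles.items() if name != user]
            others.sort(reverse=True)                 # most similar profile first
            for score, name in others:
                new_books = profiles[name] - mine
                if new_books:
                    return name, score, sorted(new_books)

        print(recommend("User1", profiles))           # ('User2', 0.75, ['D'])

    In MySQL terms the same idea usually means precomputing pairwise user similarities into a table and joining against it, since recomputing them on every request gets expensive as the user count grows.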


  • weighted matching algorithm in Perl

    - by srk
    Problem: We have an equal number of men and women. Each man has a preference score for each woman, and each woman has one for each man. Each of the men and women has certain interests, and the preference scores are calculated from those interests. Initially we have an input file with x columns. The first column is the person (man/woman) id; the ids are simply the numbers 0..n (the first half are men and the second half women). The remaining x-1 columns hold the interests, which are integers too. Using this n by x-1 matrix, we have come up with an n by n/2 matrix. The new matrix has all the men and women as its rows and the scores for the opposite sex in its columns. We have to sort the scores in descending order, and we also need to know which person id each score belongs to after sorting. Here I wanted to use a hash table. Once we have the scores we need to make up pairs, for which we need to follow some rules. My trouble is with the second, n by n/2 matrix, which needs to say how much preference each man/woman has for each woman/man. I need these scores sorted so that I know who is the first preferred woman/man, the second preferred, and so on, for each man/woman. I hope to get good suggestions on the data structures to use. I prefer PHP or Perl. Thank you in advance.
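
    A small sketch, in Python, of the data structure being described: a map from each person to a list of (candidate id, score) pairs sorted by score. The same shape is a hash of sorted arrays in Perl or an associative array in PHP; the scores below are made-up placeholders:

        # scores[p][q] is person p's preference score for candidate q (placeholder numbers).
        scores = {
            0: {2: 7.5, 3: 9.1},   # man 0 scores women 2 and 3
            1: {2: 4.0, 3: 6.2},   # man 1
            2: {0: 8.3, 1: 2.9},   # woman 2 scores men 0 and 1
            3: {0: 5.5, 1: 9.8},   # woman 3
        }

        preferences = {
            person: sorted(row.items(), key=lambda kv: kv[1], reverse=True)
            for person, row in scores.items()
        }

        # preferences[0] == [(3, 9.1), (2, 7.5)]: person 0's first choice is 3, then 2.
        for person, ranked in sorted(preferences.items()):
            print(person, ranked)

    From there, pairing people by repeatedly proposing to the highest-ranked remaining candidate is essentially the setup for the Gale-Shapley stable matching algorithm, which may be roughly what the pairing rules end up looking like.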


  • PHP Performance Metrics

    - by bigstylee
    I am currently developing a PHP MVC framework for a personal project. While I am developing the framework I am interested to see whether different optimization techniques produce any notable difference in performance. I have implemented a crude BenchMark class that logs microtime. The problem is I have no frame of reference for execution times. I am very near the beginning of this project, with a database connection and a few queries but no output (bar some debugging text and the BenchMark log). I have a current execution time of 0.01917 seconds. I was expecting this to be lower, but as I said before, I have no frame of reference. I appreciate there are many variables to take into account when judging performance, but I am hoping to find some sort of metric: a) techniques to measure performance, for example requests per second, and b) results to compare against, for example how a "moderately" sized PHP application on a "standard" webserver will perform. I appreciate "moderately" and "standard" are very subjective words, so perhaps a table of known execution times for a particular application (e.g. Stack Overflow's execution time). What other techniques for measuring performance are there besides execution time? When looking at an MVC framework performance comparison it talks about Requests Per Second (RPS). How is this calculated? I am guessing that with my current execution time of 0.01917 seconds I can handle 52 RPS (= 1 / 0.01917). This seems significantly lower than the figures quoted on the graph, especially when you consider my current limited functionality.
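
    On the RPS question: for a single PHP process, requests per second is roughly the reciprocal of the per-request time, but published benchmark graphs are normally run with many concurrent workers, so throughput scales with concurrency until CPU, I/O or memory saturates. A back-of-the-envelope sketch (the worker count here is an arbitrary assumption):

        # Rough throughput estimate from per-request execution time.
        exec_time = 0.01917              # seconds per request, measured for one process
        workers = 8                      # hypothetical number of concurrent PHP workers

        rps_single = 1 / exec_time       # ~52 req/s for a single worker
        rps_total = workers / exec_time  # ~417 req/s if all 8 workers stay busy and nothing saturates

        print(round(rps_single), round(rps_total))

    Load-testing tools such as ApacheBench or siege report a measured requests-per-second figure directly, which is usually a more realistic number than the reciprocal of one timed request.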


  • Quick help refactoring Ruby Class

    - by mplacona
    I've written this class that returns feed updates, but I'm thinking it can be further improved. It's not glitchy or anything, but as a new Ruby developer, I reckon it's always good to improve :-)

        class FeedManager
          attr_accessor :feed_object, :update, :new_entries
          require 'feedtosis'

          def initialize(feed_url)
            @feed_object = Feedtosis::Client.new(feed_url)
            fetch
          end

          def fetch
            @feed_object.fetch
          end

          def update
            @updates = fetch
          end

          def updated?
            @updates.new_entries.count > 0 ? true : false
          end

          def new_entries
            @updates.new_entries
          end
        end

    As you can see, it's quite simple, but the things I'm seeing that aren't quite right are: whenever I call fetch via the terminal, it prints a list of the updates, when it's really supposed to return an object. As an example, in the terminal if I do something like:

        client = Feedtosis::Client.new('http://stackoverflow.com/feeds')
        result = client.fetch

    I then get:

        <Curl::Easy http://stackoverflow.com/feeds>

    Which is exactly what I'd expect. However, when doing the same thing by initializing the class with:

        FeedManager.new("http://stackoverflow.com/feeds")

    I get the object returned as an array with all the items in the feed. I'm sure I'm doing something wrong, so any help refactoring this class will be greatly appreciated. Also, I'd like to see comments about my implementation; any sort of comment to make it better would be welcome. Thanks in advance.


  • Tailing 'Jobs' with Perl under mod_perl

    - by Matthew
    Hi everyone, I've got this project running under mod_perl that shows some information about a host. On this page is a text box with a dropdown that allows users to ping/nslookup/traceroute the host. The output is shown in the text box like a tail -f. It works great under CGI. When the user requests a ping, it would make an AJAX call to the server, where it essentially started the ping with the output going to a temp file. Subsequent AJAX calls would then 'tail' the file so that the output was updated until the ping finished. Once the job finished, the temp file would be removed. However, under mod_perl, no matter what I do I can't stop it from creating zombie processes. I've tried everything: double forking, using IPC::Run, etc. In the end, system calls are not encouraged under mod_perl. So my question is, maybe there's a better way to do this? Is there a CPAN module available for creating command line jobs and tailing output that will work under mod_perl? I'm just looking for some suggestions. I know I could probably create some sort of 'job' daemon that I signal with details and get updates from. It would run the commands and keep track of their status etc. But is there a simpler way? Thanks in advance.


  • Where can I find a quick reference for standard Basic?

    - by Steve314
    The reason? Pure nostalgia. Anyway, there was a standard for Basic that was published in the late 80s or early 90s. It was probably ISO/IEC 10279:1991, but I don't have access to that and cannot be sure. Whatever this standard was, some of the syntax made its way into Borland's Turbo Basic and Microsoft's Visual Basic. I never learned any significant amount of VB, but Turbo Basic is one of those things I played with in my misspent youth. At one time, my main reference was an article published in one of the main programming periodicals - maybe Personal Computer World, maybe Byte. A scan of that article (if anyone can even identify it) would be great, but all I really want is a few pages of quick reference for that standard syntax. It must be free (I'm not that nostalgic), but it must describe the standard syntax - the whole point is to sort out what is standard as opposed to VB or whatever.

    EDIT: The more I think about this, the more convinced I am that this standard was available around 1987 or 1988. Maybe it was the earlier non-full version of the standard above, or maybe it was pre-acceptance of the standard.


  • Can memory be cleaned up?

    - by Tom
    I am working in Delphi 5 (with FastMM installed) on a Win32 project, and have recently been trying to drastically reduce the memory usage in this application. So far, I have cut the usage nearly in half, but noticed something while working on a separate task. When I minimized the application, the memory usage shrank from 45 megs down to 1 meg, which I attributed to it being paged out to disk. When I restored it and resumed working, the memory went up only to 15 megs. As I continued working, the memory usage slowly went up again, and a minimize and restore flushed it back down to 15 megs. So to my thinking, when my code tells the system to release the memory, it is still being held on to according to Windows, and the actual garbage collection doesn't kick in until a lot later. Can anyone confirm/deny this sort of behavior? Is it possible to get the memory cleaned up programmatically? If I keep using the program without doing this manual flush, I get an out of memory error after a while, and would like to eliminate that. Thanks.


  • Are there some cases where Python threads can safely manipulate shared state?

    - by erikg
    Some discussion in another question has encouraged me to better understand cases where locking is required in multithreaded Python programs. Per this article on threading in Python, I have several solid, testable examples of pitfalls that can occur when multiple threads access shared state. The example race condition provided on this page involves races between threads reading and manipulating a shared variable stored in a dictionary. I think the case for a race there is very obvious, and fortunately it is eminently testable. However, I have been unable to evoke a race condition with atomic operations such as list appends or variable increments. This test exhaustively attempts to demonstrate such a race:

        from threading import Thread, Lock
        import operator

        def contains_all_ints(l, n):
            l.sort()
            for i in xrange(0, n):
                if l[i] != i:
                    return False
            return True

        def test(ntests):
            results = []
            threads = []
            def lockless_append(i):
                results.append(i)
            for i in xrange(0, ntests):
                threads.append(Thread(target=lockless_append, args=(i,)))
                threads[i].start()
            for i in xrange(0, ntests):
                threads[i].join()
            if len(results) != ntests or not contains_all_ints(results, ntests):
                return False
            else:
                return True

        for i in range(0, 100):
            if test(100000):
                print "OK", i
            else:
                print "appending to a list without locks *is* unsafe"
                exit()

    I have run the test above without failure (100x 100k multithreaded appends). Can anyone get it to fail? Is there another class of object which can be made to misbehave via atomic, incremental modification by threads? Do these implicitly 'atomic' semantics apply to other operations in Python? Is this directly related to the GIL?
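
    For contrast, a minimal sketch of an operation that is not a single atomic step: an unsynchronized counter increment. In CPython, list.append happens inside one C-level call while the GIL is held, which is why the test above keeps passing there, whereas x += 1 compiles to separate load/add/store steps, so a thread switch can land in the middle. Whether the sketch below actually loses updates depends on the interpreter version and its switch interval, so treat it as an illustration rather than a guaranteed failure:

        from threading import Thread

        counter = 0

        def bump(n):
            global counter
            for _ in range(n):
                counter += 1    # read-modify-write; not atomic, a switch can occur mid-update

        threads = [Thread(target=bump, args=(100000,)) for _ in range(4)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

        print(counter)          # expected 400000; older CPython builds often print less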


  • Neural Network settings for fast training

    - by danpalmer
    I am creating a tool for predicting the time and cost of software projects based on past data. The tool uses a neural network to do this and so far, the results are promising, but I think I can do a lot more optimisation just by changing the properties of the network. There don't seem to be any rules or even many best-practices when it comes to these settings so if anyone with experience could help me I would greatly appreciate it. The input data is made up of a series of integers that could go up as high as the user wants to go, but most will be under 100,000 I would have thought. Some will be as low as 1. They are details like number of people on a project and the cost of a project, as well as details about database entities and use cases. There are 10 inputs in total and 2 outputs (the time and cost). I am using Resilient Propagation to train the network. Currently it has: 10 input nodes, 1 hidden layer with 5 nodes and 2 output nodes. I am training to get under a 5% error rate. The algorithm must run on a webserver so I have put in a measure to stop training when it looks like it isn't going anywhere. This is set to 10,000 training iterations. Currently, when I try to train it with some data that is a bit varied, but well within the limits of what we expect users to put into it, it takes a long time to train, hitting the 10,000 iteration limit over and over again. This is the first time I have used a neural network and I don't really know what to expect. If you could give me some hints on what sort of settings I should be using for the network and for the iteration limit I would greatly appreciate it. Thank you!


  • SeriesInterpolate - removing data at start of array

    - by Allan Jardine
    Hello all, I've been experimenting with Flex charts (in Flash Builder 4) recently, but have run into one area which didn't quite work as I was expecting: specifically, adding and removing a data point from an array with SeriesInterpolate set. I've put up three examples, as I expect these will make a lot more sense than me trying to explain it with words only:

        http://sprymedia.co.uk/media/misc/flex/linechart/LineChart-Add.swf - Clicking the button in the top right adds a new data point to the end of the data array, and the chart is smoothly updated (almost smoothly; there is a 1px shift when animating, which is odd...)

        http://sprymedia.co.uk/media/misc/flex/linechart/LineChart-Delete.swf - Clicking the button (which is now labelled incorrectly) will remove the item at the start of the data array - but the chart draws this as if it were removing the end element.

        http://sprymedia.co.uk/media/misc/flex/linechart/LineChart-AddDelete.swf - This is sort of what I'm eventually aiming for: a smooth side-scrolling chart, where new data is added at the end and the old data is shifted off the front. However, the delete behaviour makes this look a bit odd.

    Does anyone know if there is a way to get the smooth transition I'm looking for with SeriesInterpolate? Or is it possible to implement a custom transition? Many thanks, Allan


  • A simple WCF Service (POX) without complex serialization

    - by jammer59
    I'm a complete WCF novice. I'm trying to build and deploy a very, very simple IIS 7.0 hosted web service. For reasons outside of my control it must be WCF and not ASMX. It's a wrapper service for a pre-existing web application that simply does the following:

    1) Receives a POST request whose body is XML-encapsulated form elements - something like a single root element wrapping a handful of value elements. This is untyped XML, and the XML is atomic (a form), not a list of records/objects.

    2) Adds a couple of tags to the request XML and then invokes another HTTP-based service with a simple POST + bare XML -- these tags will actually be added by some internal SQL ops, but that isn't the issue.

    3) Receives the XML response from the 3rd party service and relays it as the response to the original calling client in step 1.

    The clients (step 1) will be some sort of web-based scripting, but could be anything: .aspx, Python, PHP, etc. I can't have SOAP, and the usual WCF-based REST examples with their contracts and serialization have me confused. This seems like a very common and very simple problem conceptually. It would be easy to implement in code, but IIS-hosted WCF is a requirement. Any pointers?


  • 3 Servers, is this a cluster?

    - by Andy Barlow
    Hello, at the moment I have one Ubuntu server, 9.10, running a simple Samba share, a mail server, a DNS server and a DHCP server. Mostly it's just there for file sharing and as an email server. I also have 2 other servers that are exactly the same hardware and spec as the first, which have an rsync set up to retrieve the shared folders and back them up. However, if the first server goes down, all of our shares disappear along with our mail, and the system must be rebuilt. Also, I tend to find that if people are downloading a large amount from the file server, no one can access their email - especially in the morning when everyone is signing in at once. Would it be more beneficial for me to have all 3 servers running the same services, doing the same thing, with some sort of cluster and load balancing? I'm not really sure where to begin looking, or how to go about a setup where all 3 servers are identical, but perhaps one acts as the main load balancer? If someone can point me in the right direction, or if this simply sounds like one of those Enterprise Clouds that is now a default setup in Ubuntu Server 9.10+, then I'll go down that route. Cheers in advance. Andy


  • Iterating through String word at a time in Python

    - by AlgoMan
    I have a string buffer of a huge text file, and I have to search for given words/phrases in the string buffer. What's the efficient way to do it? I tried using re module matches, but as I have a huge text corpus to search through, this takes a large amount of time. Given a dictionary of words and phrases, I iterate through each file, read it into a string, search for all the words and phrases from the dictionary, and increment the count in the dictionary if the keys are found. One small optimization we thought of was to sort the dictionary of phrases/words from the largest number of words to the smallest, then compare each word start position in the string buffer against the list of words; if one phrase is found, we don't search for the other phrases (as it matched the longest phrase, which is what we want). Can someone suggest how to go word by word through the string buffer (iterate the string buffer word by word)? Also, is there any other optimization that can be done on this?
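
    A minimal sketch of one way to walk the buffer word by word, assuming the phrase dictionary fits in memory as a set; the file name and the phrases below are placeholders. It keeps a small sliding window of recent words and checks the longest candidate phrase first, mirroring the "longest match wins" optimization described above:

        import re
        from collections import Counter

        phrases = {"string buffer", "huge text", "words"}    # placeholder dictionary
        max_words = max(len(p.split()) for p in phrases)
        counts = Counter()

        def words(buf):
            # Lazily yield lowercase words without building a full list in memory.
            for m in re.finditer(r"\w+", buf):
                yield m.group(0).lower()

        buf = open("corpus.txt").read()                      # placeholder input file
        window = []
        for word in words(buf):
            window.append(word)
            if len(window) > max_words:
                window.pop(0)
            # Check candidate phrases ending at the current word, longest first.
            for n in range(len(window), 0, -1):
                candidate = " ".join(window[-n:])
                if candidate in phrases:
                    counts[candidate] += 1
                    break

        print(counts.most_common())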


  • Finding values from a table that are *not* in a grouping of another table and what group that value

    - by Bkins
    I hope I am not missing something very simple here. I have done Google searches and searched through Stack Overflow. Here is the situation: for simplicity's sake, let's say I have a table called "PeoplesDocs", in a SQL Server 2008 DB, that holds a bunch of people and all the documents that they own. So one person can have several documents. I also have a table called "RequiredDocs" that simply holds all the documents that a person should have. Here is sort of what it looks like:

        PeoplesDocs:
        PersonID  DocID
        --------  -----
        1         A
        1         B
        1         C
        1         D
        2         C
        2         D
        3         A
        3         B
        3         C

        RequiredDocs:
        DocID  DocName
        -----  ---------
        A      DocumentA
        B      DocumentB
        C      DocumentC
        D      DocumentD

    How do I write a SQL query that returns some variation of:

        PersonID  MissingDocs
        --------  -----------
        2         DocumentA
        2         DocumentB
        3         DocumentD

    I have tried, and most of my searching has pointed to, something like:

        SELECT DocID
        FROM DocsRequired
        WHERE NOT EXIST IN (SELECT DocID FROM PeoplesDocs)

    but obviously this will not return anything in this example, because everyone has at least one of the documents. Also, if a person does not have any documents, then there will be one record in the PeoplesDocs table with the DocID set to NULL. Thank you in advance for any help you can provide, Ben
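
    What the question asks for, in set terms, is: for each person, the set of required documents minus the documents that person owns. A small Python sketch of that set logic, using the sample data above (purely to illustrate the expected result, not the SQL itself):

        peoples_docs = {1: {"A", "B", "C", "D"}, 2: {"C", "D"}, 3: {"A", "B", "C"}}
        required = {"A": "DocumentA", "B": "DocumentB", "C": "DocumentC", "D": "DocumentD"}

        for person, owned in sorted(peoples_docs.items()):
            for doc_id in sorted(required.keys() - owned):
                print(person, required[doc_id])   # 2 DocumentA, 2 DocumentB, 3 DocumentD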


  • Check mail attachment

    - by comii
    Hi! I am using VB.NET to display email from Outlook Express. Everything works fine, but when a message has an attachment, I cannot display the fact that the email has an attachment. This is my code:

        Private Sub LoginButton_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles LoginButton.Click
            Dim oItem
            Dim i As Integer
            Dim Message As MAPI.Message
            Dim items As String() = New String(6) {} ' Items are the sender name, subject, date and read/unread value
            Dim PrSenderEmail, PrBodyEmail
            Session = CreateObject("MAPI.Session") ' we use a session object of the MAPI component
            Session.Logon(ProfileName:=Me.UserId.Text, ProfilePassword:=Me.Password.Text)
            Session.MAPIOBJECT = Session.MAPIOBJECT
            Folder = CObj(Session.Inbox) ' choose the folder
            Application = CreateObject("Outlook.Application")
            Namespace1 = Application.GetNamespace("MAPI")
            Namespace1.Logon() ' for getting the sender name and avoiding security validation of Outlook/Exchange Server 2003
            ' we are using the "Redemption" component
            sItem = CreateObject("Redemption.SafeMailItem")
            Cursor.Current = Cursors.WaitCursor ' show we're busy doing the sort
            ListInbox.BeginUpdate() ' Notify that update begins
            ListInbox.Items.Clear()
            i = 0 ' first email message is 0
            For Each Message In Folder.Messages
                Try
                    i = i + 1 ' increment to the next email message
                    ' get e-mail from the Inbox, can be any other item
                    oItem = Application.Session.GetDefaultFolder(6).Items(i) ' GetDefaultFolder(6) refers to Inbox
                    sItem.Item = oItem ' sItem is an object of the Redemption COM and is used to get the sender's name
                    items(0) = sItem.SenderName()
                Catch
                    items(0) = "error"
                End Try
                Dim objApp As Outlook.Application = New Outlook.Application
                ' Get MAPI NameSpace
                Dim objNS As Outlook.NameSpace = objApp.GetNamespace("MAPI")
                Dim oMsg As Outlook.MailItem
                Dim pp As String
                Dim b As Integer
                Dim objAttachment As Outlook.Attachment
                pp = Message.StoreID
                items(1) = Message.Subject
                items(2) = Message.TimeReceived
                items(4) = Message.Subject
                items(5) = Message.Size
                Dim objInbox As Outlook.MAPIFolder = objNS.GetDefaultFolder(Outlook.OlDefaultFolders.olFolderInbox)
                Dim objItems As Outlook.Items = objInbox.Items
                items(5) = Message.Size.ToString / 1000 & "kb"
                If Message.Unread = True Then
                    items(3) = "unread"
                Else
                    items(3) = "read"
                End If
                ListInbox.Items.Add(New ListViewItem(items))
            Next
            ListInbox.EndUpdate() ' Notify that update ends
            Cursor.Current = Cursors.Default
        End Sub

    How can I display whether an email has an attachment?


  • List of objects or parallel arrays of properties?

    - by Headcrab
    The question is, basically: what would be more preferable, both performance-wise and design-wise - a list of objects of a Python class, or several lists of numerical properties? I am writing some sort of scientific simulation which involves a rather large system of interacting particles. For simplicity, let's say we have a set of balls bouncing inside a box, so each ball has a number of numerical properties, like x-y-z coordinates, diameter, mass, velocity vector and so on. How should I store the system? The two major options I can think of are:

    1) make a class "Ball" with those properties and some methods, then store a list of objects of the class, e.g. [b1, b2, b3, ...bn, ...], where for each bn we can access bn.x, bn.y, bn.mass and so on;

    2) make an array of numbers for each property, then for each i-th "ball" we can access its 'x' coordinate as xs[i], 'y' coordinate as ys[i], 'mass' as masses[i] and so on.

    To me it seems that the first option represents a better design. The second option looks somewhat uglier, but might be better in terms of performance, and it could be easier to use with numpy and scipy, which I try to use as much as I can. I am still not sure if Python will be fast enough, so it may be necessary to rewrite it in C++ or something after initial prototyping in Python. Would the choice of data representation be different for C/C++? What about a hybrid approach, e.g. Python with a C++ extension?
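
    A minimal sketch of the two layouts, with NumPy for the parallel-array variant (the field set and ball count are arbitrary). The second form lets a whole-system update become one vectorised operation instead of a Python-level loop:

        import numpy as np

        # Option 1: a list of objects
        class Ball:
            def __init__(self, x, y, z, mass):
                self.x, self.y, self.z, self.mass = x, y, z, mass

        balls = [Ball(0.0, 0.0, 0.0, 1.0) for _ in range(1000)]
        total_mass = sum(b.mass for b in balls)     # attribute access, one ball at a time

        # Option 2: parallel arrays of properties
        n = 1000
        xs, ys, zs = np.zeros(n), np.zeros(n), np.zeros(n)
        masses = np.ones(n)
        vx = np.full(n, 0.1)

        total_mass = masses.sum()                   # one vectorised reduction
        xs += vx * 0.01                             # xs[i] += vx[i] * dt for every ball at once

    A middle ground worth knowing about is a NumPy structured array (record array), which keeps per-particle fields together while still exposing each property as a column.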


  • Design pattern to use instead of multiple inheritance

    - by mizipzor
    Coming from a C++ background, I'm used to multiple inheritance. I like the feeling of a shotgun squarely aimed at my foot. Nowadays, I work more in C# and Java, where you can only inherit one base class but implement any number of interfaces (did I get the terminology right?). For example, let's consider two classes that implement a common interface but different (yet required) base classes:

        public class TypeA : CustomButtonUserControl, IMagician {
            public void DoMagic() {
                // ...
            }
        }

        public class TypeB : CustomTextUserControl, IMagician {
            public void DoMagic() {
                // ...
            }
        }

    Both classes are UserControls, so I can't substitute the base class. Both need to implement the DoMagic function. My problem now is that both implementations of the function are identical, and I hate copy-and-paste code. The (possible) solutions:

    I naturally want TypeA and TypeB to share a common base class, where I can write that identical function definition just once. However, due to the limit of just one base class, I can't find a place along the hierarchy where it fits. One could also try to implement a sort of composite pattern, putting the DoMagic function in a separate helper class, but the function here needs (and modifies) quite a lot of internal variables/fields; sending them all as (reference) parameters would just look bad. My gut tells me that the adapter pattern could have a place here, some class to convert between the two when necessary, but it also feels hacky.

    I tagged this with language-agnostic since it applies to all languages that use this one-base-class-many-interfaces approach. Also, please point out if I seem to have misunderstood any of the patterns I named. In C++ I would just make a class with the private fields and that function implementation, and put it in the inheritance list. What's the proper approach in C#/Java and the like?


  • Lazy non-modifiable list in Google Collections

    - by mindas
    I was looking for a decent implementation of a generic lazy non-modifiable list to wrap my search result entries. The unmodifiable part of the task is easy, as it can be achieved by Collections.unmodifiableList(), so I only need to sort out the lazy part. Surprisingly, google-collections doesn't have anything to offer, while LazyList from Apache Commons Collections does not support generics. I have found an attempt to build something on top of google-collections, but it seems to be incomplete (e.g. it does not support size()), outdated (it does not compile with 1.0 final) and requires some external classes, but it could be used as a good starting point to build my own class. Is anybody aware of any good implementation of a LazyList? If not, which option do you think is better:

    1) write my own implementation, based on the google-collections ForwardingList, similar to what Peter Maas did;

    2) write my own wrapper around the Commons Collections LazyList (the wrapper would only add generics, so I don't have to cast everywhere but only in the wrapper itself);

    3) just write something on top of java.util.AbstractList?

    Any other suggestions are welcome.


  • Categories of tags

    - by Peter Rowell
    I'm starting a pro bono project that is the web interface to the world's largest collection of lute music and it's a challenging collection from several points of view. The pieces are largely from 1400 to 1600, but they range from the mid-1200's to present day. Needless to say, there is tremendous variability in how the pieces are categorized and who they are attributed to. It is obvious that any sort of rigid, DB-enforced hierarchy isn't going to work with this collection, so my thoughts turn to tags. But not all tags are the same. I'll have tags that represent a person/role (composer, translator, entabulator, etc.), tags that represent the instrument(s) the piece in written for, and tags that represent how the piece has been classified by any one of half a dozen different classification systems used over the centuries. We will be using a semi-controlled tag vocabulary to prevent runaway tag proliferation (e.g. del.icio.us), but I want to treat the tags as belonging to different groups. People tags should not be offered when the editor is doing instrument tagging, etc. Has anyone done something like this? I have several ways I can think of to do it, but if there is an existing system that is well-done it would save me time implementing/debugging. FWIW: This is a Django system and I'm looking at starting with Django-tagging and then hacking from there, possibly adding a category field or ...


  • @font-face fonts only work on their own domain

    - by Ben
    I am trying to create a type of font repository for use on my websites, so that I can reference any font in the repository in my CSS without any other set-up. To do this I created a subdomain, on which I placed a folder for each font in the repository containing the various file types for that font. I also placed a CSS file called font-face.css on the root of the subdomain and filled it with @font-face declarations for each of the fonts; the fonts are linked with absolute links so that they can be used from anywhere. My issue is that it seems I can only use the fonts on the subdomain where they are located; on my other sites the fonts do not show. Using Firebug I determined that the font-face.css file was successfully being linked to and loaded. So why does the font not load correctly? Is there protection on the font files or something? I am using fonts that I should be allowed to use this way, so I don't see why this is occurring. Maybe it is an Apache issue, but I can download the font just fine when I link to it. Oh, and just to clarify, I am not violating any copyrights by setting this up; all the fonts I am using are licensed to allow this sort of thing. I would, however, like to set things up so that only I have access to this repository of fonts, but that's another project.


  • Java Swing Table questions

    - by Dalton Conley
    Hey guys, I'm working on an event calendar. I'm having some trouble getting my column headers to display... here is the code:

        private JTable calendarTable;
        private DefaultTableModel calendarTableModel;
        final private String[] days = {"Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday"};

        //////////////////////////////////////////////////////////////////////
        /* Setup the actual calendar table */
        calendarTableModel = new DefaultTableModel() {
            public boolean isCellEditable(int row, int col) {
                return false;
            }
        };

        // setup columns
        for (int i = 0; i < 7; i++)
            calendarTableModel.addColumn(days[i]);

        calendarTable = new JTable(calendarTableModel);
        calendarTable.getTableHeader().setResizingAllowed(false);
        calendarTable.getTableHeader().setReorderingAllowed(false);
        calendarTable.setColumnSelectionAllowed(true);
        calendarTable.setRowSelectionAllowed(true);
        calendarTable.setSelectionMode(ListSelectionModel.SINGLE_SELECTION);
        calendarTable.setRowHeight(105);
        calendarTableModel.setColumnCount(7);
        calendarTableModel.setRowCount(6);

    Also, I'm sort of new to tables... how can I make the row heights split the full size of the table between them?


  • Daily, Weekly and Monthly Page View Counter

    - by Jens Fahnenbruck
    I'm building a website with user-generated content. On the home page I want to show a list of all created items, and I want to be able to sort them by a view counter. That sounds easy, but I want multiple counters. I want to know which was the most visited item in the last day, the last week, the last month, and overall. My first idea was to create 4 counter columns in the item's DB table, one each for daily, weekly, monthly and overall, and then create a cron job that clears the daily counter every 24 hours, the weekly counter every 7 days, and so on. But my problem with this is: what happens if I want to know the most viewed item of the week just after the weekly counter got cleared? What I need is an efficient way to create a continuous counter, which is reduced for every page view that is too old and increased for every new page view. Right now I'm thinking of a solution with the Redis server, but I have no solution yet. I'm just looking for a general idea here, but FYI I'm developing this application in Ruby on Rails.
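
    One common shape for that "continuous counter" is to bucket views per item per day and sum a sliding window at read time, so nothing ever has to be cleared. A minimal sketch of the idea in Python (storage here is a plain in-memory dict; the same item-plus-day key scheme maps onto Redis hashes or sorted sets, or onto a Rails model, just as well):

        from collections import defaultdict
        from datetime import date, timedelta

        views = defaultdict(int)        # (item_id, day) -> count

        def record_view(item_id, day=None):
            views[(item_id, day or date.today())] += 1

        def views_in_window(item_id, days, today=None):
            today = today or date.today()
            return sum(views[(item_id, today - timedelta(n))] for n in range(days))

        record_view(42)
        record_view(42, date.today() - timedelta(3))    # an older view, still inside the week

        print(views_in_window(42, 1),    # daily
              views_in_window(42, 7),    # weekly
              views_in_window(42, 30))   # monthly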


  • jQuery selector for first hierarchy element?

    - by Sebastian P.R. Gingter
    I have this HTML structure:

        <div class="start">
          <div class="someclass">
            <div class="catchme">
              <div class="nested">
                <div class="catchme">
                  <!-- STOP! no, no catchme's within other catchme's -->
                </div>
              </div>
            </div>
            <div class="someclass">
              <div class="catchme">
              </div>
            </div>
          </div>
          <div class="otherclass">
            <div class="catchme">
            </div>
          </div>
        </div>

    I am looking for a jQuery structure that returns all catchme's within my 'start' container, except the catchme's that are contained inside an already-found catchme. In fact, I only want all 'first-level' catchme's, regardless of how deep they are in the DOM tree. This is something near, but not really fine:

        var start = $('.start');
        // do something
        $('.catchme:first, .catchme:first:parent + .catchme', start)

    I sort of want to stop traversing further down the tree beneath all found elements. Any ideas?


  • Help with php code - need to add condition to make one link https

    - by Kaskade
    Hi, I have a WordPress blog and I need to make one of the pages secure. I have been told to make the link to that page point to https://claimpage.html as opposed to http://claimpage.html. The problem is I don't actually create the menu that links the user to the individual pages; this is done automatically by code in the background. I think I need to put in some sort of an IF statement saying: if the title of the page is "claim now", then use https, otherwise use http. I found this code in header.php, so I think my changes need to go in here, but I'm not really sure what to do.

        <div id="navbar">
            <ul class="menu">
                <li class="<?php if ( is_home() ) { ?>current_page_item<?php } else { ?>page_item<?php } ?>"><a href="<?php echo get_settings('home'); ?>"><?php _e('Home'); ?></a></li>
                <?php wp_list_pages('sort_column=id&depth=1&title_li='); ?>
                <?php wp_register('<li>','</li>'); ?>
            </ul>
        </div> <!-- end of #navbar -->

    Any suggestions as to how I can make the one page whose title and URL I know use https, while the others keep using normal http? The site is hosted on a secure server, so I do have an SSL certificate.

