Search Results

Search found 7822 results on 313 pages for 'jason short'.

Page 97/313 | < Previous Page | 93 94 95 96 97 98 99 100 101 102 103 104  | Next Page >

  • Tab Sweep - Devoxx questions, GlassFish Rest, APAC Java, Lift, JEPs, tools, ...

    - by alexismp
    Recent Tips and News on Java, Java EE 6, GlassFish & more:
      • Submit a question for Devoxx 2011 Keynotes (Moderator questions)
      • Devoxx for Java developers (The Java blog)
      • GlassFish REST Client: ComplexExample.java (Jason)
      • Oracle Technology Network site for Asia-Pacific developers (OTN APAC)
      • Notes on deploying Lift apps to GlassFish (Antonio)
      • Using JSR-250's @PostConstruct Annotation to Replace Spring's InitializingBean (DZone)
      • The future is in the JEPs (Stephen)
      • Comparison of Eclipse 3.6 and IntelliJ IDEA 10.5: Pros and Cons (DZone)

    Read the article

  • Android - Efficient way to draw tiles in OpenGL ES

    - by Maecky
    Hi, I am trying to write efficient code to render a tile-based map in Android. I load the corresponding bitmap for each tile (just one time) and then create the corresponding tiles. I have designed a class to do this:

        public class VertexQuad {
            private float[] mCoordArr;
            private float[] mColArr;
            private float[] mTexCoordArr;
            private int mTextureName;
            private static short mCounter = 0;
            private short mIndex;
            // ... (rest of the class omitted in the original post)
        }

    As you can see, each tile has its x,y location, a color array, texture coordinates and a texture name. Now I want to render all my created tiles. To reduce the OpenGL API calls (I read somewhere that state changes are costly and therefore I want to keep them to a minimum), I first want to hand ALL the coordinate arrays, color arrays and texture coordinates over to OpenGL. After that I run two for loops. The first one iterates over the textures and binds the texture. The second for loop iterates over all tiles and puts all tiles with the corresponding texture into an index buffer. After the second for loop has finished, I call gl.glDrawElements() with the corresponding index buffer, to draw all tiles with the associated texture. For the next texture I do the same again. Now I run into some problems: allocating and filling the FloatBuffers at the start of each rendering cycle costs a lot of time. I just ran a test where I wanted to put 400 coordinates into a FloatBuffer, which took about 200ms. My questions now are: Is there a better way of handling the coordinate and color structures? How is this done correctly? This is obviously not the optimal way. ;) Thanks in advance, regards Markus
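
    One approach that is commonly suggested for exactly this allocation cost (a sketch, not from the original post; the class and field names are illustrative) is to allocate a single direct, native-ordered FloatBuffer at its maximum size once, and then only clear and refill it each frame:

        import java.nio.ByteBuffer;
        import java.nio.ByteOrder;
        import java.nio.FloatBuffer;

        public class TileBatch {
            private static final int MAX_TILES = 1024;
            private static final int FLOATS_PER_TILE = 4 * 2; // 4 vertices * (x, y)

            // Allocated once. allocateDirect + nativeOrder is what the GL bindings expect,
            // and reusing the buffer avoids the per-frame allocation described above.
            private final FloatBuffer vertexBuffer = ByteBuffer
                    .allocateDirect(MAX_TILES * FLOATS_PER_TILE * 4) // 4 bytes per float
                    .order(ByteOrder.nativeOrder())
                    .asFloatBuffer();

            // Refill the same buffer each frame instead of creating a new one.
            public FloatBuffer fill(float[][] tileCoords, int tileCount) {
                vertexBuffer.clear();                // reset position/limit, keep the memory
                for (int i = 0; i < tileCount; i++) {
                    vertexBuffer.put(tileCoords[i]); // bulk puts are much faster than per-float puts
                }
                vertexBuffer.flip();                 // ready to hand to glVertexPointer/glBufferData
                return vertexBuffer;
            }
        }

    The same pattern applies to the color and texture-coordinate buffers, and it leaves the two texture/index loops described above unchanged.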

    Read the article

  • OpenGL - Rendering from part of an index and vertex array depending on an element count

    - by user1423893
    I'm currently drawing my shapes as lines by using a VAO and then assigning the dynamic vertices and indices each frame.

        // Bind VAO
        glBindVertexArray(m_vao);
        // Update the vertex buffer with the new data (copy data into the vertex buffer object)
        glBufferData(GL_ARRAY_BUFFER, numVertices * sizeof(VertexPosition), m_vertices.data(), GL_DYNAMIC_DRAW);
        // Update the index buffer with the new data (copy data into the index buffer object)
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, numIndices * sizeof(unsigned short), indices.data(), GL_DYNAMIC_DRAW);
        glDrawElements(GL_LINES, numIndices, GL_UNSIGNED_SHORT, BUFFER_OFFSET(0));
        // Unbind VAO
        glBindVertexArray(0);

    What I would like to do is draw the lines using only part of the data stored in the index and vertex buffer objects. The vertex buffer has its vertices set from an array of defined maximum size:

        std::array<VertexPosition, maxVertices> m_vertices;

    The index buffer has its elements set from an array of defined maximum size:

        std::array<unsigned short, maxIndices> indices = { 0 };

    A running total is kept of the number of vertices and indices needed for each draw call: numVertices and numIndices. Can I not specify that the buffer data contain the entire array and only read from part of it when drawing? For example, using the vertex buffer object:

        glBufferData(GL_ARRAY_BUFFER, numVertices * sizeof(VertexPosition), m_vertices.data(), GL_DYNAMIC_DRAW);

    Here m_vertices.data() is the entire array being stored, and numVertices * sizeof(VertexPosition) is the amount of data to read from that array. Is this not the correct way to approach this? I do not wish to use std::vector if possible.
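
    For what it's worth, the usual pattern here (sketched below in Java with Android's GLES20 bindings, purely for illustration; the desktop GL calls glBufferData, glBufferSubData and glDrawElements take the same arguments) is to reserve the buffers once at their maximum size and then, each frame, upload and draw only the live range:

        import android.opengl.GLES20;
        import java.nio.FloatBuffer;
        import java.nio.ShortBuffer;

        public final class PartialBufferDraw {
            // Illustrative layout; mirrors the maxVertices/maxIndices bounds in the question.
            static final int BYTES_PER_VERTEX = 3 * 4; // x, y, z as floats
            static final int BYTES_PER_INDEX  = 2;     // unsigned short

            // Call once: reserve full capacity without uploading any data (null buffer).
            static void allocate(int vbo, int ibo, int maxVertices, int maxIndices) {
                GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vbo);
                GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER,
                        maxVertices * BYTES_PER_VERTEX, null, GLES20.GL_DYNAMIC_DRAW);
                GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, ibo);
                GLES20.glBufferData(GLES20.GL_ELEMENT_ARRAY_BUFFER,
                        maxIndices * BYTES_PER_INDEX, null, GLES20.GL_DYNAMIC_DRAW);
            }

            // Call per frame: overwrite only the used range, then draw only numIndices elements.
            static void updateAndDraw(int vbo, int ibo,
                                      FloatBuffer vertices, int numVertices,
                                      ShortBuffer indices, int numIndices) {
                GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vbo);
                GLES20.glBufferSubData(GLES20.GL_ARRAY_BUFFER, 0,
                        numVertices * BYTES_PER_VERTEX, vertices);
                GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, ibo);
                GLES20.glBufferSubData(GLES20.GL_ELEMENT_ARRAY_BUFFER, 0,
                        numIndices * BYTES_PER_INDEX, indices);
                // The count argument already restricts the draw to part of the index buffer.
                GLES20.glDrawElements(GLES20.GL_LINES, numIndices, GLES20.GL_UNSIGNED_SHORT, 0);
            }
        }

    In other words, passing numIndices as the count to glDrawElements is enough on its own to draw a subset, and glBufferSubData lets the full-size buffer be reused without re-allocating it every frame.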

    Read the article

  • What's hot in Oracle Premier Support News for Solaris, Storage and Systems - How to Patch!

    - by user12244613
    Struggling with locating patches for Sun products? Can't find your Oracle system drivers? This question has been raised many times by customers and was the source of a short video in the Oracle System Support Newsletter in February 2012. The transition from SunSolve to My Oracle Support changes how you think about the type of patch you're looking for. For example, in SunSolve you might have typed e1000g if you were looking for an Ethernet driver, but entering e1000g will not find anything in the My Oracle Support Patches and Updates menu. You need to use the Product (Advanced) search, which is driven off the product name, so you need to type "Ethernet" and select the Ethernet product you are looking for to locate the patches for that product. Just to recap that video, if you are looking for the e1000g Ethernet driver you need to use Advanced Search and search for Ethernet:
    1. Log into My Oracle Support - select Patches and Updates - select Product or Family (Advanced Search).
    2. In the product line enter "Ethernet" and select the product name from the menu.
    3. Check "remove superseded patches" - that ensures you only get relevant current patches in the results.
    4. Select Search and the results are displayed.
    Now you have more options to include the platform (Solaris, Linux etc.) if you want to further narrow the search. Need more information? Log into My Oracle Support and watch the short 90-second video I put together. View the 7-minute video using Firefox/Chrome - it shows searching for individual patches, Solaris, firmware etc. If you are not receiving the Oracle System Support Newsletter: Option (a) Within My Oracle Support, make document ID 1363390.1 a favourite and revisit it on the 2nd of each month for the latest content. Option (b) By default the newsletter is sent to all customers who have logged a Service Request on an Oracle Systems hardware product during the last 12 months, unless you have opted out of receiving Oracle communications in your profile on http://oracle.com

    Read the article

  • Working as a contractor--questions part II

    - by universe_hacker
    This is a follow-up to this discussion: Working as a contractor--questions. I still would like to get more info on the following points when working as a contractor as opposed to a direct employee:

    Housing: for short-term contracts (let's say 6 months or less) that are not in your home area, where and how do you search for short-term, flexible housing? Especially since an employer would typically want you to start immediately, so you don't have the time to go out and explore. Also, would you typically look for furnished apartments, because the cost of transporting your own furniture for a few months is not justified?

    Work hours and pay: are contractors more strictly supervised (as far as getting specified work done) because they get paid by the hour? They are also supposed to get overtime pay (at a higher rate) if they work more than 40 hours per week; does this really happen? Or do they work unpaid extra hours just like many regular employees? Some potential employers have mentioned paying a "per diem", which is essentially a non-taxable daily allowance that is supposed to be used for living expenses. This money gets subtracted from your per-hour rate, but the advantage is that you pay less tax. However, from the information I have seen, the per diem can only be paid if you maintain a "permanent" residence you intend to return to. Is this checked in practice, and if yes, how?

    Read the article

  • Getting file system free space

    - by Fred Riley
    This isn't a problem as such, more a request for information based on ignorance of the Linux filesystem. The very short question is: how do I find out how much free and used space there is on the volume from which Ubuntu is running?

    More detail: I'm running Ubuntu 12.04 from a 64GB USB3 stick, created from booting up a year-old Ubuntu 12.04 DVD and running Startup Disk Creator. The reason for this is that the Master Boot Record on my hard disk, holding Windoze 7, has gone belly-up, and whilst awaiting a recovery disk I'm running Ubuntu off USB or DVD as a 'trial'. (And will continue to run Ubuntu after restoring Windoze, as I've rediscovered my love of the penguin :o)) After installing Ubuntu on the stick I ran the software update app, which downloaded some 450MB of updates and took a couple of hours to install to the stick. A couple of times I got a message saying that disk space was short. So I looked in the file manager (or whatever it's called these days) and couldn't see the stick listed, just:

      • the SYSTEM hard disk (listed as 479GB Filesystem)
      • two other partitions that had been created by Windoze
      • a "4.3GB Filesystem" which, when I try to open it, gives the error "Could not find /cow", and when I try to unmount it tells me I can't because it's not mounted - D'OH!!

    Edit: screenshot of file manager
    Edit: screenshot of low disk space warning

    What I can't see is the USB stick from which I'm running Ubuntu. Where's it gone, anybody know? This is tangentially related to a previous question of mine about system tools, in that I'm trying to get control and knowledge of the system in the newest incarnation of Ubuntu.

    Read the article

  • Making The EBS Upgrade From 11.5.10 Easier - Part II

    - by Annemarie Provisero
    ADVISOR WEBCAST: Making The EBS Upgrade From 11.5.10 Easier - Part II
    PRODUCT FAMILY: E-Business Suite
    July 12, 2011 at 8 am PT, 9 am MT, 11 am ET

    This one-hour session is recommended for technical users who are responsible for upgrading their E-Business Suite applications from Release 11.5.10 to Release 12.1.x. As you begin your upgrade process, there are a number of tools available to assist you in a successful upgrade. A successful upgrade requires careful planning, correct upgrade processing, detailed testing, and user (re)training prior to upgrade. Over three sessions we will discuss the tools that you can use to assist in your upgrade tasks. These tools are available to you via My Oracle Support and as part of the E-Business Suite product offerings. In this second session, we'll cover the following topics:

      • Recap of Part I
      • Detailed Look at Maintenance Wizard
      • Detailed Look at Patch Wizard

    A short, live demonstration and question and answer period will be included. In the first part of the three-session series, we covered the following topics:

      • Overview of Tools Available for Upgrading
      • Upgrade versus Re-implementing
      • Upgrade Community
      • Upgrade Product Information Center Page
      • Detailed Look at Upgrade Advisor

    A replay of that session is available via Note 740964.1, Advisor Webcast Archive. A third session will be presented on July 19, 2011 to review best practices for using the upgrade tools. A short, live demonstration (only if applicable) and question and answer period will be included. Oracle Advisor Webcasts are dedicated to building your awareness around our products and services. This session does not replace offerings from Oracle Global Support Services. Click here to register for this session.

    The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.

    Read the article

  • Visual Studio 2012: what's new in the RC? An overview of the new features in the IDE, .NET 4.5 and Team Foundation Server 2012

    Visual Studio 2012: what's new in the Release Candidate? An overview of the new features in the IDE, .NET 4.5 and Team Foundation Server 2012. Visual Studio 2012, Team Foundation Server 2012 and .NET Framework 4.5 are now available as Release Candidates. This release marks an important milestone for these tools, which can now be used to develop applications that run in production environments. So what has changed in these builds compared to the betas? In this article, we take a tour of the most important new features of Visual Studio 2012, Team Foundation Server 2012 and .NET 4.5 RC as presented by Jason Zander, head of ...

    Read the article

  • Content light website and Google - Tell google it's a listings site (as opposed shop, reviews or restaurants)

    - by Doug Firr
    I have a listings-style website. Due to the nature of this (listings), the site is content-light: each page is typically less than 50 words, but there are many pages. The site in question has had a ton of media coverage and so has some great inbound links from places like Wired, Fast Company, the Canadian Broadcasting Corporation and many many other bloggers, media websites and recycling-related niche authors (it's a recycling site). But Google really ignores it. Traffic from search is very, very low - less than 5% of all traffic. I know that using markup you can tell Google whether your site is a restaurant, article, review, shop, local business or one of a few other categories (https://www.google.com/webmasters/markup-helper/u/0/). Is there a way to tell Google that my site is a listings site? I suspect, but do not know for sure, that part of the problem is that Google simply does not know what my site is. It's a crowdmap where people post curb alerts. The information is useful to people but it is presented in a short, concise way - a pin on a map, a picture and a short description. Adding anything further is not necessary for the site's intended purpose. First question: how best to tell the search engines what my site is - a listings site, and not some spammy website? Any recommendations for improving our site's search presence? You can take a look here if interested: http://tinyurl.com/lxg4hn7

    Read the article

  • Autocorrelation returns random results with mic input (using a high pass filter)

    - by Niall
    Hello, sorry to ask a similar question to the one I asked before (FFT Problem (Returns random results)), but I've looked up pitch detection and autocorrelation and have found some code for pitch detection using autocorrelation. I'm trying to do pitch detection of a user's singing. Problem is, it keeps returning random results. I've got some code from http://code.google.com/p/yaalp/ which I've converted to C++ and modified (below). My sample rate is 2048, and the data size is 1024. I'm detecting the pitch of both a sine wave and mic input. The frequency of the sine wave is 726.0, and it detects it as 722.950820 (which I'm OK with), but it detects the pitch of the mic as a random number from around 100 to around 1050. I'm now using a high-pass filter to remove the DC offset, but it's not working. Am I doing it right, and if so, what else can I do to fix it? Any help would be greatly appreciated!

        double* doHighPassFilter(short *buffer)
        {
            // Do FFT:
            int bufferLength = 1024;
            float *real = malloc(bufferLength * sizeof(float));
            float *real2 = malloc(bufferLength * sizeof(float));
            for (int x = 0; x < bufferLength; x++)
            {
                real[x] = buffer[x];
            }
            fft(real, bufferLength);
            for (int x = 0; x < bufferLength; x += 2)
            {
                real2[x] = real[x];
            }
            for (int i = 0; i < 30; i++) // Set freqs lower than 30Hz to zero to attenuate the low frequencies
                real2[i] = 0;
            // Do inverse FFT:
            inversefft(real2, bufferLength);
            double* real3 = (double*)real2;
            return real3;
        }

        double DetectPitch(short* data)
        {
            int sampleRate = 2048;
            // Create sine wave
            double *buffer = malloc(1024 * sizeof(short));
            double amplitude = 0.25 * 32768; // 0.25 * max length of short
            double frequency = 726.0;
            for (int n = 0; n < 1024; n++)
            {
                buffer[n] = (short)(amplitude * sin((2 * 3.14159265 * n * frequency) / sampleRate));
            }
            doHighPassFilter(data);
            printf("Pitch from sine wave: %f\n", detectPitchCalculation(buffer, 50.0, 1000.0, 1, 1));
            printf("Pitch from mic: %f\n", detectPitchCalculation(data, 50.0, 1000.0, 1, 1));
            return 0;
        }

        // These work by shifting the signal until it seems to correlate with itself.
        // In other words, if the signal looks very similar to (signal shifted 200 data) then the fundamental period is probably 200 data.
        // Note that the algorithm only works well when there's only one prominent fundamental.
        // This could be optimized by looking at the rate of change to determine a maximum without testing all periods.
        double detectPitchCalculation(double* data, double minHz, double maxHz, int nCandidates, int nResolution)
        {
            //-------------------------1-------------------------//
            // note that higher frequency means lower period
            int nLowPeriodInSamples = hzToPeriodInSamples(maxHz, 2048);
            int nHiPeriodInSamples = hzToPeriodInSamples(minHz, 2048);
            if (nHiPeriodInSamples <= nLowPeriodInSamples) printf("Bad range for pitch detection.");
            if (1024 < nHiPeriodInSamples) printf("Not enough data.");
            double *results = new double[nHiPeriodInSamples - nLowPeriodInSamples];
            //-------------------------2-------------------------//
            for (int period = nLowPeriodInSamples; period < nHiPeriodInSamples; period += nResolution)
            {
                double sum = 0;
                // for each sample, find correlation. (If they are far apart, small)
                for (int i = 0; i < 1024 - period; i++)
                    sum += data[i] * data[i + period];
                double mean = sum / 1024.0;
                results[period - nLowPeriodInSamples] = mean;
            }
            //-------------------------3-------------------------//
            // find the best indices
            int *bestIndices = findBestCandidates(nCandidates, results, nHiPeriodInSamples - nLowPeriodInSamples - 1); // note findBestCandidates modifies parameter
            // convert back to Hz
            double *res = new double[nCandidates];
            for (int i = 0; i < nCandidates; i++)
                res[i] = periodInSamplesToHz(bestIndices[i] + nLowPeriodInSamples, 2048);
            double pitch2 = res[0];
            free(res);
            free(results);
            return pitch2;
        }

        /// Finds n "best" values from an array. Returns the indices of the best parts.
        /// (One way to do this would be to sort the array, but that could take too long.)
        /// Warning: Changes the contents of the array!!! Do not use result array afterwards.
        int* findBestCandidates(int n, double* inputs, int length)
        {
            //int length = inputs.Length;
            if (length < n) printf("Length of inputs is not long enough.");
            int *res = new int[n];
            double minValue = 0;
            for (int c = 0; c < n; c++)
            {
                // find the highest.
                double fBestValue = minValue;
                int nBestIndex = -1;
                for (int i = 0; i < length; i++)
                {
                    if (inputs[i] > fBestValue)
                    {
                        nBestIndex = i;
                        fBestValue = inputs[i];
                    }
                }
                // record this highest value
                res[c] = nBestIndex;
                // now blank out that index.
                if (nBestIndex != -1)
                    inputs[nBestIndex] = minValue;
            }
            return res;
        }

        int hzToPeriodInSamples(double hz, int sampleRate)
        {
            return (int)(1 / (hz / (double)sampleRate));
        }

        double periodInSamplesToHz(int period, int sampleRate)
        {
            return 1 / (period / (double)sampleRate);
        }

    Thanks, Niall.

    Edit: Changed the code to implement a high-pass filter with a cutoff of 30Hz (from What Are High-Pass and Low-Pass Filters? - can anyone tell me how to convert the low-pass filter using convolution to a high-pass one?), but it's still returning random results. Plugging it into a VST host and using VST plugins to compare spectrums isn't an option for me unfortunately.
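
    For reference, the DC offset that the FFT round trip above is trying to remove can also be taken out in the time domain with a one-pole DC-blocking filter, y[n] = x[n] - x[n-1] + R * y[n-1]. The sketch below is written in Java for brevity and is not taken from the post; the recurrence ports directly to the C++ buffers above:

        /** One-pole DC-blocking (high-pass) filter: y[n] = x[n] - x[n-1] + R * y[n-1]. */
        public final class DcBlocker {
            public static double[] filter(short[] input, double r) {
                // r close to 1.0 (e.g. 0.995) gives a cutoff of a few Hz at typical sample rates,
                // enough to remove a constant offset without touching the vocal range.
                double[] output = new double[input.length];
                double prevIn = 0.0;
                double prevOut = 0.0;
                for (int n = 0; n < input.length; n++) {
                    double x = input[n];
                    double y = x - prevIn + r * prevOut;
                    output[n] = y;
                    prevIn = x;
                    prevOut = y;
                }
                return output;
            }

            public static void main(String[] args) {
                // Tiny smoke test: a constant (pure DC) input should decay towards zero after the first sample.
                short[] dc = new short[16];
                java.util.Arrays.fill(dc, (short) 1000);
                double[] filtered = filter(dc, 0.995);
                System.out.println(filtered[0] + " ... " + filtered[filtered.length - 1]);
            }
        }

    Whether this fixes the random mic readings is a separate question (windowing the input and checking the microphone's noise floor are common next steps), but it avoids the FFT/inverse-FFT pair entirely.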

    Read the article

  • Auto DOP and Concurrency

    - by jean-pierre.dijcks
    After spending some time in the cloud, I figured it is time to come down to earth and start discussing some of the new Auto DOP features some more. As Database Machines (the v2 machine runs Oracle Database 11.2) are effectively selling like hotcakes, it makes sense to talk about the new parallel features in more detail. For basic understanding, make sure you have read the initial post. The focus there is on Auto DOP and queuing, which is to some extent the focus here. But now I want to discuss concurrency a little and explain some of the relevant parameters and their impact, specifically in a situation with concurrency on the system.

    The goal of Auto DOP
    The idea behind calculating the Automatic Degree of Parallelism is to find the highest possible DOP (the ideal DOP) that still scales. In other words, if we were to increase the DOP above a certain point, we would see a tailing off of the performance curve and the resource cost / performance would become less optimal. Therefore the ideal DOP is the best resource/performance point for that statement.

    The goal of Queuing
    On a normal production system we should see statements running concurrently. On a Database Machine we typically see high concurrency rates, so we need to find a way to deal with both high DOPs and high concurrency. Queuing is intended to make sure we:
      • Don't throttle down a DOP because other statements are running on the system
      • Stay within the physical limits of a system's processing power
    Instead of making statements go at a lower DOP, we queue them to make sure they will get all the resources they want to run efficiently without trashing the system. The theory - and hopefully practice - is that by giving a statement the optimal DOP, the sum of all statements runs faster with queuing than without queuing.

    Increasing the Number of Potential Parallel Statements
    To determine how many statements we will consider running in parallel, a single parameter should be looked at: PARALLEL_MIN_TIME_THRESHOLD. The default value is set to 10 seconds. So far there is nothing new here, but do realize that anything serial (i.e. anything that stays under the threshold) goes straight into processing and is not considered in the rest of this post. Now, if you have a system with two groups of queries, serial short-running and potentially parallel long-running ones, you may want to worry only about the long-running ones with this parallel statement threshold. As an example, let's assume the short-running stuff runs on average between 1 and 15 seconds in serial (and the business is quite happy with that). The long-running stuff is in the realm of 1 - 5 minutes. It might be a good choice to set the threshold somewhere north of 30 seconds. That way the short-running queries all run serial as they do today (if it ain't broken, don't fix it), and the long-running ones are evaluated for (higher degrees of) parallelism. This makes sense because the longer-running ones are (at least in theory) more interesting to unleash a parallel processing model on, and the benefits of running them in parallel are much more significant (again, that is mostly the case).

    Setting a Maximum DOP for a Statement
    Now that you know how to control how many of your statements are considered to run in parallel, let's talk about the specific degree any given statement will be evaluated at. As the initial post describes, this is controlled by PARALLEL_DEGREE_LIMIT. This parameter controls the degree on the entire cluster and by default it is CPU (meaning it equals Default DOP). For the sake of an example, let's say our Default DOP is 32. Looking at our 5-minute queries from the previous paragraph, the limit of 32 means that none of the statements evaluated for Auto DOP ever runs at more than a DOP of 32.

    Concurrently Running a High DOP
    A basic assumption about running high-DOP statements at high concurrency is that at some point in time you will run into a resource limitation (and this is true on any parallel processing platform!). And yes, you can then buy more hardware (e.g. expand the Database Machine in Oracle's case), but that is not the point of this post... The goal is to find a balance between the highest possible DOP for each statement and the number of statements running concurrently, but with an emphasis on running each statement at that highest-efficiency DOP. The PARALLEL_SERVER_TARGET parameter is the all-important concurrency slider here. Setting this parameter to a higher number means more statements get to run at their maximum parallel degree before queuing kicks in. PARALLEL_SERVER_TARGET is set per instance (so it needs to be set to the same value on all 8 nodes in a full-rack Database Machine). Just as a side note, this parameter is set in processes, not in DOP, which equates to 4 * Default DOP (2 processes per DOP, and a default value of 2 * Default DOP, hence a default of 4 * Default DOP). Let's say we have PARALLEL_SERVER_TARGET set to 128. With our limit set to 32 (the default) we are able to run 4 statements concurrently at the highest DOP possible on this system before we start queuing. If these 4 statements are running, any subsequent statement will be queued. To run a system at high concurrency, PARALLEL_SERVER_TARGET should be raised from its default to be much closer (start with 60% or so) to PARALLEL_MAX_SERVERS. By using both PARALLEL_SERVER_TARGET and PARALLEL_DEGREE_LIMIT you can easily control how many statements run concurrently at good DOPs without excessive queuing. Because each workload is a little different, it makes sense to plan ahead, look at these parameters, and set them based on your requirements.

    Read the article

  • Reflections on GiveCamp

    - by Reed
    I participated in the Seattle GiveCamp over the weekend, and am entirely impressed. GiveCamp is a great event - I especially like how rewarding it is for everybody involved. I strongly encourage any and all developers to watch for future GiveCamp events, and consider participating, for many reasons...

    GiveCamp provides real value to organizations that truly need help. The Seattle event alone succeeded in helping sixteen non-profit organizations in many different ways. The projects involved varied dramatically, including website redesigns, SEO, reworking data management workflows, and even game development. Many non-profits have a strong need for good, quality technical help. However, nearly every non-profit organization has an incredibly limited budget. GiveCamp is a way to really give back, and provide incredibly valuable help to organizations that truly benefit. My experience has shown many developers to be incredibly generous - this is a chance to dedicate your energy to helping others in a way that really takes advantage of your expertise. Your time as a developer is incredibly valuable, and this puts something of incredible value directly into the hands of the places where it's needed. First, and foremost, GiveCamp is about providing technical help to non-profit organizations in need.

    GiveCamp can make you a better developer. This is a fantastic opportunity for us, as developers, to work with new people, in a new setting. The incredibly short time frame (one weekend for a deliverable project) and intense motivation to succeed provides a huge opportunity for learning from peers. I'd personally like to thank all of the developers with whom I worked - I learned something from each and every one of you. I hope to see and work with all of you again someday.

    GiveCamp provides an opportunity for you to work outside of your comfort zone. While it's always nice to be an expert, it's also valuable to work on a project where you have little or no direct experience. My team focused on a complete reworking of our organization's message and a complete new website redesign and deployment using WordPress. While I'd used WordPress for my blog, and had some experience, this is completely unrelated to my professional work. In fact, nobody on our team normally worked directly with the technologies involved - yet together we managed to succeed in delivering our goals. As developers, it's easy to want to stay abreast of new technology surrounding our expertise, but it's rare that we get a chance to sit down and work on something practical that is completely outside of our normal realm of work. I'm a desktop developer by trade, and spent much of the weekend working with CSS and Photoshop. Many of the projects organizations need don't match perfectly with the skill set in the room - yet all of the software professionals rose to the occasion and delivered practical, usable applications.

    GiveCamp is a short-term, known commitment. While this seems obvious, I think it's an important aspect to remember. This is a huge part of what makes it successful - you can work, completely focused, on a project, then walk away completely when you're done. There is no expectation of continued involvement. While many of the professionals I've talked to are willing to contribute some amount of their time beyond the camp, this is not expected. The freedom this provides is immense. In addition, the motivation this brings is incredibly valuable.
Every developer in the room was very focused on delivering in time – you have one shot to get it as good as possible, and leave it with the organization in a way that can be maintained by them.  This is a rare experience – and excellent practice at time management for everyone involved. GiveCamp provides a great way to meet and network with your peers. Not only do you get to network with other software professionals in your area – you get to network with amazing people.  Every single person in the room is there to try to help people.  The balance of altruism, intelligence, and expertise in the room is something I’ve never before experienced. During the presentations of what was accomplished, I felt blessed to participate.  I know many people in the room were incredibly touched by the level of dedication and accomplishment over the weekend. GiveCamp is fun. At the end of the experience, I would have signed up again, even if it was a painful, tedious weekend – merely due to the amazing accomplishments achieved throughout the event.  However, the event is fun.  Everybody I talked to, the entire weekend, was having a good time.  While there were many faces focused into a near grimace at times (including mine, I’ll admit), this was always in response to a particularly challenging problem or task.  The challenges just added to the overall enjoyment of the weekend – part of why I became a developer in the first place is my love for challenge and puzzles, and a short deadline using unfamiliar technology provided plenty of opportunity for puzzles.  As soon as people would stand up, it was another smile.   If you’re a developer, I’d recommend looking at GiveCamp more closely.  Watch for an event in your area.  If there isn’t one, consider building a team and organizing an event.  The experience is worth the commitment. 

    Read the article

  • Atheros Wireless card shows up as two different models?

    - by geermc4
    Hi I've been fighting these wireless drivers for a few days and just recently i noticed that the model the Wireless controller appears in lspci is different sometimes. This is the data i have after installing Ubuntu Server 64 bit ~# lspci -k .... 04:00.0 Network controller: Atheros Communications Inc. AR9285 Wireless Network Adapter (PCI-Express) (rev 01) Subsystem: AzureWave Device 1d89 Kernel driver in use: ath9k Kernel modules: ath9k ran some updates, restarted, all was good, all though it did say that linux-headers-server linux-image-server linux-server where beeing kept back. After that i installed ubuntu-desktop (aptitude install ubuntu-desktop --without-recommends) restarted and not only is the wireless not working anymore, but the hardware is listed as a different card ~# lspci -k .... 04:00.0 Ethernet controller: Atheros Communications Inc. AR5008 Wireless Network Adapter (rev 01) has no available drivers for it, still i tried to modprobe ath9k, they show up in lsmod as loaded, but still iw list shows nothing. this is what it looked like before the ubuntu-desktop instalation Wiphy phy0 Band 1: Capabilities: 0x11ce HT20/HT40 SM Power Save disabled RX HT40 SGI TX STBC RX STBC 1-stream Max AMSDU length: 3839 bytes DSSS/CCK HT40 Maximum RX AMPDU length 65535 bytes (exponent: 0x003) Minimum RX AMPDU time spacing: 8 usec (0x06) HT TX/RX MCS rate indexes supported: 0-7 Frequencies: * 2412 MHz [1] (14.0 dBm) * 2417 MHz [2] (15.0 dBm) * 2422 MHz [3] (15.0 dBm) * 2427 MHz [4] (15.0 dBm) * 2432 MHz [5] (15.0 dBm) * 2437 MHz [6] (15.0 dBm) * 2442 MHz [7] (15.0 dBm) * 2447 MHz [8] (15.0 dBm) * 2452 MHz [9] (15.0 dBm) * 2457 MHz [10] (15.0 dBm) * 2462 MHz [11] (15.0 dBm) * 2467 MHz [12] (15.0 dBm) (passive scanning) * 2472 MHz [13] (14.0 dBm) (passive scanning) * 2484 MHz [14] (17.0 dBm) (passive scanning) Bitrates (non-HT): * 1.0 Mbps * 2.0 Mbps (short preamble supported) * 5.5 Mbps (short preamble supported) * 11.0 Mbps (short preamble supported) * 6.0 Mbps * 9.0 Mbps * 12.0 Mbps * 18.0 Mbps * 24.0 Mbps * 36.0 Mbps * 48.0 Mbps * 54.0 Mbps max # scan SSIDs: 4 max scan IEs length: 2257 bytes Coverage class: 0 (up to 0m) Supported Ciphers: * WEP40 (00-0f-ac:1) * WEP104 (00-0f-ac:5) * TKIP (00-0f-ac:2) * CCMP (00-0f-ac:4) * CMAC (00-0f-ac:6) Available Antennas: TX 0x1 RX 0x3 Configured Antennas: TX 0x1 RX 0x3 Supported interface modes: * IBSS * managed * AP * AP/VLAN * WDS * monitor * mesh point * P2P-client * P2P-GO software interface modes (can always be added): * AP/VLAN * monitor interface combinations are not supported Supported commands: * new_interface * set_interface * new_key * new_beacon * new_station * new_mpath * set_mesh_params * set_bss * authenticate * associate * deauthenticate * disassociate * join_ibss * join_mesh * remain_on_channel * set_tx_bitrate_mask * action * frame_wait_cancel * set_wiphy_netns * set_channel * set_wds_peer * connect * disconnect Supported TX frame types: * IBSS: 0x0000 0x0010 0x0020 0x0030 0x0040 0x0050 0x0060 0x0070 0x0080 0x0090 0x00a0 0x00b0 0x00c0 0x00d0 0x00e0 0x00f0 * managed: 0x0000 0x0010 0x0020 0x0030 0x0040 0x0050 0x0060 0x0070 0x0080 0x0090 0x00a0 0x00b0 0x00c0 0x00d0 0x00e0 0x00f0 * AP: 0x0000 0x0010 0x0020 0x0030 0x0040 0x0050 0x0060 0x0070 0x0080 0x0090 0x00a0 0x00b0 0x00c0 0x00d0 0x00e0 0x00f0 * AP/VLAN: 0x0000 0x0010 0x0020 0x0030 0x0040 0x0050 0x0060 0x0070 0x0080 0x0090 0x00a0 0x00b0 0x00c0 0x00d0 0x00e0 0x00f0 * mesh point: 0x0000 0x0010 0x0020 0x0030 0x0040 0x0050 0x0060 0x0070 0x0080 0x0090 0x00a0 0x00b0 0x00c0 0x00d0 0x00e0 
0x00f0 * P2P-client: 0x0000 0x0010 0x0020 0x0030 0x0040 0x0050 0x0060 0x0070 0x0080 0x0090 0x00a0 0x00b0 0x00c0 0x00d0 0x00e0 0x00f0 * P2P-GO: 0x0000 0x0010 0x0020 0x0030 0x0040 0x0050 0x0060 0x0070 0x0080 0x0090 0x00a0 0x00b0 0x00c0 0x00d0 0x00e0 0x00f0 Supported RX frame types: * IBSS: 0x00d0 * managed: 0x0040 0x00d0 * AP: 0x0000 0x0020 0x0040 0x00a0 0x00b0 0x00c0 0x00d0 * AP/VLAN: 0x0000 0x0020 0x0040 0x00a0 0x00b0 0x00c0 0x00d0 * mesh point: 0x00b0 0x00c0 0x00d0 * P2P-client: 0x0040 0x00d0 * P2P-GO: 0x0000 0x0020 0x0040 0x00a0 0x00b0 0x00c0 0x00d0 Device supports RSN-IBSS. What's with the hardware change? If it has 2, how can i make the AR9285 always load and disable AR5008, or, is it the same and it's just showing it different? :| Oh and I've tried this on Ubuntu 10.04 server, xubuntu 12.04, ubuntu 12.04 desktop and server. Thanks in advanced. -- Here's some more info, i have it setup in 2 hard drives, 1 works and the other one i'm using to figure it out The one that works... # lshw -class network *-network description: Ethernet interface product: RTL8111/8168B PCI Express Gigabit Ethernet controller vendor: Realtek Semiconductor Co., Ltd. physical id: 0 bus info: pci@0000:03:00.0 logical name: eth0 version: 06 serial: 54:04:a6:a3:3b:96 size: 1Gbit/s capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=rtl_nic/rtl8168e-2.fw ip=192.168.2.147 latency=0 link=yes multicast=yes port=MII speed=1Gbit/s resources: irq:43 ioport:e000(size=256) memory:d0004000-d0004fff memory:d0000000-d0003fff *-network description: Wireless interface product: AR9285 Wireless Network Adapter (PCI-Express) vendor: Atheros Communications Inc. physical id: 0 bus info: pci@0000:04:00.0 logical name: wlan0 version: 01 serial: 74:2f:68:4a:26:73 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=ath9k driverversion=3.2.0-18-generic-pae firmware=N/A latency=0 link=no multicast=yes wireless=IEEE 802.11bgn resources: irq:18 memory:fea00000-fea0ffff Here's where it doesn't # lshw -class network *-network description: Ethernet interface product: RTL8111/8168B PCI Express Gigabit Ethernet controller vendor: Realtek Semiconductor Co., Ltd. physical id: 0 bus info: pci@0000:03:00.0 logical name: eth0 version: 06 serial: 54:04:a6:a3:3b:96 size: 1Gbit/s capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=rtl_nic/rtl8168e-2.fw ip=192.168.2.160 latency=0 link=yes multicast=yes port=MII speed=1Gbit/s resources: irq:43 ioport:e000(size=256) memory:d0004000-d0004fff memory:d0000000-d0003fff *-network UNCLAIMED description: Ethernet controller product: AR5008 Wireless Network Adapter vendor: Atheros Communications Inc. 
physical id: 0 bus info: pci@0000:04:00.0 version: 01 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list configuration: latency=0 resources: memory:fea00000-fea0ffff

Update: I've noticed that if I blacklist the ath9k and ath9k_common modules, lspci gives me the AR9285, but then I need to modprobe ath9k for it to work. Does this make any sense? If so, why?

    Read the article

  • Processing Text and Binary (Blob, ArrayBuffer, ArrayBufferView) Payload in WebSocket - (TOTD #185)

    - by arungupta
    The WebSocket API defines different send(xxx) methods that can be used to send text and binary data. This Tip Of The Day (TOTD) will show how to send and receive text and binary data using WebSocket. TOTD #183 explains how to get started with a WebSocket endpoint using GlassFish 4. A simple endpoint from that blog looks like:

        @WebSocketEndpoint("/endpoint")
        public class MyEndpoint {
            public void receiveTextMessage(String message) {
                . . .
            }
        }

    A method with a first parameter of type String is invoked when a text payload is received. The payload of the incoming WebSocket frame is mapped to this first parameter. An optional second parameter, Session, can be specified to map to the "other end" of this conversation. For example:

        public void receiveTextMessage(String message, Session session) {
            . . .
        }

    The return type is void, and that means no response is returned to the client that invoked this endpoint. A response may be returned to the client in two different ways. First, set the return type to the expected type, such as:

        public String receiveTextMessage(String message) {
            String response = . . .
            . . .
            return response;
        }

    In this case a text payload is returned back to the invoking endpoint. The second way to send a response back is to use the mapped session to send responses using one of the sendXXX methods in Session, when and if needed:

        public void receiveTextMessage(String message, Session session) {
            . . .
            RemoteEndpoint remote = session.getRemote();
            remote.sendString(...);
            . . .
            remote.sendString(...);
            . . .
            remote.sendString(...);
        }

    This shows how duplex and asynchronous communication between the two endpoints can be achieved. This can be used to define different message exchange patterns between the client and server. The WebSocket client can send the message as:

        websocket.send(myTextField.value);

    where myTextField is a text field in the web page. Binary payload in the incoming WebSocket frame can be received if ByteBuffer is used as the first parameter of the method signature. The endpoint method signature in that case would look like:

        public void receiveBinaryMessage(ByteBuffer message) {
            . . .
        }

    From the client side, the binary data can be sent using Blob, ArrayBuffer, and ArrayBufferView. Blob is just raw data and the actual interpretation is left to the application. ArrayBuffer and ArrayBufferView are defined in the TypedArray specification and are designed to send binary data using WebSocket. In short, ArrayBuffer is a fixed-length binary buffer with no format and no mechanism for accessing its contents. These buffers are manipulated using one of the views defined by one of the subclasses of ArrayBufferView listed below:

      • Int8Array (signed 8-bit integer or char)
      • Uint8Array (unsigned 8-bit integer or unsigned char)
      • Int16Array (signed 16-bit integer or short)
      • Uint16Array (unsigned 16-bit integer or unsigned short)
      • Int32Array (signed 32-bit integer or int)
      • Uint32Array (unsigned 32-bit integer or unsigned int)
      • Float32Array (signed 32-bit float or float)
      • Float64Array (signed 64-bit float or double)

    WebSocket can send binary data using an ArrayBuffer with a view defined by a subclass of ArrayBufferView, or a subclass of ArrayBufferView itself. The WebSocket client can send the message using Blob as:

        blob = new Blob([myField2.value]);
        websocket.send(blob);

    where myField2 is a text field in the web page. The WebSocket client can send the message using ArrayBuffer as:

        var buffer = new ArrayBuffer(10);
        var bytes = new Uint8Array(buffer);
        for (var i = 0; i < bytes.length; i++) {
            bytes[i] = i;
        }
        websocket.send(buffer);

    A concrete implementation of receiving the binary message may look like:

        @WebSocketMessage
        public void echoBinary(ByteBuffer data, Session session) throws IOException {
            System.out.println("echoBinary: " + data);
            for (byte b : data.array()) {
                System.out.print(b);
            }
            session.getRemote().sendBytes(data);
        }

    This method is just printing the binary data for verification, but you may actually be storing it in a database or converting it to an image or something more meaningful. Be aware of TYRUS-51 if you are trying to send binary data from server to client using the method return type. Here are some references for you:

      • JSR 356: Java API for WebSocket - Specification (Early Draft) and Implementation (already integrated in GlassFish 4 promoted builds)
      • TOTD #183 - Getting Started with WebSocket in GlassFish
      • TOTD #184 - Logging WebSocket Frames using Chrome Developer Tools, Net-internals and Wireshark

    Subsequent blogs will discuss the following topics (not necessarily in that order):

      • Error handling
      • Custom payloads using encoder/decoder
      • Interface-driven WebSocket endpoint
      • Java client API
      • Client and Server configuration
      • Security
      • Subprotocols
      • Extensions
      • Other topics from the API
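
    For comparison, here is a minimal sketch of the same text-plus-binary echo written against the javax.websocket API that JSR 356 eventually shipped; the post above uses the earlier draft names (@WebSocketEndpoint, @WebSocketMessage, session.getRemote()), so treat this as an illustration rather than part of the original tip:

        import java.io.IOException;
        import java.nio.ByteBuffer;

        import javax.websocket.OnMessage;
        import javax.websocket.Session;
        import javax.websocket.server.ServerEndpoint;

        // Final JSR 356 names: @ServerEndpoint/@OnMessage instead of the draft
        // @WebSocketEndpoint/@WebSocketMessage used in the snippets above.
        @ServerEndpoint("/echo")
        public class EchoEndpoint {

            // Text frames: the returned String is sent back to the caller automatically.
            @OnMessage
            public String echoText(String message) {
                return "echo: " + message;
            }

            // Binary frames: echo the ByteBuffer back over the mapped session.
            @OnMessage
            public void echoBinary(ByteBuffer data, Session session) throws IOException {
                session.getBasicRemote().sendBinary(data);
            }
        }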

    Read the article

  • Math for core animation?

    - by jasonbogd
    What is a good level of math required for, like, advanced core animation? Take this for example: http://cocoadex.com/2008/01/lemur-math.html And what's a good book/resource to learn it? -Jason

    Read the article

  • Sales Career in Cloud Computing

    - by ricky
    I am working with a Google business partner, selling Google Apps, which is based on the cloud computing concept. As we all know, cloud computing is ready to capture the IT world, so I just wanted to get suggestions from the experts here about a sales career in cloud computing. I am a postgraduate in Sales and Marketing and am planning to dig deeper into cloud computing from a sales point of view. I would appreciate it if you could assist me with my path creation to achieve a good career in cloud computing. Regards, Jason Robb

    Read the article

  • Understanding Rails core source code?

    - by jasonbogd
    Hi, I would like to start making code patches to Rails. Are there any good books on 'advanced' Ruby that I should read to understand the Rails source code? Are there any other tips on getting started? Rails seems like a large beast and I don't know where to start! Thanks, Jason.

    Read the article

  • Sync a domain with an SVN server?

    - by JasonS
    Hi, I have a project hosted on unfuddle that I would like to sync to a subdomain on one of my hosts. Does anyone know how to sync the server with the repo on unfuddle? Can this be done with PHP? I'm just looking for a pointer in the right direction. What should I be investigating to do this? Has anyone had any experience with any scripts that can do this? Thanks, Jason

    Read the article

  • Delphi On Windows

    - by j-t-s
    Hi all, I have decided (whether it's for better or for worse) to start learning Delphi. But is it available in Visual Studio? Or is there an IDE for it? I googled Delphi, but came up with some really weird sites. Thanks, Jason

    Read the article

  • how to customize django admin for clickable list_editable

    - by FurtiveFelon
    Hi all, currently I have a class MyAdmin(admin.ModelAdmin), and I have a field in there called name, which belongs to both list_editable and list_display. The current behavior is that all fields in list_editable display a form field. However, I would like to change this so that the field only turns into an editable form field when people click on it. Can anyone point me in the right direction on how to do that (which template to edit, etc.)? Thank you very much! Jason

    Read the article

  • how to display fixed dropdown in django admin?

    - by FurtiveFelon
    Hi all, I would like to display priority information in a drop-down. Currently I am using an integer field to store the priority, but I would like to display high/medium/low instead of letting the user type in a priority. A way to approximate this is to use a Priority database which stores 3 elements, 1: high, 2: medium, 3: low, but that seems like overkill. Any easier way would be much appreciated! Jason

    Read the article
