Search Results

Search found 11338 results on 454 pages for 'big dave'.

Page 24/454

  • Baby's first big CPP project - please help [closed]

    - by jamesson
    So, I need to do a few things to the code here: http://64.90.55.88/SynapseSource-Win.zip. What I need to do: 1) remove the OpenGL output window; 2) make it work with the latest OpenNI (http://75.98.78.94/default.aspx), since as far as I can tell it only works with a specific older version; 3) improve RAM and CPU usage (assuming I don't get what I need from the first two changes); 4) eventually rebuild it as an object in this SDK: http://cycling74.com/products/sdk/. I tried going through the documentation, but "the needle could not penetrate", alas. For example, I am still unclear about some basic things: a) what are the relative responsibilities of NiTE vs. OpenNI? b) Which XML files are always used when the app launches, and what is their format? Many thanks in advance. Joe Stavitsky

    Read the article

  • I deleted all files and folders (including hidden) from /home/username/ now in big trouble

    - by jeffery_the_wind
    I am logged into a remote Ubuntu server, and I accidentally erased the entire /home/username/ directory for the current user. The only thing left is a hidden directory called .gvfs. I don't need any of the Documents/Music/etc. Now it is not letting me cd into the /var/www/ directory, which has permissions 666 and is owned by the current user. I am afraid to disconnect from my SSH session because I don't know if I will be able to get back on. Have I permanently created a problem? Is there a way I can put the most important files back into the /home/username/ directory? Thanks! ** EDIT ** Thanks everyone for the help. It turned out that the problem with cd-ing into /var/www/ was actually the permissions on /var/www/ itself: it was set to 666, I changed it to 755, and everything was good again. It doesn't look like anything system-wide was broken by deleting the contents of the user folder.

    Read the article

  • Desktop is too big for my screen

    - by user2829148
    I have an Acer widescreen monitor set at 1024x768 running Ubuntu 12.04.4 (Precise). I have adjusted the resolution to match in Display settings; however, the Ubuntu side bar and files are all enlarged, as if the desktop has zoomed in. I have read through all the other related posts and cannot come up with a fix. The desktop was fine up until 3 days ago, then it changed and I have no idea why. Can someone help? Thanks

    Read the article

  • Problem with currency formats and big numbers [on hold]

    - by user132750
    I am working on a dollars-to-euros/euros-to-dollars converter in C#. I got the formula, $ times 0.73361 = euros, and I have checked the answers with Google; they were right (1 dollar equals 0.73 euros). However, it stops working properly when the dollar input value is higher than $1363. This is what I get with $1364: $1364 = 1 000,64 €. I don't know what to do; will someone please help me? Thanks.

        decimal toEuro;
        Val.doy = "$" + decimal.Parse(richTextBox1.Text); // Ignore this, it's for the output form
        CultureInfo eu = new CultureInfo("fr-FR");
        toEuro = decimal.Parse(richTextBox1.Text.Trim());
        toEuro = toEuro * 0.73361m;
        richTextBox1.Clear();
        Val.duh = toEuro.ToString("C2", eu);
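    A quick sanity check of the arithmetic suggests nothing is actually wrong: 1364 times 0.73361 is about 1000.64, and the fr-FR culture used for formatting writes that value as "1 000,64 €" (space as thousands separator, comma as decimal separator); below $1364 the result stays under 1000, so no group separator appears. The sketch below (plain C rather than the question's C#, using the 0.73361 rate from the question) just reproduces the raw number:

        #include <stdio.h>

        int main(void) {
            /* Same conversion the question performs: dollars * 0.73361 = euros. */
            double dollars = 1364.0;
            double rate = 0.73361;          /* rate taken from the question */
            double euros = dollars * rate;

            /* Prints 1000.64; "1 000,64" is the same value rendered with
               French (fr-FR) digit grouping and a decimal comma. */
            printf("%.2f\n", euros);
            return 0;
        }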

    Read the article

  • Tuning Distributed Applications to Access Big Data

    Distributed applications are just that: distributed across one or more hardware platforms across the enterprise. The database administrator (DBA) has the unenviable task of monitoring these environments and configuring and tuning the database server to meet multiple needs. As multiple distributed applications now require access to a very large data store, what tuning options are available to help?

    Read the article

  • Join multiple filesystems (on multiple computers) into one big volume

    - by jm666
    Scenario: I have 10 computers, each with 12 x 2 TB HDDs (currently) in a raidz2 (10+2) configuration, so each computer has one volume of roughly 20 TB. Now I need to join those 10 separate computers (separate RAID groups) into one big volume. What is the recommended solution? I'm thinking about FCoE (10 Gb Ethernet), so buying an FCoE (10 Gb Ethernet) card for each computer; what more do I need on the hardware side? (Probably another computer and an FCoE switch, like a Cisco Nexus?) The main question is: what do I need to install and configure on each computer? Currently they run FreeBSD with raidz2, but it is possible to change to Linux/Solaris if needed. Any helpful resource on how to build a big volume from smaller RAID groups (on the software side) is very welcome: what OS, what filesystem, what software, etc. In short: I want to get roughly 200 TB of storage (in one filesystem) from the already existing computers/storage. I don't need fast writes, but I do need good read performance (it will be a big fileserver), and it should work transparently, so that when storing data I don't have to care about which computer the data goes to (i.e. not 10 mount points, but one big logical filesystem). Thanks.

    Read the article

  • Does my AMD-based machine use little endian or big endian?

    - by Frank
    I'm going through a computer systems course and I'm trying to establish, for sure, whether my AMD-based computer is a little-endian machine. I believe it is, because it is Intel-compatible. Specifically, my processor is an AMD Athlon 64 X2. I understand that this can matter in C programming: I'm writing C programs, and a method I'm using would be affected by this. I'm trying to figure out whether I'd get the same results if I ran the program on an Intel-based machine (assuming that is a little-endian machine). Finally, let me ask this: would any and all machines capable of running Windows (XP, Vista, 2000, Server 2003, etc.) and, say, Ubuntu Linux desktop be little endian? Thank you, Frank
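    For what it's worth, x86 and x86-64 processors (the Athlon 64 X2 included) are little-endian. A minimal C check, inspecting the first byte of a multi-byte integer, looks something like this:

        #include <stdio.h>
        #include <stdint.h>

        int main(void) {
            uint32_t value = 0x01020304;
            /* View the same four bytes individually through a byte pointer. */
            const unsigned char *bytes = (const unsigned char *)&value;

            /* Little-endian machines store the least significant byte (0x04) first. */
            if (bytes[0] == 0x04)
                printf("little endian\n");
            else if (bytes[0] == 0x01)
                printf("big endian\n");
            else
                printf("unusual byte order\n");
            return 0;
        }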

    Read the article

  • Help with algorithmic complexity in custom merge sort implementation

    - by bitcycle
    I've got an implementation of the merge sort in C++ using a custom doubly linked list. I'm coming up with a big O complexity of n^2, based on the merge_sort() slice operation. But, from what I've read, this algorithm should be n*log(n), where the log has a base of two. Can someone help me determine if I'm just determining the complexity incorrectly, or if the implementation can/should be improved to achieve n*log(n) complexity? If you would like some background on my goals for this project, see my blog. I've added comments in the code outlining what I understand the complexity of each method to be. Clarification - I'm focusing on the C++ implementation with this question. I've got another implementation written in Python, but that was something that was added in addition to my original goal(s).
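    The excerpt doesn't include the code, but the usual culprit for O(n^2) in a linked-list merge sort is a slice or index operation that walks the list (O(n)) inside every merge, instead of splicing nodes. A generic sketch of the O(n log n) approach, in C rather than the question's C++ and singly linked for brevity: split with fast/slow pointers, recurse on each half, then merge by relinking nodes.

        #include <stdio.h>
        #include <stdlib.h>

        /* Minimal singly linked list node; the question uses a custom doubly
           linked list, but the splitting/merging idea is the same. */
        typedef struct Node {
            int value;
            struct Node *next;
        } Node;

        /* Split the list into two halves in O(n) using fast/slow pointers. */
        static void split(Node *head, Node **front, Node **back) {
            Node *slow = head, *fast = head->next;
            while (fast && fast->next) {
                slow = slow->next;
                fast = fast->next->next;
            }
            *front = head;
            *back = slow->next;
            slow->next = NULL;
        }

        /* Merge two sorted lists in O(n) by splicing nodes, no copying. */
        static Node *merge(Node *a, Node *b) {
            Node dummy = {0, NULL}, *tail = &dummy;
            while (a && b) {
                if (a->value <= b->value) { tail->next = a; a = a->next; }
                else                      { tail->next = b; b = b->next; }
                tail = tail->next;
            }
            tail->next = a ? a : b;
            return dummy.next;
        }

        /* T(n) = 2 T(n/2) + O(n)  =>  O(n log n). */
        static Node *merge_sort(Node *head) {
            if (!head || !head->next)
                return head;
            Node *front, *back;
            split(head, &front, &back);
            return merge(merge_sort(front), merge_sort(back));
        }

        int main(void) {
            int vals[] = {5, 1, 4, 2, 3};
            Node *head = NULL;
            for (int i = 4; i >= 0; --i) {          /* build 5 -> 1 -> 4 -> 2 -> 3 */
                Node *n = malloc(sizeof *n);
                n->value = vals[i];
                n->next = head;
                head = n;
            }
            head = merge_sort(head);
            for (Node *p = head; p; p = p->next)    /* prints 1 2 3 4 5 */
                printf("%d ", p->value);
            printf("\n");
            return 0;
        }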

    Read the article

  • How does Google store search trends in backend?

    - by Achshar
    Google Trends shows how many times a query has been searched, along with some other properties of that query. But how is this data stored in a database? Storing a new row for every search does not seem right. They also plot a query over time, so they must have some way of looking at individual searches made by users; but given the number of queries they get every day, it does not feel right that they would store every search in a database row along with a timestamp. This does not apply just to Google Trends or Google in general, but to any other big site that gets an enormous number of queries and then has tools to inspect them in depth. I am not an expert on this, but I am interested to know the high-level structure of how things work behind the scenes.
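    Only Google knows exactly how Trends stores this, but a common pattern for query analytics is pre-aggregation: keep a counter per (query, time bucket) pair and increment it as searches stream in, so storage grows with the number of distinct query/bucket combinations rather than with raw search volume. A toy, single-machine C sketch of that idea (illustrative only; a real system would shard and persist these counters across many servers):

        #include <stdio.h>
        #include <string.h>
        #include <time.h>

        /* Toy in-memory aggregate: one counter per (query, hour bucket). */
        #define MAX_ENTRIES 1024

        typedef struct {
            char query[64];
            long hour_bucket;   /* Unix time / 3600 */
            long count;
        } Bucket;

        static Bucket table[MAX_ENTRIES];
        static int used = 0;

        static void record_search(const char *query, time_t when) {
            long bucket = (long)(when / 3600);
            for (int i = 0; i < used; ++i) {
                if (table[i].hour_bucket == bucket &&
                    strcmp(table[i].query, query) == 0) {
                    table[i].count++;        /* existing (query, bucket): just bump */
                    return;
                }
            }
            if (used < MAX_ENTRIES) {        /* first sighting of this pair */
                snprintf(table[used].query, sizeof table[used].query, "%s", query);
                table[used].hour_bucket = bucket;
                table[used].count = 1;
                used++;
            }
        }

        int main(void) {
            time_t now = time(NULL);
            record_search("big dave", now);
            record_search("big dave", now);
            record_search("oozie workflow", now);
            for (int i = 0; i < used; ++i)
                printf("%-20s hour=%ld count=%ld\n",
                       table[i].query, table[i].hour_bucket, table[i].count);
            return 0;
        }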

    Read the article

  • Was a Big Fish in a Little Pond, Am Now a Little Fish in a Big Pond. How Do I Grow? [closed]

    - by Ziv
    I've finished high school, where I was in the top three in my class; I then studied a little, and there too I was pretty much a big fish in a pond bigger than high school. Now I've got my first job at a very big company. There are some incredibly talented programmers and researchers here (mostly in departments not related to mine), and for the first time I really feel incredibly average. I do not want to be average. I read technical books all the time and I try to code in my personal time, but I don't feel like that's enough. What can I do to become a leading programmer again in this big company? Is there anything specific I can do to make myself known here? This is a very big company, so in order to advance you must be very good and shine in your field.

    Read the article

  • What to do with a big image that's slowing website loading down significantly

    - by Dave
    Hi, I'm working on a website that's already been designed by someone else. The designer has used a big image (900x700, 100 KB) which contains a big logo right across the top and then the background for two columns. This image loads every time a page is loaded, as it forms the basis for the website. What should I do with it to improve loading time? I'm considering splitting it up into two or more images, especially the logo at the top. Does splitting up images like that decrease loading time in any significant way? Thanks. Edit: Also, all the images are .jpg; would changing this to .gif or .png help anything?

    Read the article

  • Git on DreamHost still balking on big files even after I compiled with NO_MMAP=1

    - by fuzzy lollipop
    I compiled Git 1.7.0.3 on DreamHost with the NO_MMAP=1 option, and I also supplied that option when I did the "make NO_MMAP=1 install". I have my paths set up correctly: which git reports my ~/bin dir, which is correct, and git --version returns the correct version. But when I try to do a "git push origin master" with "big" files (~150 MB), it always fails. Does anyone have any suggestions on how to get DreamHost to accept these "big" files from a git push?

    Read the article

  • Willy Rotstein on Analytics and Social Media in Retail

    - by sarah.taylor(at)oracle.com
    Recently I came across a presentation from Dan Zarrella on "The Science of Retweets" (http://www.slideshare.net/HubSpot/the-science-of-retweets-with-dan-zarrella). It is an insightful, fact-based analysis of how tweets propagate and what makes them successful. The analysis is of course very interesting for those of us interested in tweeting. However, what really caught my attention is how well it illustrates, from a very different angle, some of the issues I am discussing with retailers these days: in particular, the opportunities that e-commerce and social media open to those retailers with the appetite and vision to tackle the associated analytical challenges. And these challenges are of course not straightforward. In his presentation Dan introduces the concept of Observability. I haven't had the opportunity to discuss his specific definition of the term with him; however, in practical retail terms, I would say it means that through social media (and other web channels such as search) we can analyze and track processes by measuring indicators that were not measurable before. The focus is on identifying patterns across a large number of consumers rather than what a particular individual "Likes". The potential impact for retailers is huge. It opens the opportunity to monitor changes in consumer preference and plan the business accordingly. And you can do this almost in real time, rather than through infrequent surveys that provide a "rear view" picture of your consumers' behaviour. For instance, you could envision identifying when a particular set of fashion styles is breaking out from the pack, and commit to a re-buy. Or you could monitor when the preference for a specific mobile device has declined and hence markdowns should be considered; or how demand for a specific ready-made food typically flows across regions, and manage the inventory accordingly. Search, blogging, website and store data may all need to be considered in identifying these trends. The data volumes involved are huge (check Andrea Morgan's recent post on "Big Data" in retail), but so are the benefits. As Andrea says, for the first time we can start getting insight into why the business is performing in a certain way, rather than just reporting on what is happening. And it is not just about the data volumes. Tackling the challenge also calls for integrated planning systems that can bring data and insight into the context of the decision-making process that buyers, merchandisers and supply chain managers are following. I strongly believe that only when data and process come together can you move from the anecdotal to systematically improving business performance. I would love to hear your opinions on these trends and where you think retail is heading to exploit these topics - please email me: [email protected]

    Read the article

  • Welcome to the Oracle Retail International Blog

    - by sarah.taylor(at)oracle.com
    Welcome to the first post of the new Oracle Retail International Blog. Retail is an international business, and today's successful retailers view themselves in the context of a global market. A niche fashion business in Tokyo will learn marketing strategies from the luxury brands of Milan, an independent grocer in Oslo will source the same global brands as a supermarket in Oklahoma, and every retailer in the world will measure their multi-channel operation against the international e-commerce giant Amazon. Why? Because today's customer is a global customer with unparalleled expectations on choice, price and service. Today's consumers have access to more information on retail than ever before. Technology allows people to shop from their home, their office or from the phone in their pocket, wherever they are and at whatever time suits them. Customers are using the web to search for products and promotions. They are also using the web to develop their voice in commenting on products and services that have delighted or disappointed. In an information-rich industry, this customer element creates a new world of data. The best retailers are developing eagle eyes for reading customer activity and turning it into profitable decisions. Ultimately, whether you choose to compete or shop on price, service, product innovation, excellent operations or all of the above - the international world of retail has become an inspiration for all - retailer and consumer alike. Retail as an industry is growing and diversifying at a faster rate than ever before. Yet it is still the customer who picks the winners and the losers on the retail field. Economic circumstances transform the rules, but it is still the customer who dictates the game, the pace, the price, and the perception of the brand. Wise retailers never rest on their laurels. They are always shopping for ideas on how to improve and differentiate the offer at every touch point, to meet the customer's needs better than anyone else and to gain each customer's loyalty at a time when loyalty can be cheap. With this blog, I hope that we might provide a hub for discussion around what unifies retail and how technology supports both the retailer and the customer experience. Despite the competitive nature of this market, we hope that this will provide an opportunity to share experiences and lessons learnt, in the view that knowledge can only help this industry to grow and develop. At Oracle we've been supporting retailers for many years. Many of us have worked within retail organisations all over the world, myself included. With this in mind, I don't feel it is too bold a statement to say that Oracle understands retail. We wouldn't be so heavily integrated in some of the biggest and most well-known names in retail if we didn't. With this blog, we intend to create a community of international retailers that can exchange ideas and experiences, debate collective challenges and drive a better understanding of this continually evolving industry. Events such as the World Retail Congress and NRF's Big Show bring enormous value to the retail industry, providing platforms for discussion and learning, but they happen once a year. We wanted to create a platform for discussion on a different level, one that, like retail, is always on.
    We hope to be not only the infrastructure that brings all of these systems together within a retail business, but also an infrastructure that supports the industry internationally, helping it to grow and flourish by creating a platform for networking, discussion, creativity, vision and strategy. Please feel free to ask questions or comment using the comments functionality. You might also want to visit our other Oracle Retail social media sites:
    Facebook - http://www.facebook.com/oracleretail
    YouTube - http://www.youtube.com/user/oracleretail
    Twitter - http://twitter.com/#!/oracleretail
    Insight-Driven Retailing Blog - http://blogs.oracle.com/retail/

    Read the article

  • Building Simple Workflows in Oozie

    - by dan.mcclary
    Introduction

    More often than not, data doesn't come packaged exactly as we'd like it for analysis. Transformation, match-merge operations, and a host of data munging tasks are usually needed before we can extract insights from our Big Data sources. Few people find data munging exciting, but it has to be done. Once we've suffered that boredom, we should take steps to automate the process. We want to codify our work into repeatable units and create workflows which we can leverage over and over again without having to write new code. In this article, we'll look at how to use Oozie to create a workflow for the parallel machine learning task I described on Cloudera's site.

    Hive Actions: Prepping for Pig

    In my parallel machine learning article, I use data from the National Climatic Data Center to build weather models on a state-by-state basis. NCDC makes the data freely available as gzipped files of day-over-day observations stretching from the 1930s to today. In reading that post, one might get the impression that the data came in handy, ready-to-model files with convenient delimiters. The truth of it is that I need to perform some parsing and projection on the dataset before it can be modeled. If I get more observations, I'll want to retrain and test those models, which will require more parsing and projection. This is a good opportunity to start building up a workflow with Oozie.

    I store the data from the NCDC in HDFS and create an external Hive table partitioned by year. This gives me the flexibility of Hive's query language when I want it, but lets me put the dataset in a directory of my choosing in case I want to treat the same data with Pig or MapReduce code.

        CREATE EXTERNAL TABLE IF NOT EXISTS historic_weather(column 1, column2)
        PARTITIONED BY (yr string)
        STORED AS ...
        LOCATION '/user/oracle/weather/historic';

    As new weather data comes in from NCDC, I'll need to add partitions to my table. That's an action I should put in the workflow. Similarly, the weather data requires parsing in order to be useful as a set of columns. Because of their long history, the weather data is broken up into fields of specific byte lengths: x bytes for the station ID, y bytes for the dew point, and so on. The delimiting is consistent from year to year, so writing a SerDe or a parser for transformation is simple. Once that's done, I want to select columns on which to train, classify certain features, and place the training data in an HDFS directory for my Pig script to access.

        ALTER TABLE historic_weather ADD IF NOT EXISTS
        PARTITION (yr='2010')
        LOCATION '/user/oracle/weather/historic/yr=2011';

        INSERT OVERWRITE DIRECTORY '/user/oracle/weather/cleaned_history'
        SELECT w.stn, w.wban, w.weather_year, w.weather_month,
               w.weather_day, w.temp, w.dewp, w.weather
        FROM (
          FROM historic_weather
          SELECT TRANSFORM(...)
          USING '/path/to/hive/filters/ncdc_parser.py'
          AS stn, wban, weather_year, weather_month, weather_day, temp, dewp, weather
        ) w;

    Since I'm going to prepare training directories with at least the same frequency that I add partitions, I should also add that to my workflow. Oozie is going to invoke these Hive actions using what's somewhat obviously referred to as a Hive action. Hive actions amount to Oozie running a script file containing our query language statements, so we can place them in a file called weather_train.hql.

    Starting Our Workflow

    Oozie offers two types of jobs: workflows and coordinator jobs. Workflows are straightforward: they define a set of actions to perform as a sequence or directed acyclic graph. Coordinator jobs can take all the same actions of workflow jobs, but they can be automatically started either periodically or when new data arrives in a specified location. To keep things simple we'll make a workflow job; coordinator jobs simply require another XML file for scheduling. The bare minimum for workflow XML defines a name, a starting point, and an end point:

        <workflow-app name="WeatherMan" xmlns="uri:oozie:workflow:0.1">
          <start to="ParseNCDCData"/>
          <end name="end"/>
        </workflow-app>

    To this we need to add an action, and within that we'll specify the Hive parameters. Also, keep in mind that actions require <ok> and <error> tags to direct the next action on success or failure.

        <action name="ParseNCDCData">
          <hive xmlns="uri:oozie:hive-action:0.2">
            <job-tracker>localhost:8021</job-tracker>
            <name-node>localhost:8020</name-node>
            <configuration>
              <property>
                <name>oozie.hive.defaults</name>
                <value>/user/oracle/weather_ooze/hive-default.xml</value>
              </property>
            </configuration>
            <script>ncdc_parse.hql</script>
          </hive>
          <ok to="WeatherMan"/>
          <error to="end"/>
        </action>

    There are a couple of things to note here:
    - I have to give the FQDN (or IP) and port of my JobTracker and NameNode.
    - I have to include a hive-default.xml file.
    - I have to include a script file.
    - The hive-default.xml and script file must be stored in HDFS.

    That last point is particularly important. Oozie doesn't make assumptions about where a given workflow is being run. You might submit workflows against different clusters, or have different hive-defaults.xml on different clusters (e.g. MySQL or Postgres-backed metastores). A quick way to ensure that all the assets end up in the right place in HDFS is just to make a working directory locally, build your workflow.xml in it, and copy the assets you'll need to it as you add actions to workflow.xml. At this point, our local directory should contain:
    - workflow.xml
    - hive-defaults.xml (make sure this file contains your metastore connection data)
    - ncdc_parse.hql

    Adding Pig to the Ooze

    Adding our Pig script as an action is slightly simpler from an XML standpoint. All we do is add an action to workflow.xml as follows:

        <action name="WeatherMan">
          <pig>
            <job-tracker>localhost:8021</job-tracker>
            <name-node>localhost:8020</name-node>
            <script>weather_train.pig</script>
          </pig>
          <ok to="end"/>
          <error to="end"/>
        </action>

    Once we've done this, we'll copy weather_train.pig to our working directory. However, there's a bit of a "gotcha" here. My Pig script registers the Weka jar and a chunk of Jython. If those aren't also in HDFS, our action will fail from the outset -- but where do we put them? The Jython script goes into the working directory at the same level as the Pig script, because Pig attempts to load Jython files in the directory from which the script executes. However, that's not where our Weka jar goes.

    While Oozie doesn't assume much, it does make an assumption about the Pig classpath. Anything under working_directory/lib gets automatically added to the Pig classpath and no longer requires a REGISTER statement in the script. Anything that uses a REGISTER statement cannot be in the working_directory/lib directory. Instead, it needs to be in a different HDFS directory and attached to the Pig action with an <archive> tag. Yes, that's as confusing as you think it is. You can get the exact rules for adding jars to the distributed cache from Oozie's Pig Cookbook.

    Making the Workflow Work

    We've got a workflow defined and have collected all the components we'll need to run. But we can't run anything yet, because we still have to define some properties about the job and submit it to Oozie. We need to start with the job properties, as this is essentially the "request" we'll submit to the Oozie server. In the same working directory, we'll make a file called job.properties as follows:

        nameNode=hdfs://localhost:8020
        jobTracker=localhost:8021
        queueName=default
        weatherRoot=weather_ooze
        mapreduce.jobtracker.kerberos.principal=foo
        dfs.namenode.kerberos.principal=foo
        oozie.libpath=${nameNode}/user/oozie/share/lib
        oozie.wf.application.path=${nameNode}/user/${user.name}/${weatherRoot}
        outputDir=weather-ooze

    While some of the pieces of the properties file are familiar (e.g., the JobTracker address), others take a bit of explaining. The first is weatherRoot: this is essentially an environment variable for the script (as are jobTracker and queueName). We're simply using them to simplify the directives for the Oozie job. The oozie.libpath piece is extremely important. This is a directory in HDFS which holds Oozie's shared libraries: a collection of jars necessary for invoking Hive, Pig, and other actions. It's a good idea to make sure this has been installed and copied up to HDFS. The last two lines are straightforward: run the application defined by workflow.xml at the application path listed and write the output to the output directory.

    We're finally ready to submit our job! After all that work we only need to do a few more things:
    - Validate our workflow.xml
    - Copy our working directory to HDFS
    - Submit our job to the Oozie server
    - Run our workflow

    Let's do them in order. First validate the workflow:

        oozie validate workflow.xml

    Next, copy the working directory up to HDFS:

        hadoop fs -put working_dir /user/oracle/working_dir

    Now we submit the job to the Oozie server. We need to ensure that we've got the correct URL for the Oozie server, and we need to specify our job.properties file as an argument.

        oozie job -oozie http://url.to.oozie.server:port_number/ -config /path/to/working_dir/job.properties -submit

    We've submitted the job, but we don't see any activity on the JobTracker? All I got was this funny bit of output:

        14-20120525161321-oozie-oracle

    This is because submitting a job to Oozie creates an entry for the job and places it in PREP status. What we got back, in essence, is a ticket for our workflow to ride the Oozie train. We're responsible for redeeming our ticket and running the job.

        oozie -oozie http://url.to.oozie.server:port_number/ -start 14-20120525161321-oozie-oracle

    Of course, if we really want to run the job from the outset, we can change the "-submit" argument above to "-run." This will prep and run the workflow immediately.

    Takeaway

    So, there you have it: the somewhat laborious process of building an Oozie workflow. It's a bit tedious the first time out, but it does present a pair of real benefits to those of us who spend a great deal of time data munging. First, when new data arrives that requires the same processing, we already have the workflow defined and ready to run. Second, as we build up a set of useful action definitions over time, creating new workflows becomes quicker and quicker.

    Read the article

  • Slicing up a very big JPG map image, 49000 x 34300 pixels

    - by sirvan
    Hi, I want to write a map viewer, and I need to work with small tiles of a big map image file, so I need to tile the big image. The problem now is tiling the big image into small tiles (250 x 250 pixels, or around that size). I used the ImageMagick program to do it, but there was a problem. Is there any other programming method or application that does tiling? Can I do it with JAI in Java? How?

    Read the article

  • Create big buffer on a pic18f with microchip c18 compiler

    - by acemtp
    Using the Microchip C18 compiler with a PIC18F, I want to create a "big" buffer of 3000 bytes in the program's data space. If I put this in main() (on the stack): char tab[127]; I get this error: "Error [1300] stack frame too large". If I put it in a global, I get this error: "Error - section '.udata_main.o' can not fit the section. Section '.udata_main.o' length=0x0000007f". How can I create a big buffer? Do you have a tutorial on how to manage big buffers on a PIC18F with C18?
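    A sketch of the usual C18 approach (untested here, with illustrative register ranges, so treat it as an assumption to check against your part's datasheet and .lkr file): PIC18 data memory is banked in 256-byte chunks and the stock linker script declares one DATABANK per bank, so no single object can be larger than 256 bytes, and the software stack normally also lives in a single bank, which would explain both errors. The usual fix is to combine banks into one region in a copy of the linker script and place the buffer there with #pragma udata:

        /* Step 1 (linker script, illustrative addresses only): in a copy of your
           device's .lkr file, replace several adjacent gprN DATABANK lines with
           one combined region and name a section for it, e.g.
               DATABANK NAME=bigram  START=0x100  END=0xCFF
               SECTION  NAME=BIG_BUF RAM=bigram
           Step 2 (C source): place the buffer into that named section. */

        #pragma udata BIG_BUF
        unsigned char big_buffer[3000];   /* lives in the combined RAM region */
        #pragma udata                     /* later globals go back to default sections */

        void main(void)
        {
            unsigned int i;

            /* Touch the whole buffer just to show it is usable. */
            for (i = 0; i < sizeof(big_buffer); i++)
                big_buffer[i] = 0;

            while (1)
                ;
        }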

    Read the article

  • Slicing up a very big JPG map image, 140000 x 125000 pixels

    - by sirvan
    Hi, I want to write a map viewer, and I need to work with small tiles of a big map image file, so I need to tile the big image. The problem now is tiling the big image into small tiles (250 x 250 pixels, or around that size). I used the ImageMagick program to do it, but there was a problem. Is there any other programming method or application that does tiling? Can I do it with JAI in Java? How?

    Read the article

  • Drag big picture in small layer?

    - by Tronic
    Hi, I need a plugin for jQuery or another JS framework where I can define a small div in which I can drag around a big picture, so that I only get a clipping of the picture. Any ideas? Edit: let me try to explain. I have a small div, say 600px x 450px. This div behaves like a clipping window for a big picture of around 3000px x 2000px, so I only see a specific cutout of the big picture, and I need to drag that big picture around within this small clipping window.

    Read the article
