Search Results

Search found 31800 results on 1272 pages for 'nrf big show'.

Page 145/1272 | < Previous Page | 141 142 143 144 145 146 147 148 149 150 151 152  | Next Page >

  • Get Smarter Just By Listening

    - by mark.wilcox
    Occasionally my friends ask me what I listen to or read to keep informed, so I thought I would post an update. First - there is an entirely new network being launched by Jason Calacanis called "ThisWeekIn". They have weekly shows on a variety of topics including Startups, Android, Twitter, Cloud Computing, Venture Capital and now the iPad. If you want to keep ahead (and really get motivated) - I totally recommend listening to at least This Week in Startups. I also find Cloud Computing helpful. I also like listening to the Android show so that I can see how it's progressing. Because while I love my iPhone/iPad - it's important to keep the competition in the game to improve everything. I'm also not opposed to switching to Android if it becomes as nice an experience - but so far, my take on Android devices is: 10 years ago, I would have jumped all over them because of their hackability. But now I'm in a phase where I just want these devices to work, and since most of my creation is in non-programming areas, I find the i* experience better. Second - in terms of general entertaining tech news, I'm a big fan of This Week in Tech. Finally - for a non-geek but very informative show, The Kevin Pollack Show on the ThisWeekIn network gets my highest rating. It's basically two hours of in-depth interviews with a wide variety of well-known comedians and movie stars. -- Posted via email from Virtual Identity Dialogue

    Read the article

  • How do I enable in-place tablet pc input panel for non tablet PCs?

    - by yngvedh
    Hi all, Is it possible to enable the in-place tablet PC input panel on a non-tablet PC? I have checked the "For tablet pen input, show the Input Panel icon next to the text entry area when possible" checkbox in the options of the Input Panel. Does this not work because pen input is something different from mouse input? I do have a touch screen, but it just emulates a mouse (moving the cursor, pressing the left mouse button and such). I can get the Input Panel to show manually by starting tabtip.exe, and then even the ink works, but I cannot get it to show (itself or its in-place icon) when I activate text input controls. Does anyone know what's up?

    Read the article

  • What do I need to get a job with a major game company?

    - by MahanGM
    I've recently been working with DirectX and getting familiar with game engines and sub-systems, and I have done game development for the last 5 years. I have a real question for those who have worked in larger game-making companies before. How is it possible to get into one of the big game companies such as Ubisoft, Infinity Ward or EA? I'm not a beginner in my field and I'm going to produce a really nice 2D platformer with my team this year, which is the result of 5 years of 2D game creation experience. I'm working with prepared engines such as Unity3D or Game Maker and using .Net with C# to write many tools for our production, proceeding in my own way, but I have never had real engine programming experience 'till now. I'm now reading good books around this topic, but I wanted to know: Is it possible to become an employee at a big game company by just reading books? I mean, besides having an active mind, new ideas, and being a problem solver.

    Read the article

  • If unexpected database changes cause you problems – we can help!

    - by Chris Smith
    Have you ever been surprised by an unexpected difference between your database environments? Have you ever found that your Staging database is not the same as your Production database, even though it was identical the week before? Has an emergency hotfix suddenly appeared in Production over the weekend without your knowledge? Has your client secretly added a couple of indices to their local version of the database to aid performance? Worse still, has a developer ever accidentally run a SQL script against the wrong database without noticing their mistake? If you’ve answered “Yes” to any of the above questions then you’ve suffered from ‘drift’. Database drift is where the state of a database (schema, particularly) has moved away from its expected or official state over time. The upshot is that the database is in an unknown or poorly-understood state. Even if these unexpected changes are not destructive, drift can be a big problem when it’s time to release a new version of the database. A deployment to a target database in an unexpected state can error and fail, potentially delaying a vital, time-sensitive update. A big issue with drift is that it can be hard to spot and it can be even harder to determine its provenance. So, before you can deal with an issue caused by drift, you’ll need to know exactly what change has been made, who made it, when they made it and why they made it. Those questions can take a lot of effort to answer. Then you actually need to decide what to do. Do you roll back the change because it was bad? Retrospectively apply it to the Staging environment because it is a required change? Or script the change into version control to get it back in line with your process? Red Gate’s Database Delivery Team have been talking to DBAs, database consultants and database developers to explore the problem of drift. We’ve started to get a really good idea of how big a problem it can be and what database professionals need to know and do, in order to deal with it. It’s fair to say, we’re pretty excited at the prospect of creating a tool that will really help and we’ve got some great feedback on our initial ideas (see image below). We’re now well underway with the development of our new drift-spotting product – SQL Lighthouse – and we hope to have a beta release out towards the end of July. What we really need is your help to shape the product into a great tool. So, if database drift is a problem that you’d like help solving and are interested in finding out more about our product, join our mailing list to register your interest in trying out the beta release. Subscribe to our mailing list
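
    As a rough illustration of the kind of drift-spotting described here (not how SQL Lighthouse itself works), a minimal C# sketch that lists schema objects modified since a given date using SQL Server's sys.objects catalog view; the connection string and cut-off date are placeholders:

        using System;
        using System.Data.SqlClient;

        class DriftCheck
        {
            static void Main()
            {
                // Placeholder connection string - point this at the database you want to inspect.
                const string connectionString = "Server=.;Database=Staging;Integrated Security=true";
                DateTime lastKnownGood = new DateTime(2012, 6, 1);   // when the schema was last verified

                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand(
                    "SELECT name, type_desc, modify_date FROM sys.objects " +
                    "WHERE is_ms_shipped = 0 AND modify_date > @since ORDER BY modify_date DESC", conn))
                {
                    cmd.Parameters.AddWithValue("@since", lastKnownGood);
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        // Each row is an object changed after the last verified deployment -
                        // a drift candidate that still needs a who/why investigation.
                        while (reader.Read())
                            Console.WriteLine("{0} ({1}) modified {2}",
                                reader.GetString(0), reader.GetString(1), reader.GetDateTime(2));
                    }
                }
            }
        }

    This only answers the "what changed and when" part; attributing who made the change and why still needs an audit trail such as the default trace or DDL triggers.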

    Read the article

  • Double Filter in Excel

    - by Joe
    I'm trying to "stack" filters in Excel, so to speak. I want to filter column A to show anything greater than 30, and then I want to filter column B to show the top ten items. When I do this, however, it shows me all rows that fit both criteria (only five records). I want to first fit the criteria for column A and then filter these results to show the top ten items in column B (10 records total). I know that I could just copy the rows from my first filter to a new sheet and then filter the new worksheet, but is there any way to apply both filters so that I don't physically have to delete records this way? Thanks for your help!

    Read the article

  • Autocomplete in Silverlight with Visual Studio 2010

    - by Sayre Collado
    Last week I kept searching for how to use autocomplete in Silverlight with Visual Studio 2010, but most of the examples I found use a TextBox or ComboBox for the autocomplete. I tried to study those examples and apply them to the AutoCompleteBox control from the toolbox in my Silverlight project, and this is the result. I will again use the database from my previous post (Silverlight Simple DataBinding in DataGrid) to show how the autocomplete works with a database. This is the output: First, this is the setup for my autocomplete: //The tags for the AutoCompleteBox in XAML Second, my simple snippets:

        //Event for the autocomplete to send a text string to my function
        private void autoCompleteBox1_KeyUp(object sender, KeyEventArgs e)
        {
            autoCompleteBox1.Populating += (s, args) =>
            {
                args.Cancel = true;
                var c = new Service1Client();
                c.GetListByNameCompleted += new EventHandler<GetListByNameCompletedEventArgs>(c_GetListByNameCompleted);
                c.GetListByNameAsync(autoCompleteBox1.Text);
            };
        }

        //Getting result from database
        void c_GetListByNameCompleted(object sender, GetListByNameCompletedEventArgs e)
        {
            autoCompleteBox1.ItemsSource = e.Result;
            autoCompleteBox1.PopulateComplete();
        }

    The snippets above show how to use the AutoCompleteBox with the data from the database that is bound in the DataGrid. But what if we want to show the result in the DataGrid while the autocomplete changes its items source? OK, just add one line to c_GetListByNameCompleted:

        void c_GetListByNameCompleted(object sender, GetListByNameCompletedEventArgs e)
        {
            autoCompleteBox1.ItemsSource = e.Result;
            autoCompleteBox1.PopulateComplete();
            dataGrid1.ItemsSource = e.Result;
        }
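
    The post leans on a WCF service client (Service1Client) carried over from the earlier DataBinding article, but the service side is not shown. A rough sketch of what the GetListByName operation might look like - the LINQ to SQL context and the Customers/Name entity names are assumptions, not from the original post:

        using System.Collections.Generic;
        using System.Linq;
        using System.ServiceModel;

        [ServiceContract(Namespace = "")]
        public class Service1
        {
            // Hypothetical: return the names that start with whatever was typed into the AutoCompleteBox.
            [OperationContract]
            public List<string> GetListByName(string prefix)
            {
                using (var db = new MyDatabaseDataContext())   // assumed LINQ to SQL context from the earlier post
                {
                    return db.Customers
                             .Where(c => c.Name.StartsWith(prefix))
                             .Select(c => c.Name)
                             .ToList();
                }
            }
        }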

    Read the article

  • Creating java package on ubuntu?

    - by Gaurav_Java
    I am new to Java. I am trying to create a Java package and compile it from another directory, but I get an error like: bash: /home/gaurav/Desktop/package2/B.java: Permission denied

    Here is my first code; the file is /home/Desktop/package/A.java:

        package package1;
        public class A {
            interface A1 {
                void show();
                void display();
            }
        }
        class B extends A {
            public void show() {
                System.out.println("This is show method()");
            }
            public void display() {
                System.out.println("this is Display metthod()");
            }
        }

    For compilation I used this command and it works fine (pwd is /home/gaurav):

        javac /home/gaurav/Desktop/package/A.java

    Then I try to compile B.java, which is on my other drive at /media/gaurav/iPlay/package/B.java:

        package package2;
        class B {
            public static void main(String args[]) {
                System.out.println("Reached in Main method of B");
                package1.A Object = new A();
            }
        }

    I tried this command (from the previous working directory):

        javac -cp /home/gaurav/Desktop/;/media/gaurav/iPlay/package/B.java

    and this error comes:

        javac: no source files
        Usage: javac <options> <source files>
        use -help for a list of possible options
        bash: /media/gaurav/iPlay/package/B.java: Permission denied

    What am I doing wrong? Please help, it is my assignment and I am not able to move further without this. I have already changed permissions.

    Read the article

  • What is the best way to go about testing that we handle failures appropriately?

    - by Earlz
    We're working on error handling in an application. We try to have fairly good automated test coverage. One big problem, though, is that we don't really know of a way to test some of our error handling. For instance, we need to test that whenever there is an uncaught exception, a message is sent to our server with exception information. The big problem with this is that we strive to never have an uncaught exception (and instead have descriptive error messages). So, how do we test something that we never want to actually happen?
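
    One common approach is to put the "report the exception" logic behind a small interface so it can be exercised directly in a test with a synthetic exception, without ever needing a real uncaught one. A minimal NUnit-style sketch, assuming C#; the CrashReporter and IErrorTransport types are hypothetical names, not anything from the question:

        using System;
        using NUnit.Framework;

        // Hypothetical seam: the app-wide handler pushes reports through this interface.
        public interface IErrorTransport
        {
            void Send(string message);
        }

        public class CrashReporter
        {
            private readonly IErrorTransport transport;
            public CrashReporter(IErrorTransport transport) { this.transport = transport; }

            // In production this is wired to the platform's unhandled-exception event;
            // in tests we call it directly with a synthetic exception.
            public void Handle(Exception ex)
            {
                transport.Send(ex.GetType().Name + ": " + ex.Message);
            }
        }

        class FakeTransport : IErrorTransport
        {
            public string LastMessage;
            public void Send(string message) { LastMessage = message; }
        }

        [TestFixture]
        public class CrashReporterTests
        {
            [Test]
            public void Uncaught_exception_is_reported_to_the_server()
            {
                var fake = new FakeTransport();
                var reporter = new CrashReporter(fake);

                reporter.Handle(new InvalidOperationException("boom"));

                Assert.That(fake.LastMessage, Is.EqualTo("InvalidOperationException: boom"));
            }
        }

    The wiring of the handler to the real unhandled-exception event still deserves one manual or end-to-end check, but the reporting logic itself stays testable this way.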

    Read the article

  • Where should I draw the line between unit tests and integration tests? Should they be separate?

    - by Earlz
    I have a small MVC framework I've been working on. Its code base definitely isn't big, but it's no longer just a couple of classes. I finally decided to take the plunge and start writing tests for it (yes, I know I should've been doing that all along, but its API was super unstable up until now). Anyway, my plan is to make it extremely easy to test, including integration tests. An example integration test would go something along these lines: fake HTTP request object -> MVC framework -> HTTP response object -> check the response is correct. Because this is all doable without any state or special tools (browser automation etc.), I could actually do this with ease with regular unit test frameworks (I use NUnit). Now the big question. Where exactly should I draw the line between unit tests and integration tests? Should I only test one class at a time (as much as possible) with unit tests? Also, should integration tests be placed in the same project as my unit tests?
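
    For the integration-test half, that request -> framework -> response flow can indeed be driven from plain NUnit. A rough sketch of what one such test might look like - FakeHttpRequest, FakeHttpResponse, and MvcApplication.Process are placeholder names standing in for whatever the framework actually exposes, and the stub at the bottom exists only so the sketch compiles on its own:

        using NUnit.Framework;

        // Placeholder doubles for the framework's request/response abstractions.
        public class FakeHttpRequest
        {
            public string Method = "GET";
            public string Path = "/";
        }

        public class FakeHttpResponse
        {
            public int StatusCode;
            public string Body = "";
        }

        // Stub standing in for the framework under test so the sketch is self-contained.
        public static class MvcApplication
        {
            public static void Process(FakeHttpRequest request, FakeHttpResponse response)
            {
                response.StatusCode = 200;
                response.Body = "<h1>Home</h1>";
            }
        }

        [TestFixture]
        public class HomeControllerIntegrationTests
        {
            [Test]
            public void Get_root_routes_to_home_controller_and_renders_index()
            {
                var request = new FakeHttpRequest { Method = "GET", Path = "/" };
                var response = new FakeHttpResponse();

                // Hypothetical single entry point that runs routing, the controller and the view.
                MvcApplication.Process(request, response);

                Assert.That(response.StatusCode, Is.EqualTo(200));
                StringAssert.Contains("<h1>Home</h1>", response.Body);
            }
        }

    Keeping these in a separate fixture or namespace from the one-class-at-a-time unit tests is often enough separation; a second test project only really pays off once the integration suite gets slow.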

    Read the article

  • Unattended grub configuration after kernel upgrade

    - by bouke
    Today I have been working on automatic deployment of an ubuntu server. I got stuck on automatic updating of the server using apt-get upgrade trying to upgrade to a new kernel. The log looks like this: Setting up linux-image-3.2.0-24-generic (3.2.0-24.39) ... Testing for an existing GRUB menu.lst file ... found: /boot/grub/menu.lst (...) Then a question is presented: Package configuration +---------------------------------¦ +---------------------------------+ ¦ A new version of /boot/grub/menu.lst is available, but the version ¦ ¦ installed currently has been locally modified. ¦ ¦ ¦ ¦ What would you like to do about menu.lst? ¦ ¦ ¦ ¦ install the package maintainer's version ¦ ¦ keep the local version currently installed ¦ ¦ show the differences between the versions ¦ ¦ show a side-by-side difference between the versions ¦ ¦ show a 3-way difference between available versions ¦ ¦ do a 3-way merge between available versions (experimental) ¦ ¦ start a new shell to examine the situation ¦ ¦ ¦ ¦ ¦ ¦ <Ok> ¦ ¦ ¦ +----------------------------------------------------------------------+ The desired outcome would be to select the first option and to continue: Replacing config file /run/grub/menu.lst with new version Updating /boot/grub/menu.lst ... done After running the upgrade by hand, I used debconf-get-selections to inspect the correct answer for the question (see other settings). It seems like update_grub_changeprompt_threeway is the question that should be answered. However, setting this using debconf-set-selections presented me with the same question: debconf-set-selections <<< "grub grub/update_grub_changeprompt_threeway select install_new" apt-get -y dist-upgrade How can this question be automated?

    Read the article

  • S#arp Architecture 1.5 Beta 1 released

    - by AlecWhittington
    Well, it is official: I just finished my first release for S#arp Architecture. While this is only a beta release, it does contain some big upgrades, and we are hoping to get any bugs handled quickly so that we can get the RTM release completed. This will be a short post, with more detailed posts coming in the next few days. A big thanks goes out to Billy McCafferty, Michael Aird, Hoang Tang, and everyone else that had a say in this release. Release notes: Built on top of the ASP.NET MVC 2 RTM release...(read more)

    Read the article

  • Where Facebook Stands Heading Into 2013

    - by Mike Stiles
    In our last blog, we looked at how Twitter is positioned heading into 2013. Now it’s time to take a similar look at Facebook. 2012, for a time at least, seemed to be the era of Facebook-bashing. Between a far-from-smooth IPO, subsequent stock price declines, and anxiety over privacy, the top social network became a target for comedians, politicians, business journalists, and of course those who were prone to Facebook-bash even in the best of times. But amidst the “this is the end of Facebook” headlines, the company kept experimenting, kept testing, kept innovating, and pressing forward, committed as always to the user experience, while concurrently addressing monetization with greater urgency. Facebook enters 2013 with over 1 billion users around the world. Usage grew 41% in Brazil, Russia, Japan, South Korea and India in 2012. In the Middle East and North Africa, an average 21 new signups happen per minute. Engagement and time spent on the site would impress the harshest of critics. Facebook, while not bulletproof, has become such an integrated daily force in users’ lives, it’s getting hard to imagine any future mass rejection. You want to see a company recognizing weaknesses and shoring them up. Mobile was a weakness in 2012 as Facebook was one of many caught by surprise at the speed of user migration to mobile. But new mobile interfaces, better mobile ads, speed upgrades, standalone Messenger and Pages mobile apps, and the big dollar acquisition of Instagram, were a few indicators Facebook won’t play catch-up any more than it has to. As a user, the cool thing about Facebook is, it knows you. The uncool thing about Facebook is, it knows you. The company’s walking a delicate line between the public’s competing desires for customized experiences and privacy. While the company’s working to make privacy options clearer and easier, Facebook’s Paul Adams says data aggregation can move from acting on what a user is engaging with at the moment to a more holistic view of what they’re likely to want at any given time. To help learn about you, there’s Open Graph. Embedded through diverse partnerships, the idea is to surface what you’re doing and what you care about, and help you discover things via your friends’ activities. Facebook’s Director of Engineering, Mike Vernal, says building mobile social apps connected to Facebook in such ways is the next wave of big innovation. Expect to see that fostered in 2013. The Facebook site experience is always evolving. Some users like that about Facebook, others can’t wait to complain about it…on Facebook. The Facebook focal point, the News Feed, is not sacred and is seeing plenty of experimentation with the insertion of modules. From upcoming concerts, events, suggested Pages you might like, to aggregated “most shared” content from social reader apps, plenty could start popping up between those pictures of what your friends had for lunch.  As for which friends’ lunches you see, that’s a function of the mythic EdgeRank…which is also tinkered with. When Facebook changed it in September, Page admins saw reach go down and the high anxiety set in quickly. Engagement, however, held steady. The adjustment was about relevancy over reach. (And oh yeah, reach was something that could be charged for). Facebook wants users to see what they’re most likely to like, based on past usage and interactions. Adding to the “cream must rise to the top” philosophy, they’re now even trying out ordering post comments based on the engagement the comments get. 
Boy, it’s getting competitive out there for a social engager. Facebook has to make $$$. To do that, they must offer attractive vehicles to marketers. There are a myriad of ad units. But a key Facebook marketing concept is the Sponsored Story. It’s key because it encourages content that’s good, relevant, and performs well organically. If it is, marketing dollars can amplify it and extend its reach. Brands can expect the rollout of a search product and an ad network. That’s a big deal. It takes, as Open Graph does, the power of Facebook’s user data and carries it beyond the Facebook environment into the digital world at large. No one could target like Facebook can, and some analysts think it could double their roughly $5 billion revenue stream. As every potential revenue nook and cranny is explored, there are the users themselves. In addition to Gifts, Facebook thinks users might pay a few bucks to promote their own posts so more of their friends will see them. There’s also word classifieds could be purchased in News Feeds, though they won’t be called classifieds. And that’s where Facebook stands; a wildly popular destination, a part of our culture, with ever increasing functionalities, the biggest of big data, revenue strategies that appeal to marketers without souring the user experience, new challenges as a now public company, ongoing privacy concerns, and innovations that carry Facebook far beyond its own borders. Anyone care to write a “this is the end of Facebook” headline? @mikestilesPhoto via stock.schng

    Read the article

  • SQL SERVER – Storing 64-bit Unsigned Integer Value in Database

    - by Pinal Dave
    Here is a very interesting question I received in an email just the other day. Some questions are just so good that they make me wonder how I have not faced them first hand. Anyway, here is the question - “Pinal, I am migrating my database from MySQL to SQL Server and I have faced a unique situation. I have been using an unsigned 64-bit integer in MySQL, but when I try to migrate that column to SQL Server, I am facing an issue as there is no datatype which I find appropriate for my column. It is now too late to change the datatype and I need an immediate solution. One chain of thought was to change the data type of the column from unsigned 64-bit (BIGINT) to VARCHAR(n), but that will just change the data type such that I will face quite a lot of performance-related issues in the future. In SQL Server we also have the BIGINT data type, but that is a signed 64-bit datatype. The BIGINT datatype in SQL Server has a range of -2^63 (-9,223,372,036,854,775,808) to 2^63-1 (9,223,372,036,854,775,807). However, my number can be larger than this. Is there any way I can store my big 64-bit unsigned integer without losing much of the performance by converting it to VARCHAR?” Very interesting question. For the sake of argument, we could ask the user whether such a big number is really needed, or, if it is about an identity column, I really doubt that the table will ever grow beyond that range. Here the real question which I found interesting was how to store a 64-bit unsigned integer value in SQL Server without converting it to a string data type. After thinking a bit, I found a fairly simple answer: I can use the NUMERIC data type. I can use the NUMERIC(20) datatype for a 64-bit unsigned integer value, the NUMERIC(10) datatype for a 32-bit unsigned integer value and the NUMERIC(5) datatype for a 16-bit unsigned integer value. The NUMERIC datatype supports a maximum precision of 38. Now here is another thing to keep in mind. Using the NUMERIC datatype will indeed accept the 64-bit unsigned integer, but it will also allow negative values to be entered in the future. Hence, you will need to put an additional constraint on the column so that it only accepts positive integers. Here is another big concern: SQL Server will store the number as NUMERIC and will treat it as a positive integer for all practical purposes. You will have to write application logic to interpret it as a 64-bit unsigned integer. On the other side, if you are using unsigned integers in your application, there is a good chance that you already have logic taking care of the same. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Datatype
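
    A small sketch of how the NUMERIC(20) approach might look end to end from C#, assuming a table named UnsignedDemo (the table name and connection string are placeholders). The DDL string includes the positive-value CHECK constraint, and the round trip goes through decimal, which holds the full ulong range without loss:

        using System;
        using System.Data;
        using System.Data.SqlClient;

        class UnsignedBigIntDemo
        {
            static void Main()
            {
                const string connectionString = "Server=.;Database=Test;Integrated Security=true"; // placeholder

                using (var conn = new SqlConnection(connectionString))
                {
                    conn.Open();

                    // NUMERIC(20,0) covers 0 .. 18,446,744,073,709,551,615;
                    // the CHECK constraint rejects the negative values NUMERIC would otherwise accept.
                    new SqlCommand(
                        "IF OBJECT_ID('UnsignedDemo') IS NULL " +
                        "CREATE TABLE UnsignedDemo (Value NUMERIC(20,0) NOT NULL CHECK (Value >= 0))",
                        conn).ExecuteNonQuery();

                    ulong big = ulong.MaxValue;   // 18,446,744,073,709,551,615

                    var insert = new SqlCommand("INSERT INTO UnsignedDemo (Value) VALUES (@v)", conn);
                    var p = insert.Parameters.Add("@v", SqlDbType.Decimal);
                    p.Precision = 20;
                    p.Scale = 0;
                    p.Value = (decimal)big;       // ulong -> decimal is lossless for this range
                    insert.ExecuteNonQuery();

                    // Reading back: the application interprets the NUMERIC as an unsigned 64-bit value.
                    var readBack = (decimal)new SqlCommand(
                        "SELECT TOP 1 Value FROM UnsignedDemo", conn).ExecuteScalar();
                    ulong restored = (ulong)readBack;
                    Console.WriteLine(restored);
                }
            }
        }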

    Read the article

  • UPK Basics Hands On Lab at Oracle Open World Latin America

    - by user581320
    Oracle Open World Latin America 2012 will be in Sao Paulo, Brazil, December fourth through the sixth. There's so much to see and learn from at Oracle OpenWorld: keynotes, technical sessions, Oracle and partner demonstrations, hands-on labs, networking events, and more. I will be presenting a hands-on lab at the show this year, Introduction to Oracle User Productivity Kit - Learn the Basics, in the afternoon on Tuesday, December 4th. This nonstop one-hour lab covers topics from getting started with UPK to the basics of creating an outline and some typical content, concluding with publishing some of the many outputs UPK is capable of. If you are planning on attending the show, come by the lab and see what UPK is all about. I'll be in Sao Paulo all week to fulfill my need to extend California's summer by another week (trip bonus) and to meet and discuss all things UPK with our customers and partners. If you're not registered for the show there is still time. Check out the Oracle Open World Latin America 2012 web site for all the details. I look forward to seeing you in Sao Paulo! Peter Maravelias, Principal Product Strategy Manager, Oracle UPK

    Read the article

  • What should I learn to create web-services like ones listed? [closed]

    - by Gerald Blizz
    I am very inspired by websites like imgur, dropbox, screencloud, maybe w3schools... you get my point. Fresh web services built on some new idea; not big portals, but something simple yet useful and used by many people, something simple and new. What aspects of my developer career should I focus on to be able to build such things on my own if I have enough ideas? (Sure, if it ends up being popular I can get more developers to help me and so on, but at first I can do it alone, right?) I am currently a PHP web developer; I know HTML+CSS+JS+AJAX+JQuery. But even so, there is still web design, and there are a lot of paths: websites for enterprise, startups, web services, entertainment websites and serious bank/document flow systems, frameworks used for big systems, different approaches for little ones, etc. Which path should I take to be able to start my own projects like the ones I listed above that inspire me?

    Read the article

  • Video Presentation and Demo of Oracle Advanced Analytics & Data Mining

    - by Mike.Hallett(at)Oracle-BI&EPM
    For a video presentation and demonstration of Oracle Advanced Analytics & Data Mining click here. (This plays a large MP4 file in a browser: access is from Google.docs, and this works best with Google CHROME). This one hour session focuses primarily on the Oracle Data Mining component of the Oracle Advanced Analytics Option along with Oracle R Enterprise and is tied to the Oracle SQL Developer Days virtual and onsite events and is presented by Oracle’s Director for Advanced Analytics, Charlie Berger, covering:
      Big Data + Big Data Analytics
      Competing on analytics & value proposition
      What is data mining? Typical use cases
      Oracle Data Mining high performance in-database SQL based data mining functions
      Exadata "smart scan" scoring
      Oracle Data Miner GUI (an Extension that ships with SQL Developer)
      Oracle Business Intelligence EE + Oracle Data Mining results/predictions in dashboards
      Applications "powered by Oracle Data Mining" for factory installed predictive analytics methodologies
      Oracle R Enterprise
    Please contact [email protected] should you have any questions.

    Read the article

  • Recorded YouTube-like presentation and "live" demos of Oracle Advanced Analytics

    - by chberger
    Ever want to just sit and watch a YouTube-like presentation and "live" demos of Oracle Advanced Analytics? Then click here! This 1+ hour long session focuses primarily on the Oracle Data Mining component of the Oracle Advanced Analytics Option and is tied to the Oracle SQL Developer Days virtual and onsite events. I cover:
      Big Data + Big Data Analytics
      Competing on analytics & value proposition
      What is data mining? Typical use cases
      Oracle Data Mining high performance in-database SQL based data mining functions
      Exadata "smart scan" scoring
      Oracle Data Miner GUI (an Extension that ships with SQL Developer)
      Oracle Business Intelligence EE + Oracle Data Mining results/predictions in dashboards
      Applications "powered by Oracle Data Mining" for factory installed predictive analytics methodologies
      Oracle R Enterprise
    Please contact [email protected] should you have any questions. Hope you enjoy! Charlie Berger, Sr. Director of Product Management, Oracle Data Mining & Advanced Analytics, Oracle Corporation

    Read the article

  • Is there a better term than "smoothness" or "granularity" to describe this language feature?

    - by Chris
    One of the best things about programming is the abundance of different languages. There are general purpose languages like C++ and Java, as well as little languages like XSLT and AWK. When comparing languages, people often use things like speed, power, expressiveness, and portability as the important distinguishing features. There is one characteristic of languages I consider to be important that, so far, I haven't heard [or been able to come up with] a good term for: how well a language scales from writing tiny programs to writing huge programs. Some languages make it easy and painless to write programs that only require a few lines of code, e.g. task automation. But those languages often don't have enough power to solve large problems, e.g. GUI programming. Conversely, languages that are powerful enough for big problems often require far too much overhead for small problems. This characteristic is important because problems that look small at first frequently grow in scope in unexpected ways. If a programmer chooses a language appropriate only for small tasks, scope changes can require rewriting code from scratch in a new language. And if the programmer chooses a language with lots of overhead and friction to solve a problem that stays small, it will be harder for other people to use and understand than necessary. Rewriting code that works fine is the single most wasteful thing a programmer can do with their time, but using a bazooka to kill a mosquito instead of a flyswatter isn't good either. Here are some of the ways this characteristic presents itself:
      Can be used interactively - there is some environment where programmers can enter commands one by one
      Requires no more than one file - neither project files nor makefiles are required for running in batch mode
      Can easily split code across multiple files - files can reference each other, or there is some support for modules
      Has good support for data structures - supports structures like arrays, lists, and especially classes
      Supports a wide variety of features - features like networking, serialization, XML, and database connectivity are supported by standard libraries
    Here's my take on how C#, Python, and shell scripting measure up. Python scores highest.
      Feature          C#       Python   shell scripting
      ---------------  -------  -------  ---------------
      Interactive      poor     strong   strong
      One file         poor     strong   strong
      Multiple files   strong   strong   moderate
      Data structures  strong   strong   poor
      Features         strong   strong   strong
    Is there a term that captures this idea? If not, what term should I use? Here are some candidates:
      Scalability - already used to describe language performance, so it's not a good idea to overload it in the context of language syntax
      Granularity - expresses the idea of being good just for big tasks versus being good for big and small tasks, but doesn't express anything about data structures
      Smoothness - expresses the idea of low friction, but doesn't express anything about strength of data structures or features
    Note: Some of these properties are more correctly described as belonging to a compiler or IDE than the language itself. Please consider these tools collectively as the language environment. My question is about how easy or difficult languages are to use, which depends on the environment as well as the language.

    Read the article

  • Design Pattern for Complex Data Modeling

    - by Aaron Hayman
    I'm developing a program that has a SQL database as a backing store. As a very broad description, the program itself allows a user to generate records in any number of user-defined tables and make connections between them. As for specs: Any record generated must be able to be connected to any other record in any other user table (excluding itself...the record, not the table). These "connections" are directional, and the list of connections a record has is user ordered. Moreover, a record must "know" of connections made from it to others as well as connections made to it from others. The connections are kind of the point of this program, so there is a strong possibility that the number of connections made is very high, especially if the user is using the software as intended. A record's field can also include aggregate information from it's connections (like obtaining average, sum, etc) that must be updated on change from another record it's connected to. To conserve memory, only relevant information must be loaded at any one time (can't load the entire database in memory at load and go from there). I cannot assume the backing store is local. Right now it is, but eventually this program will include syncing to a remote db. Neither the user tables, connections or records are known at design time as they are user generated. I've spent a lot of time trying to figure out how to design the backing store and the object model to best fit these specs. In my first design attempt on this, I had one object managing all a table's records and connections. I attempted this first because it kept the memory footprint smaller (records and connections were simple dicts), but maintaining aggregate and link information between tables became....onerous (ie...a huge spaghettified mess). Tracing dependencies using this method almost became impossible. Instead, I've settled on a distributed graph model where each record and connection is 'aware' of what's around it by managing it own data and connections to other records. Doing this increases my memory footprint but also let me create a faulting system so connections/records aren't loaded into memory until they're needed. It's also much easier to code: trace dependencies, eliminate cycling recursive updates, etc. My biggest problem is storing/loading the connections. I'm not happy with any of my current solutions/ideas so I wanted to ask and see if anybody else has any ideas of how this should be structured. Connections are fairly simple. They contain: fromRecordID, fromTableID, fromRecordOrder, toRecordID, toTableID, toRecordOrder. Here's what I've come up with so far: Store all the connections in one big table. If I do this, either I load all connections at once (one big db call) or make a call every time a user table is loaded. The big issue here: the size of the connections table has the potential to be huge, and I'm afraid it would slow things down. Store in separate tables all the outgoing connections for each user table. This is probably the worst idea I've had. Now my connections are 'spread out' over multiple tables (one for each user table), which means I have to make a separate DB called to each table (or make a huge join) just to find all the incoming connections for a particular user table. I've avoided making "one big ass table", but I'm not sure the cost is worth it. Store in separate tables all outgoing AND incoming connections for each user table (using a flag to distinguish between incoming vs outgoing). 
This is the idea I'm leaning towards, but it will essentially double the total DB storage for all the connections (as each connection will be stored in two tables). It also means I have to make sure connection information is kept in sync in both places. This is obviously not ideal, but it does mean that when I load a user table, I only need to load one 'connection' table and have all the information I need. This also presents a separate problem, that of connection object creation. Since each user table has a list of all connections, there are two opportunities for a connection object to be made. However, connection objects (designed to facilitate communication between records) should only be created once. This means I'll have to devise a common caching/factory object to make sure only one connection object is made per connection. Does anybody have any ideas of a better way to do this? Once I've committed to a particular design pattern I'm pretty much stuck with it, so I want to make sure I've come up with the best one possible.
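
    As an illustration of that last point only - not a recommendation among the three storage options - here is a minimal C# sketch of a connection identity plus a caching factory that guarantees a single object per connection. The field names come from the question (the two order fields live on the object rather than in the key); the class names, integer IDs, and the dictionary cache are assumptions:

        using System;
        using System.Collections.Generic;

        // Identity of a connection, built from four of the six fields described in the question.
        public sealed class ConnectionKey : IEquatable<ConnectionKey>
        {
            public readonly int FromTableId, FromRecordId, ToTableId, ToRecordId;

            public ConnectionKey(int fromTableId, int fromRecordId, int toTableId, int toRecordId)
            {
                FromTableId = fromTableId; FromRecordId = fromRecordId;
                ToTableId = toTableId; ToRecordId = toRecordId;
            }

            public bool Equals(ConnectionKey other)
            {
                return other != null &&
                       FromTableId == other.FromTableId && FromRecordId == other.FromRecordId &&
                       ToTableId == other.ToTableId && ToRecordId == other.ToRecordId;
            }

            public override bool Equals(object obj) { return Equals(obj as ConnectionKey); }

            public override int GetHashCode()
            {
                unchecked
                {
                    return (((FromTableId * 397) ^ FromRecordId) * 397 ^ ToTableId) * 397 ^ ToRecordId;
                }
            }
        }

        public class Connection
        {
            public ConnectionKey Key;
            public int FromRecordOrder;   // user-defined ordering on the "from" side
            public int ToRecordOrder;     // user-defined ordering on the "to" side
        }

        // Factory/cache: whichever side (outgoing or incoming table) asks first creates the object;
        // every later request for the same key gets the same instance.
        public class ConnectionFactory
        {
            private readonly Dictionary<ConnectionKey, Connection> cache =
                new Dictionary<ConnectionKey, Connection>();

            public Connection GetOrCreate(ConnectionKey key)
            {
                Connection existing;
                if (!cache.TryGetValue(key, out existing))
                {
                    existing = new Connection { Key = key };
                    cache[key] = existing;
                }
                return existing;
            }
        }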

    Read the article

  • The Arab HEUG is now a reality, and other random thoughts

    - by user9147039
I just returned from Doha, Qatar, where the first-of-its-kind HEUG (Higher Education User Group) meeting for institutions in the Middle East and North Africa was held at Qatar University and jointly hosted by Dammam University from Saudi Arabia. Over 80 delegates attended, including representation from education institutions in Oman, Saudi Arabia, Lebanon, and Qatar. There are many other regional HEUG organizations in place (in Australia/New Zealand, APAC, EMEA, as well as smaller regional HEUG's in the Netherlands, South Africa, and in regions of the US), but it was truly an accomplishment to see this Middle East/North Africa group organize and launch their chapter with a meeting of this quality. The group will be known as the Arab HEUG going forward, and I am excited about the prospects for sharing between the institutions and for the growth of Oracle solutions in the region. In particular the hosts for the event (Qatar University) did a masterful job with logistics and organization, and the quality of the event was a testament to their capabilities. Among the more interesting and enlightening presentations I attended were one from Dammam University on the lessons learned from their implementation of Campus Solutions and transition off of Banner, as well as Qatar University's use of E-Business Suite for grants management (both pre- and post-award). The most notable fact coming from this latter presentation was the fit (89%) of e-Business Suite Grants to the university's requirements. In a few weeks' time we will be convening the 5th meeting of the Oracle Education & Research Industry Strategy Council in Redwood Shores (5th since my advent into my current role). The main topics of discussion will be around our Higher Education Applications Strategy for the future, including cloud approaches to ERP (HCM, Finance, and Student Information Systems), and some case studies on the benefits of leveraging delivered functionality and extensibility in the software (versus customization). On the second day of the event we will turn our attention to Oracle in Research and also budgeting and planning in higher education. Both of these sessions will include significant participation from council members in the form of panel discussions. Our EVPs for Systems (John Fowler) and for Global Cloud Services and North America application sales (Joanne Olson) will join us for the discussion. I recently read a couple of articles that were surprising to me. The first was from Inside Higher Ed on October 15, entitled "As colleges prepare for major software upgrades, Kuali tries to woo them from corporate vendors." It continues to disappoint me that after all this time we are still debating whether it is better to build enterprise software through open or community source initiatives when fully functional, flexible, supported, and widely adopted options exist in the marketplace. A decade or more ago, when these solutions were relatively immature and there was a great deal of turnover in the market, I could appreciate initiatives like Kuali. But let’s not kid ourselves – the real objective of this movement is to counter a perceived predatory commercial software industry. Again, when commercial solutions are deployed as written without significant customization, and standard business processes are adopted, the cost of these solutions (relative to the value delivered) is quite low, and certainly much lower than the massive investment (and risk) in in-house developers to support a bespoke community source system. 
In this era of cost pressures in education and the need to refocus resources on teaching, learning, and research, I believe it’s bordering on irresponsible to continue to pursue open-source ERP. Many of the adopters' total costs are staggering, and they have little to show for their efforts and expended resources. The second article was recently in the Chronicle of Higher Education and was entitled “’Big Data’ Is Bunk, Obama Campaign’s Tech Guru Tells University Leaders.” This one was so outrageous I almost don’t want to legitimize it by referencing it here. In the article the writer relays statements made by Harper Reed, President Obama’s former CTO for his 2012 re-election campaign, that big data solutions in education have no relevance and are akin to snake oil. He goes on to state that while he’s a fan of data-driven decision making in education, most of the necessary analysis can be accomplished in Excel spreadsheets. Yeah… right. This is exactly what ails education (higher education in particular): dozens of shadow and siloed systems running on spreadsheets, with limited-to-no enterprise-wide initiatives to harness the data-rich environment that is a higher ed institution and transform the data into usable information. I’ll grant Mr. Reed that “Big Data” is overused and hackneyed, but imperatives like improving student success in higher education are classic big data problems that data mining and predictive analytics can address. Further, higher ed needs to be producing massively more data scientists and analysts than are currently in the pipeline, to further this discipline and the application of these tools to many, many other problems across multiple industries.

    Read the article
