Search Results

Search found 22308 results on 893 pages for 'floating point'.


  • How to Load Oracle Tables From Hadoop Tutorial (Part 5 - Leveraging Parallelism in OSCH)

    - by Bob Hanckel
    Using OSCH: Beyond Hello World

    In the previous post we discussed a "Hello World" example for OSCH, focusing on the mechanics of getting a toy end-to-end example working. In this post we are going to talk about how to make it work for big data loads. We will explain how to optimize an OSCH external table for load, paying particular attention to Oracle's DOP (degree of parallelism), the number of external table location files we use, and the number of HDFS files that make up the payload. We will provide some rules that serve as best practices when using OSCH. The assumption is that you have read the previous post, have some end-to-end OSCH external tables working, and now want to ramp up the size of the loads.

    Using OSCH External Tables for Access and Loading

    OSCH external tables are no different from any other Oracle external tables. They can be used to access HDFS content using Oracle SQL:

        SELECT * FROM my_hdfs_external_table;

    or to use the same SQL access to load a table in Oracle:

        INSERT INTO my_oracle_table SELECT * FROM my_hdfs_external_table;

    To speed up the load time, you will want to control the degree of parallelism (i.e. DOP) and add two SQL hints:

        ALTER SESSION FORCE PARALLEL DML PARALLEL 8;
        ALTER SESSION FORCE PARALLEL QUERY PARALLEL 8;
        INSERT /*+ append pq_distribute(my_oracle_table, none) */ INTO my_oracle_table
        SELECT * FROM my_hdfs_external_table;

    There are various ways of hinting at what level of DOP you want to use. The ALTER SESSION statements above force the issue, assuming you (the user of the session) are allowed to assert the DOP (more on that in the next section). Alternatively you could embed additional parallel hints directly into the INSERT and SELECT clauses respectively:

        /*+ parallel(my_oracle_table,8) */
        /*+ parallel(my_hdfs_external_table,8) */

    Note that the "append" hint lets you load a target table by reserving space above a given "high watermark" in storage, and uses Direct Path load. In other words, it doesn't try to fill blocks that are already allocated and partially filled; it uses unallocated blocks. It is an optimized way of loading a table without incurring the typical resource overhead associated with run-of-the-mill inserts. The "pq_distribute" hint in this context unifies the INSERT and SELECT operators to make data flow during a load more efficient.

    Finally, your target Oracle table should be defined with the "NOLOGGING" and "PARALLEL" attributes. The combination of "NOLOGGING" and the "append" hint disables REDO logging and its overhead. The "PARALLEL" clause tells Oracle to try to use parallel execution when operating on the target table.
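    Putting those pieces together, here is a consolidated sketch of the load pattern described above (the table and its columns are illustrative; the hints and session settings are the ones discussed in this post):

        -- Hypothetical target table, defined with NOLOGGING and PARALLEL
        CREATE TABLE my_oracle_table (
          id    NUMBER,
          col1  VARCHAR2(100)
        ) NOLOGGING PARALLEL;

        -- Force the DOP for this session (assumes the DBA allows it)
        ALTER SESSION FORCE PARALLEL DML PARALLEL 8;
        ALTER SESSION FORCE PARALLEL QUERY PARALLEL 8;

        -- Direct path, parallel load from the OSCH external table
        INSERT /*+ append pq_distribute(my_oracle_table, none) */ INTO my_oracle_table
        SELECT * FROM my_hdfs_external_table;

        COMMIT;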
    Determine Your DOP

    It might feel natural to build your datasets in Hadoop and then afterwards figure out how to tune the OSCH external table definition, but you should start backwards. You should focus on the Oracle database, specifically the DOP you want to use when loading (or accessing) HDFS content using external tables. The DOP in Oracle controls how many PQ slaves are launched in parallel when executing an external table. Typically the DOP is something you want Oracle to control transparently, but for loading content from Hadoop with OSCH, it's something that you will want to control.

    Oracle computes the maximum DOP that can be used by an Oracle user. The maximum value that can be assigned is an integer typically equal to the number of CPUs on your Oracle instances, times the number of cores per CPU, times the number of Oracle instances. For example, suppose you have a RAC environment with 2 Oracle instances, and suppose that each system has 2 CPUs with 32 cores. The maximum DOP would be 128 (i.e. 2*2*32). In point of fact, if you are running on a production system, the maximum DOP you are allowed to use will be restricted by the Oracle DBA. This is because using the system maximum DOP can subsume all system resources on Oracle and starve anything else that is executing. Obviously on a production system where resources need to be shared 24x7, this can't be allowed to happen.

    The use cases for being able to run OSCH with a maximum DOP are when you have exclusive access to all the resources on an Oracle system. This can be when you are first seeding tables in a new Oracle database, or when normal activity in the production database can safely be taken off-line for a few hours to free up resources for a big incremental load. Using OSCH on high-end machines (specifically Oracle Exadata and Oracle BDA cabled with InfiniBand), this mode of operation can load up to 15TB per hour.

    The bottom line is that you should first figure out what DOP you will be allowed to run with by talking to the DBAs who manage the production system. You then use that number to derive the number of location files, and (optionally) the number of HDFS data files that you want to generate, assuming that is flexible.

    Rule 1: Find out the maximum DOP you will be allowed to use with OSCH on the target Oracle system.

    Determining the Number of Location Files

    Let's assume that the DBA told you that your maximum DOP is 8. You want the number of location files in your external table to be big enough to utilize all 8 PQ slaves, and you want them to represent equally balanced workloads. Remember that location files in OSCH are metadata lists of HDFS files and are created using OSCH's External Table tool. They also represent the workload size given to an individual Oracle PQ slave (i.e. a PQ slave is given one location file to process at a time, and only it will process the contents of that location file).

    Rule 2: The size of the workload of a single location file (and the PQ slave that processes it) is the sum of the content size of the HDFS files it lists.

    For example, if a location file lists 5 HDFS files which are each 100GB in size, the workload size for that location file is 500GB. The number of location files that you generate is something you control by providing a number as input to OSCH's External Table tool.

    Rule 3: The number of location files chosen should be a small multiple of the DOP.

    Each location file represents one workload for one PQ slave, so the goal is to keep all slaves busy and to give them equivalent workloads. Obviously if you run with a DOP of 8 but have 5 location files, only five PQ slaves will have something to do; the other three will have nothing to do and will quietly exit. If you run with 9 location files, the PQ slaves will pick up the first 8 location files and, assuming they have equal workloads, will finish at about the same time. But the first PQ slave to finish its job will then be rescheduled to process the ninth location file, potentially doubling the end-to-end processing time. So for this DOP, using 8, 16, or 32 location files would be a good idea.
    Determining the Number of HDFS Files

    Let's start with the next rule and then explain it:

    Rule 4: The number of HDFS files should be a multiple of the number of location files, and the files should be roughly the same size.

    In our running example, the DOP is 8. This means that the number of location files should be a small multiple of 8. Remember that each location file represents a list of unique HDFS files to load, and that the sum of the files listed in each location file is a workload for one Oracle PQ slave. The OSCH External Table tool will look in an HDFS directory for a set of HDFS files to load. It will generate N location files (where N is the value you gave to the tool). It will then try to divvy up the HDFS files and do its best to make sure the workload across location files is as balanced as possible. (The tool uses a greedy algorithm that grabs the biggest HDFS file and delegates it to a particular location file. It then looks for the next biggest file and puts it in some other location file, and so on.) The tool's ability to balance is reduced if the HDFS file sizes are grossly out of balance or the files are too few.

    For example, suppose my DOP is 8 and the number of location files is 8. Suppose I have only 8 HDFS files, where one file is 900GB and the others are 100GB. When the tool tries to balance the load it will be forced to put the singleton 900GB file into one location file, and put each of the 100GB files in the 7 remaining location files. The load balance skew is 9 to 1. One PQ slave will be working overtime, while the slacker PQ slaves are off enjoying happy hour. If however the total payload (1600GB) were broken up into smaller HDFS files, the OSCH External Table tool would have an easier time generating a list where the workload for each location file is relatively the same. Applying Rule 4 above to our DOP of 8, we could divide the workload into 160 files that were approximately 10GB in size. For this scenario the OSCH External Table tool would populate each location file with 20 HDFS file references, and all location files would have similar workloads (approximately 200GB per location file).

    As a rule, when the OSCH External Table tool has to deal with more and smaller files it will be able to create more balanced loads. How small should HDFS files get? Not so small that the HDFS open and close file overhead starts having a substantial impact. For our performance test system (Exadata/BDA with InfiniBand), I compared three OSCH loads of 1 TiB. One load had 128 HDFS files living in 64 location files, where each HDFS file was about 8GB. I then did the same load with 12800 files, where each HDFS file was about 80MB in size. The end-to-end load time was virtually the same. However, when I got ridiculously small (i.e. 128000 files at about 8MB per file), it started to make an impact and slow down the load time.

    What happens if you break rules 3 or 4 above? Nothing draconian; everything will still function. You just won't be taking full advantage of the generous DOP that was allocated to you by your friendly DBA. The key point of the rules articulated above is this: if you know that HDFS content is ultimately going to be loaded into Oracle using OSCH, it makes sense to chop it up into the right number of files, roughly the same size, derived from the DOP that you expect to use for loading.
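    To make the balancing behavior concrete, here is a minimal Python sketch of the kind of greedy algorithm described above. This is only an illustration of the idea, not OSCH's actual implementation:

        # Greedy balancing sketch: hand the biggest remaining HDFS file
        # to the location file with the smallest accumulated workload.
        def balance(hdfs_files, num_location_files):
            # hdfs_files: list of (name, size_in_bytes) tuples
            location_files = [{"files": [], "total": 0} for _ in range(num_location_files)]
            for name, size in sorted(hdfs_files, key=lambda f: f[1], reverse=True):
                target = min(location_files, key=lambda lf: lf["total"])
                target["files"].append(name)
                target["total"] += size
            return location_files

    Feeding this the skewed example above (one 900GB file plus seven 100GB files across 8 location files) reproduces the 9-to-1 skew, while 160 files of roughly 10GB come out balanced at about 200GB per location file.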
    Next Steps

    So far we have talked about OLH and OSCH as alternative models for loading. That's not quite the whole story. They can be used together in a way that provides for more efficient OSCH loads and allows one to be more flexible about scheduling load operations on a Hadoop cluster and an Oracle database. The next lesson will talk about Oracle Data Pump files generated by OLH and loaded using OSCH, and will also outline the pros and cons of the various load methods. This will be followed up with a final tutorial lesson focusing on how to optimize OLH and OSCH for use on Oracle's engineered systems: specifically Exadata and the BDA.

    Read the article

  • Creating a loading screen in HTML5

    - by espais
    I am having some issues finding a decent tutorial for building a loading screen in HTML5. To be quite honest, I'm not sure exactly where to begin... My project is essentially a simple HTML5 game, where I'll be loading in various sprite sheets and tilesets. They'll be reasonably small, but I'd like to show a little loading spinner instead of a blank screen while all the resources are loaded. Much appreciated if somebody could point me in the right direction, be it decent links or code samples to get me going... my Google-fu is lacking today! For clarification, I need to figure out how to load the resources themselves, as opposed to finding a spinner. For instance, how to calculate that X% has loaded.
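    For what it's worth, a minimal sketch of the counting approach in plain JavaScript (the asset names and the progress element are made up):

        // Hypothetical preloader: count loaded images to compute a percentage.
        var urls = ['sprites.png', 'tiles.png', 'enemies.png'];
        var loaded = 0;
        urls.forEach(function (url) {
            var img = new Image();
            img.onload = function () {
                loaded++;
                var pct = Math.round(100 * loaded / urls.length);
                document.getElementById('progress').textContent = pct + '%';
                if (loaded === urls.length) {
                    // all resources ready: hide the spinner, start the game
                }
            };
            img.src = url;
        });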

    Read the article

  • IIS not serving up .dat files.

    - by Stu
    Hi all, I have an ASP MVC web application that uses a plugin to load images and points for a 3D application. When debugging with the Visual Studio development server, the images and the points are served up great... http://i148.photobucket.com/albums/s19/littleniv/Debugging/local.png (second image: same URL but iis.png). When running in IIS 7, though, the .dat point files are not served and produce a 404. I've noticed the caching is marked as private in Fiddler, but I don't know what this means. Can anyone help? Cheers, Stu
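    If the 404 is simply IIS 7 refusing to serve a static file type it does not recognize, one common fix (sketched here; adjust the MIME type to taste) is to register .dat in web.config:

        <!-- web.config: let IIS 7 serve .dat as static content -->
        <system.webServer>
          <staticContent>
            <mimeMap fileExtension=".dat" mimeType="application/octet-stream" />
          </staticContent>
        </system.webServer>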

    Read the article

  • Handling auto-incrementing IDENTITY SQL Server fields with LINQ to SQL in C#

    - by Maxim Z.
    I'm building an ASP.NET MVC site that uses LINQ to SQL to connect to SQL Server, where I have a table that has an IDENTITY bigint primary key column that represents an ID. In one of my code methods, I need to create an object of that table to get its ID, which I will place into another object based on another table (FK-to-PK relationship). At what point is the IDENTITY column value generated, and how can I obtain it from my code? Is the right approach to:
    1. Create the object that has the IDENTITY column
    2. Do an InsertOnSubmit() and SubmitChanges() to submit the object to the database table
    3. Get the value of the ID property of the object
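    That outline matches how LINQ to SQL behaves: SQL Server generates the IDENTITY value at INSERT time, and SubmitChanges() copies it back into the entity's ID property. A minimal sketch, with entity and property names made up:

        // Hypothetical entities generated by the LINQ to SQL designer.
        var parent = new Parent { Name = "example" };
        db.Parents.InsertOnSubmit(parent);
        db.SubmitChanges();              // INSERT runs; identity is synced back

        long newId = parent.Id;          // the generated bigint IDENTITY value
        var child = new Child { ParentId = newId };
        db.Children.InsertOnSubmit(child);
        db.SubmitChanges();

    If the FK association is mapped in the designer, assigning child.Parent = parent and submitting once would presumably let LINQ to SQL wire up the key itself.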

    Read the article

  • Reloading Rails Directories: Not Lib!

    - by yar
    I have checked out several questions on this, including all of those you see next to the question. Unfortunately, I'm not working with a plugin, and I don't want to work in lib. I have a directory called File.join(Rails.root, 'classes') and I'd like the classes in this directory to reload automatically in dev. In my environment.rb I have this line, which works fine and blows up if the path isn't there: config.load_paths << File.join(Rails.root, 'classes'). The reloading line in my development.rb also works fine: require_dependency File.join(Rails.root, 'classes', 'blah.rb'), which blows up if the file is not there (a good sign). However, the file doesn't reload. This all works if the file is in the root of lib and I use the require_dependency line, but my whole point is to get stuff out of lib, as suggested here.

    Read the article

  • How to reset lstset settings for listings?

    - by drittesbinom
    At the top of my tex document, I set my source code listing format with:

        \lstset{language=java}
        \lstset{numbers=left, numberstyle=\tiny, numbersep=5pt, breaklines=true}
        \lstset{emph={square}, emphstyle=\color{red}, emph={[2]root,base}, emphstyle={[2]\color{blue}}}

    because I merely list Java source code. At one point in my document, I had to reformat for a single listing with:

        \lstset{commentstyle=\footnotesize\textit}
        \lstset{basicstyle=\ttfamily\fontsize{11}{12}\selectfont}
        \lstset{literate= {!=} {$\neq$}{2}}

    Now I have the problem that my previous Java formatting for listings is destroyed, and I don't know how to reset the listings settings to their defaults. How can this be avoided?
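    One way to avoid resetting anything, assuming standard listings behavior (\lstset assignments are local to the current TeX group), is to confine the one-off settings to a group; a sketch (note \itshape in place of \textit, since the latter expects an argument):

        \begingroup
          \lstset{commentstyle=\footnotesize\itshape}
          \lstset{basicstyle=\ttfamily\fontsize{11}{12}\selectfont}
          \lstset{literate= {!=} {$\neq$}{2}}
          \begin{lstlisting}
        if (a != b) { return; }
          \end{lstlisting}
        \endgroup
        % from here on, the Java settings from the preamble apply again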

    Read the article

  • Revisions: algorithm and data structure

    - by SODA
    Hi, I need ideas for structuring and processing data with revisions. For example, I have a database of objects (e.g. cars). Each object has a number of properties, which can be arbitrary, so there's no set schema to describe these objects. These objects are probably saved as key-value pairs. Now I need to change a property of an object. I don't want to completely rewrite it - I want to be able to go back and see the history of changes to these properties; that's why I want to add a new property and keep the old one (so I guess a timestamp would do the job of telling which property is the latest). At the same time I want to be able to get info about any object in a snap, with only the latest versions of each of the properties. Any ideas what would be the best approach? At least please point me in the right direction. Thanks!
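    As one hedged starting point, a property table where every change is a new timestamped row supports both requirements (full history, plus latest-only reads); names here are made up:

        -- Hypothetical schema: every property write appends a row.
        CREATE TABLE object_properties (
          object_id   INTEGER      NOT NULL,
          prop_key    VARCHAR(100) NOT NULL,
          prop_value  VARCHAR(4000),
          changed_at  TIMESTAMP    NOT NULL,
          PRIMARY KEY (object_id, prop_key, changed_at)
        );

        -- Latest value of every property for one object.
        SELECT p.prop_key, p.prop_value
        FROM object_properties p
        JOIN (SELECT prop_key, MAX(changed_at) AS latest
              FROM object_properties
              WHERE object_id = 42
              GROUP BY prop_key) m
          ON p.prop_key = m.prop_key AND p.changed_at = m.latest
        WHERE p.object_id = 42;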

    Read the article

  • What is the benefit of using int instead of bigint in this case?

    - by Yeti
    (MYSQL n00b) I have 3 tables: id = int(10), photo_num = bigint(20). PHOTO records are limited to 3 million.

    PHOTO:
    +-------+-----------------+
    | id    | photo_num       |
    +-------+-----------------+
    | 1     | 123456789123    |
    | 2     | 987654321987    |
    | 3     | 5432167894321   |
    +-------+-----------------+

    COLOR:
    +-------+-----------------+---------+
    | id    | photo_num       | color   |
    +-------+-----------------+---------+
    | 1     | 123456789123    | red     |
    | 2     | 987654321987    | blue    |
    | 3     | 5432167894321   | green   |
    +-------+-----------------+---------+

    SIZE:
    +-------+-----------------+---------+
    | id    | photo_num       | size    |
    +-------+-----------------+---------+
    | 1     | 123456789123    | large   |
    | 2     | 987654321987    | small   |
    | 3     | 5432167894321   | medium  |
    +-------+-----------------+---------+

    Both COLOR and SIZE tables will have several million records.

    Q1: Is it better to change photo_num in COLOR and SIZE to int(10) and point it to PHOTO's id? Right now I use these (PHOTO is nowhere in the picture):

        SELECT * FROM COLOR WHERE photo_num='xxx';
        SELECT * FROM SIZE WHERE photo_num='xxx';

    Q2: How will the SELECT query look if PHOTO's id was used in COLOR and SIZE?
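    For Q2, if COLOR and SIZE carried PHOTO's int id (say, as photo_id) instead of the bigint photo_num, the lookups would presumably become joins along these lines:

        -- Hypothetical normalized lookup: resolve photo_num once in PHOTO,
        -- then join on the narrower int key.
        SELECT c.*
        FROM COLOR c
        JOIN PHOTO p ON c.photo_id = p.id
        WHERE p.photo_num = 123456789123;

        SELECT s.*
        FROM SIZE s
        JOIN PHOTO p ON s.photo_id = p.id
        WHERE p.photo_num = 123456789123;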

    Read the article

  • Failed to convert parameter value from a Guid to a String

    - by user320460
    Hello, I am at the end of my knowledge and googled for the answer too, but no luck :/ A week ago everything worked well. I did a revert on the repository, recreated the table adapter, etc... nothing helped. When I try to save in my application I get a System.InvalidCastException at this point:

    PersonListDataSet.cs:

        partial class P_GroupTableAdapter
        {
            public int Update(PersonListDataSet.P_GroupDataTable dataTable, string userId)
            {
                this.Adapter.InsertCommand.Parameters["@userId"].Value = userId;
                this.Adapter.DeleteCommand.Parameters["@userId"].Value = userId;
                this.Adapter.UpdateCommand.Parameters["@userId"].Value = userId;
                return this.Update(dataTable); // <-- exception occurs here
            }
        }

    Everything is stuck here because a Guid - and I checked the data table preview with the magnifier tool, it's really a true Guid in the column of the data table - cannot be converted to a string??? How can that happen?

    Read the article

  • ColdFusion static key/value list?

    - by richardtallent
    I have a database table that is a dictionary of defined terms -- key, value. I want to load the dictionary into the application scope from the database, and keep it there for performance (it doesn't change). I gather this is probably some sort of "struct," but I'm extremely new to ColdFusion (helping out another team). Then, I'd like to do some simple string replacement on some strings being output to the browser, looping through the defined terms and replacing the terms with some HTML to define them (a hover or a link, details to be worked out later, not important). Can anyone point me in the right direction on:
    1. How to define the structure (if that is what I need for a key/value pair list)
    2. How to query at application start-up and reuse the list properly
    3. The best way to do the string replacement
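    A hedged CFML sketch of points 1 and 2, loading the dictionary once in Application.cfc (the datasource, table, and column names are made up):

        <!--- Hypothetical: cache the dictionary in the application scope --->
        <cffunction name="onApplicationStart">
            <cfquery name="qTerms" datasource="myDsn">
                SELECT term, definition FROM dictionary
            </cfquery>
            <cfset application.terms = structNew()>
            <cfloop query="qTerms">
                <cfset application.terms[qTerms.term] = qTerms.definition>
            </cfloop>
        </cffunction>

    For point 3, looping over that struct and calling replaceNoCase(output, key, '<span title="#application.terms[key]#">#key#</span>', 'all') for each term is one simple, if brute-force, option.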

    Read the article

  • How can two threads access a common array of buffers with minimal blocking? (C#)

    - by Jelly Amma
    Hello, I'm working on an image processing application where I have two threads on top of my main thread:
    1 - CameraThread, which captures images from the webcam and writes them into a buffer
    2 - ImageProcessingThread, which takes the latest image from that buffer for filtering
    The reason why this is multithreaded is that speed is critical: I need CameraThread to keep grabbing pictures and making the latest capture ready to pick up by ImageProcessingThread while it's still processing the previous image. My problem is about finding a fast and thread-safe way to access that common buffer, and I've figured that, ideally, it should be a triple buffer (image[3]) so that if ImageProcessingThread is slow, then CameraThread can keep on writing to the two other images, and vice versa. What sort of locking mechanism would be the most appropriate for this to be thread-safe? I looked at the lock statement, but it seems like it would make a thread block, waiting for another one to finish, and that would be against the point of triple buffering. Thanks in advance for any idea or advice. J.
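    One shape that avoids blocking, sketched under the assumption that frames are reference types, is to exchange buffer references atomically instead of locking:

        // Sketch: the camera publishes into 'latest' atomically; the
        // processor trades its finished buffer for whatever is newest.
        using System.Threading;

        class FrameExchange
        {
            private byte[] _latest;   // most recently captured frame

            // Camera thread: hand over a filled buffer, get a free one back.
            public byte[] Publish(byte[] filled)
            {
                return Interlocked.Exchange(ref _latest, filled);
            }

            // Processing thread: trade the buffer just processed for the newest.
            // Note: may return a frame already seen (or null at startup), so a
            // sequence number or null check is needed in practice.
            public byte[] TakeLatest(byte[] recycled)
            {
                return Interlocked.Exchange(ref _latest, recycled);
            }
        }

    With three buffers rotating through the two exchanges, neither thread ever waits on the other, which is the point of the triple buffer.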

    Read the article

  • Microsoft Access to SQL Server - synchronization

    - by David Pfeffer
    I have a client that uses a point-of-sale solution involving an Access database for its back-end storage. I am trying to provide this client with a service that involves, for SLA reasons, the need to copy parts of this Access database into tables in my own database server which runs SQL Server 2008. I need to do this on a periodic basis, probably about 5 times a day. I do have VPN connectivity to the client. Is there an easy programmatic way to do this, or an available tool? I don't want to handcraft what I assume is a relatively common task.
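    One low-ceremony possibility, assuming the .mdb file is reachable from the SQL Server box over the VPN (e.g. via a file share) and that ad hoc distributed queries are enabled, is a scheduled OPENROWSET pull; the table and column names below are invented:

        -- Hypothetical: pull a table out of the Access file into staging.
        INSERT INTO dbo.pos_sales_staging (sale_id, sold_at, amount)
        SELECT sale_id, sold_at, amount
        FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
                        '\\clientbox\share\pos.mdb';'Admin';'',
                        'SELECT sale_id, sold_at, amount FROM Sales');

    Wrapped in a SQL Agent job running 5 times a day, that would cover the schedule without hand-rolled code; SSIS is the heavier-weight alternative for the same task.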

    Read the article

  • What are the uses of svn copy?

    - by nav.jdwdw
    Example:

        $ svn copy foo.txt bar.txt
        A         bar.txt

    When would you use this technique, and why? Will this command (taken from svn's "red book") create a copy of <foo.txt> while preserving its history, to be shared with <bar.txt>? If I'm changing <bar.txt>, what will happen to <foo.txt>? What are the equivalents to this in other modern systems (ClearCase, AccuRev, Perforce)? Clarification: Let me emphasize the point I'm searching for: is this a kind of branching out on a file level? What happens if you use it in the same branch, i.e. create a copy of a file and then start changing that new file, all in the same branch? I understand that it is also used for tagging, but what interests me is what to expect when performing <svn copy> on the file level.
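    For reference, the same command is how Subversion branches and tags: copies are cheap and record ancestry, so bar.txt's log runs back through foo.txt, while later changes to either file leave the other untouched. A sketch:

        # Cheap copy: records ancestry, shares history up to the copy point
        svn copy foo.txt bar.txt
        svn commit -m "fork bar.txt from foo.txt"

        # The identical mechanism at directory level is a branch
        svn copy ^/trunk ^/branches/my-feature -m "create feature branch"

        # bar.txt's log follows history back through foo.txt
        svn log bar.txt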

    Read the article

  • Where should I store user config data? Specifically the path to the data file?

    - by jamone
    I have an app using a SQLite db, and I need the ability for the user to move the data file and point the app to where it moved to. I used the Entity Framework to create the model, and by default it puts the connection string in the App.config file. From what I've read, if I make changes to the connection string there, they won't take effect until the app is restarted. That seems a bit clunky for my use. I see how I can init my model and pass in a custom string, but I'm unsure what the best practice is on where to store basic user preferences such as this. Ini, Registry, somewhere else? I don't want the user to have to "Open" the file each time, just when it relocates; the app should then try to auto-open it from then on.
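    For a desktop .NET app, one conventional answer is the built-in user-scoped settings, which persist per user under AppData; this sketch assumes a DbPath setting has been defined in the project's Settings designer:

        // Hypothetical user-scoped setting "DbPath" from Settings.settings.
        string dbPath = Properties.Settings.Default.DbPath;   // read at startup

        // When the user points the app at the relocated file, persist it:
        Properties.Settings.Default.DbPath = newPath;
        Properties.Settings.Default.Save();   // written to user.config

    The connection string for the model can then be built from DbPath at runtime instead of read from App.config.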

    Read the article

  • Learning about version control systems, Git, SVN

    - by anijhaw
    I am a basic SVN user now trying to learn Git for a new position. I am trying the usual: reading docs and watching videos. However, after doing all that I still feel that there is a lot that I do not know. I was wondering if there is a place like Project Euler, but for version control, that provides a series of exercises you can do just to increase your confidence and test your knowledge about a version control system. Something that's generic enough and gets you up to speed with how to do basic things. This could also serve as a comparison point of sorts between multiple VCSs, showing which things are easy in which VCS. If there is nothing, I was planning to document my journey in learning Git and then create an exercise of this sort.

    Read the article

  • Changing ylim (axis limits) drops data falling outside range. How can this be prevented?

    - by Alex Holcombe
    df <- data.frame(age=c(10,10,20,20,25,25,25), veg=c(0,1,0,1,1,0,1))
    g=ggplot(data=df, aes(x=age, y=veg))
    g=g+stat_summary(fun.y=mean, geom="point")

    Points reflect the mean of veg at each age, which is what I expected and want to preserve after changing the axis limits with the command below.

    g=g+ylim(0.2,1)

    Changing the axis limits with the above command unfortunately causes the veg==0 subset to be dropped from the data, yielding "Warning message: Removed 4 rows containing missing values (stat_summary)". This is bad because now the plot (the stat_summary mean) omits the veg==0 points. How can this be prevented? I simply want to avoid showing the empty part of the plot - the ordinate from 0 to 0.2 - but not drop the associated data from the stat_summary calculation.
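    For what it's worth, this is the documented split in ggplot2 between setting scale limits (which filter rows before stats are computed) and zooming the coordinate system (which does not); a sketch of the latter:

        # Zoom without dropping rows: stat_summary still sees veg == 0
        g=ggplot(data=df, aes(x=age, y=veg))
        g=g+stat_summary(fun.y=mean, geom="point")
        g=g+coord_cartesian(ylim=c(0.2, 1))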

    Read the article

  • MySQL date query only returns one year, when multiple exist

    - by Bowman
    I'm a part-time designer/developer with a part-time photography business. I've got a database of photos with various bits of metadata attached. I want to query the database and return a list of the years that photos were taken, and the quantity of photos taken in each year. In short, I want a list that looks like this:

    2010 (35 photos)
    2009 (67 photos)
    2008 (48 photos)

    Here's the query I'm using:

        SELECT YEAR(date) AS year, COUNT(filename) AS quantity
        FROM photos
        WHERE visible='1'
        GROUP BY 'year'
        ORDER BY 'year' DESC

    Instead of churning out all the possible years (the database includes photos from 2008-2010), this is the sole result:

    2010 (35 photos)

    I've tried a lot of different syntax, but at this point I'm giving in and asking for help!
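    The quoted alias looks like the likely culprit: in MySQL, 'year' in single quotes is a string literal, so GROUP BY 'year' collapses every row into a single group (and ORDER BY 'year' sorts by a constant). Unquoted, or backquoted, the alias behaves as intended:

        SELECT YEAR(date) AS year, COUNT(filename) AS quantity
        FROM photos
        WHERE visible='1'
        GROUP BY `year`
        ORDER BY `year` DESC;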

    Read the article

  • SQLite transaction fills a table BEFORE the transaction is committed

    - by user1500403
    Hello, I have code that creates a DataTable (in memory) from a SELECT SQL statement. However, I realised that this DataTable is being filled during the procedure, rather than as a result of the transaction commit statement. It does the job, but it's slow. What am I doing wrong?

        Inalready.Clear() ' clears a dictionary
        Using connection As New SQLite.SQLiteConnection(conectionString)
            connection.Open()
            Dim sqliteTran As SQLite.SQLiteTransaction = connection.BeginTransaction()
            Try
                oMainQueryR = "SELECT * FROM detailstable Where name= :name AND Breed= :Breed"
                Dim cmdSQLite As SQLite.SQLiteCommand = connection.CreateCommand()
                Dim oAdapter As New SQLite.SQLiteDataAdapter(cmdSQLite)
                With cmdSQLite
                    .CommandType = CommandType.Text
                    .CommandText = oMainQueryR
                    .Parameters.Add(":name", SqlDbType.VarChar)
                    .Parameters.Add(":Breed", SqlDbType.VarChar)
                End With
                Dim c As Long = 0
                For Each row As DataRow In list.Rows ' this is the list with 500 names
                    If Inalready.ContainsKey(row.Item("name")) Then
                    Else
                        c = c + 1
                        Form1.TextBox1.Text = " Fill .... " & c
                        Application.DoEvents()
                        Inalready.Add(row.Item("name"), row.Item("Breed"))
                        cmdSQLite.Parameters(":name").Value = row.Item("name")
                        cmdSQLite.Parameters(":Breed").Value = row.Item("Breed")
                        oAdapter.Fill(newdetailstable)
                    End If
                Next
                oAdapter.FillSchema(newdetailstable, SchemaType.Source)
                Dim z = newdetailstable.Rows.Count
                ' At this point newdetailstable is already filled up,
                ' and I haven't even committed the transaction
                ' sqliteTran.Commit()
            Catch ex As Exception
            End Try
        End Using

    Read the article

  • Can a binary tree or tree always be represented in a database as one self-referencing table?

    - by Jian Lin
    I hadn't registered this rule before, but it seems that a binary tree, or any tree (where each node can have many children but children cannot point back to any parent), can be represented as one table in a database, with each row having an ID for itself and a parentID that points back to the parent node. That is in fact the classical Employee-Manager diagram: one boss can have many people under him... and each person under him can have n people under him, etc. This is a tree structure and is presented in database books as a common example of a single Employee table.
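    In SQL, the classic adjacency-list form of that single table looks roughly like this:

        -- Each row points at its parent; the root has a NULL parent_id.
        CREATE TABLE employee (
          id        INTEGER PRIMARY KEY,
          name      VARCHAR(100) NOT NULL,
          parent_id INTEGER NULL REFERENCES employee(id)
        );

        -- Direct reports of employee 1:
        SELECT * FROM employee WHERE parent_id = 1;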

    Read the article

  • How can multiple trailing slashes be removed from a URL in Ruby?

    - by splintercell
    Hello, what I'm trying to achieve here is: let's say we have two example URLs: url1 = "http://emy.dod.com/kaskaa/dkaiad/amaa//////////" and url2 = "http://www.example.com/". How can I extract the stripped-down URLs? url1 : "http://emy.dod.com/kaskaa/dkaiad/amaa" and url2 : "http://www.example.com"? URI.parse in Ruby sanitizes certain types of malformed URL but is ineffective in this case. If we use a regex, then /^(.*)\/$/ removes a single slash (/) from url1 and is ineffective for url2. Is anybody aware of how to handle this type of URL parsing? The point here is that I don't want my system to treat "http://www.example.com/" and "http://www.example.com" as two different URLs. And the same goes for "http://emy.dod.com/kaskaa/dkaiad/amaa////" and "http://emy.dod.com/kaskaa/dkaiad/amaa/". cheers, -dg
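    For the narrow strip-the-trailing-slashes part, a one-line Ruby sketch:

        # Remove any run of trailing slashes, anchored to the end of the string.
        def strip_trailing_slashes(url)
          url.sub(%r{/+\z}, '')
        end

        strip_trailing_slashes("http://emy.dod.com/kaskaa/dkaiad/amaa//////////")
        # => "http://emy.dod.com/kaskaa/dkaiad/amaa"
        strip_trailing_slashes("http://www.example.com/")
        # => "http://www.example.com"

    (A bare scheme like "http://" would be mangled by this, so guard for it if such input is possible.)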

    Read the article

  • How to connect 2 virtual hosts running on the same machine?

    - by Gabrielle
    I have 2 virtual hosts running on my Windows XP laptop. One is Ubuntu running inside VMware Player. The other is MS Virtual PC (so I can test with IE6). The Ubuntu virtual host is running my web application with Apache. I can point my browser on my laptop at the Ubuntu IP and view my web app. I read this post http://stackoverflow.com/questions/197792/how-to-connect-to-host-machine-from-within-virtual-pc-image and was able to get my Virtual PC to ping my physical machine using the loopback adapter. But I'm stuck on getting my Virtual PC to see my web application running in the Ubuntu VMware Player host. I appreciate any suggestions.

    Read the article

  • Maintaining sort order of database table rows

    - by Lox
    Say I have a database table containing information about a news article in each row. The table has an integer "sort" column to dictate the order in which the articles are presented on a web site. How do I best implement and maintain this sort order? The problem I want to avoid is having the articles numbered 1,2,3,4,..,100 and, when article number 50 suddenly becomes interesting, having its sort number set to 1 so that all articles between them must have their sort number increased by one. Sure, setting initial sort numbers to 100, 200, 300, 400, etc. leaves some space for moving around, but at some point it will break. Is there a correct way to do this, maybe a completely different approach? Added-1: All article titles are shown in a list linking to the contents, so yes, all sorted items are shown at once. Added-2: An item is not necessarily moved to the top of the list; any item can be placed anywhere in the ordered list.
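    For what it's worth, the gap-numbering idea (100, 200, 300, ...) can be kept alive by renumbering whenever the gaps run out; a hedged MySQL-style sketch, with the table name assumed:

        -- Move one article between the rows at sort 100 and 200:
        UPDATE articles SET sort = 150 WHERE id = 50;

        -- When no gap is left, renumber everything in one pass:
        SET @n := 0;
        UPDATE articles SET sort = (@n := @n + 100) ORDER BY sort;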

    Read the article

  • Possible to map a new file extension to an existing handler in ASP.NET?

    - by Dave
    I have a scenario where my application is going to be publishing services that are consumed by both PC's and mobile devices, and I have a HTTPModule that I want to only perform work on only the mobile requests. So I thought the best way of doing this was to point the mobile requests to a different file extension and have the HTTPModule decide to process only if the request targets this new extension. I don't need a custom HTTPHandler for the new extension; I want to program the services like a normal .ASMX service, just with a different extension. First, can I do this? If so, how do I do it so that requests to my new extension are handled just like .ASMX requests? Second, is this the right approach? Am I going about separating and managing the mobile vs. PC requests the wrong way? Thanks, Dave
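    In IIS 7 integrated mode this plausibly reduces to a handler mapping that routes the new extension to the same factory that serves .asmx; a sketch (the extension itself is made up):

        <!-- web.config: treat *.mobisvc requests like .asmx web services -->
        <system.webServer>
          <handlers>
            <add name="MobileAsmx" path="*.mobisvc" verb="*"
                 type="System.Web.Services.Protocols.WebServiceHandlerFactory"
                 preCondition="integratedMode" />
          </handlers>
        </system.webServer>

    The HTTPModule can then branch on the request path's extension and ignore .asmx traffic.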

    Read the article

  • How to define supported BlackBerry OS versions and models for application?

    - by Lyubomyr Dutko
    After a company wins a project, it is usual to mention in the contract which devices and OS versions are supported. With BlackBerry, however, this sometimes turns out to be tricky: you can have the same device model but two or more different OS versions (or different package versions within the same OS), and in this situation the application may need to be updated. So the main question here is: what should be specified in the contract? Could you please share some of your experience resolving such problems? A good example is a video playback issue on the Storm: the issue exists on 5.0.0.XXX (network provider A) and doesn't exist on 5.0.0.YYY (network provider B). Or it could be the following: 5.0.0.XXX1 (network provider A) - issue exists; 5.0.0.XXX2 (network provider A) - issue doesn't exist. The point here is to define the boundaries of the development company's responsibility.

    Read the article

  • Where do current (not HTML5) browsers stand on clipboard support for non-text data?

    - by John
    I don't really know where the line is between the browser itself and HTML/JS here. But let's say I want to write a web-app where I can copy a chunk of data from MSPaint (select, CTRL+c) and paste it into a web-page in some way... ultimately the point is the data goes to the server without me having to save it as a file first. Where are the problems here - browsers or client-side technologies? For instance if JS can't do it, could Flex/Silverlight? Advice on future technology welcome, but doesn't really answer the question - where are we right now with IE/FF/Chrome and JS/Flex/SL?

    Read the article
