Search Results

Search found 59643 results on 2386 pages for 'data migration'.


  • XNA hlsl tex2D() only reads 3 channels from normal maps and specular maps

    - by cubrman
    Our engine uses deferred rendering and, at the main draw phase, gathers plenty of data from the objects it draws. In order to save on tex2D calls, we packed our objects' specular maps with all sorts of data, so three out of four channels are already taken. To be clear: I am talking about the assets that come with the models and are stored in their material's Specular Level channel, not about the RenderTarget. Now I need another piece of information stored in the alpha channel, but I cannot make the shader read it properly! No matter what I write into alpha, it ends up being 1 (255). I tried saving the textures in PNG/TGA formats and turning off pre-computed alpha in the model's properties. Out of every texture available to me (we use a Diffuse map, a Normal map, and a Specular map), I was only able to read alpha successfully from the Diffuse map. Here is how I add specular and normal maps to my model's material in the content processor:

        if (geometry.Material.Textures.ContainsKey(normalMapKey))
        {
            ExternalReference<TextureContent> texRef = geometry.Material.Textures[normalMapKey];
            geometry.Material.Textures.Remove("NormalMap");
            geometry.Material.Textures.Add("NormalMap", texRef);
        }
        ...
        foreach (KeyValuePair<String, ExternalReference<TextureContent>> texture in material.Textures)
        {
            if ((texture.Key == "Texture") || (texture.Key == "NormalMap") || (texture.Key == "SpecularMap"))
                mat.Textures.Add(texture.Key, texture.Value);
        }

    In the shader I obviously use:

        float4 data = tex2D(specularMapSampler, TexCoords);

    data.a is always 1 in my case. Could you suggest a reason?


  • Restore Failure from Ubuntu One

    - by Qawi Robinson
    Had to do a reinstall of Ubuntu 12 after 13.10 failed. Lost all my data, but I remembered that I had data backed up to Ubuntu One. It recognized my previous backups, but I got errors and a restore failure when I went to restore the data. This is what I got. Can anyone make heads or tails of this? I still don't have my data. Thanks.

        Traceback (most recent call last):
          File "/usr/bin/duplicity", line 1412, in <module>
            with_tempdir(main)
          File "/usr/bin/duplicity", line 1405, in with_tempdir
            fn()
          File "/usr/bin/duplicity", line 1339, in main
            restore(col_stats)
          File "/usr/bin/duplicity", line 630, in restore
            restore_get_patched_rop_iter(col_stats)):
          File "/usr/lib/python2.7/dist-packages/duplicity/patchdir.py", line 522, in Write_ROPaths
            for ropath in rop_iter:
          File "/usr/lib/python2.7/dist-packages/duplicity/patchdir.py", line 495, in integrate_patch_iters
            final_ropath = patch_seq2ropath( normalize_ps( patch_seq ) )
          File "/usr/lib/python2.7/dist-packages/duplicity/patchdir.py", line 475, in patch_seq2ropath
            misc.copyfileobj( current_file, tempfp )
          File "/usr/lib/python2.7/dist-packages/duplicity/misc.py", line 166, in copyfileobj
            buf = infp.read(blocksize)
          File "/usr/lib/python2.7/dist-packages/duplicity/librsync.py", line 80, in read
            self._add_to_outbuf_once()
          File "/usr/lib/python2.7/dist-packages/duplicity/librsync.py", line 94, in _add_to_outbuf_once
            raise librsyncError(str(e))
        librsyncError: librsync error 103 while in patch cycle


  • How do I architect 2 plugins that share a common component?

    - by James
    I have an object that takes in data and spits out a transformed output, called IBaseItem. I also have two parsers, IParserA and IParserB. These parsers transform external data (in formats dataA and dataB respectively) into a format usable by my IBaseItem (baseData). I want to create two systems, one that works with dataA and one that works with dataB. They will allow the user to enter data, match it to the right plugins/implementations, and transform the data to outData. I want to write these traffic cops myself but have other people provide the parser and base-item logic, and as such I am implementing these items as plugins (hence the use of interfaces). Other programmers can choose to implement one or both parsers.

    Q: How should I structure the way base items and parsers are associated, stored, and loaded into each of my programs?

    Class Relations:

    What I've tried: Initially I thought there should be a different dll for each of my two traffic cops, each containing a parser and a base item. However, the duplication of base-item logic doesn't seem right (especially if the base-item logic changes). I then thought the base items could all have their own dll, and then somehow associate parsers and base items (GUIDs?), but I don't know if implementing the id/association overhead is adding too much complexity.
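    One way to keep a single copy of the base-item logic is a small registry that maps a format identifier to its parser, with the traffic cop composing the two plugin interfaces. Here is a minimal sketch of that shape (in Java rather than C#, with hypothetical names such as Parser, BaseItem, and Dispatcher):

        import java.util.HashMap;
        import java.util.Map;

        // Hypothetical plugin contracts: each parser declares which external
        // format it understands and converts it to the shared base format.
        interface Parser {
            String formatId();                 // e.g. "dataA" or "dataB"
            String toBaseData(String rawData); // external format -> baseData
        }

        interface BaseItem {
            String transform(String baseData); // baseData -> outData
        }

        // The "traffic cop": pairs registered parsers with one shared BaseItem,
        // so the base-item logic lives in exactly one place.
        class Dispatcher {
            private final Map<String, Parser> parsers = new HashMap<>();
            private final BaseItem baseItem;

            Dispatcher(BaseItem baseItem) { this.baseItem = baseItem; }

            void register(Parser p) { parsers.put(p.formatId(), p); }

            String process(String formatId, String rawData) {
                Parser p = parsers.get(formatId);
                if (p == null) throw new IllegalArgumentException("no parser for " + formatId);
                return baseItem.transform(p.toBaseData(rawData));
            }
        }

    With this arrangement a plugin assembly only ships parser implementations, and the association is the formatId string rather than a separate GUID table.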


  • Defining formula through user interface in user form

    - by BriskLabs Pakistan
    I am a student developing a simple assignment, a Windows Forms application, in Visual Studio 2010. The application is supposed to construct formulas as per user requirements.

    The process: it has to pick data from columns of a Microsoft Access database, and the user should be able to pick the data by column name, like we do in a drop-down menu, and create reusable formulas from it (configure a formula once and change it again later). The following are column titles from the database that can be picked, for example:

        Col 1: Marks in Maths
        Col 2: Total Marks in Maths
        Col 3: Marks in science
        Col 4: Total marks in science

    Finally, we should be able to construct any formula in the UI, like:

        (Col 1 + Col 3) / (Col 2 + Col 4) = Formula 1

    Once this formula is set, it is saved and a name is assigned to it by the user. He/she can then use the formula, and results shall appear in a window below. That is, he would be able to calculate his desired figures (the formula) by only manipulating the underlying data on the UI layer: choose the data for a period, apply the formula, and get the answer.

    Problem: it looks like I have to create an app where rules are set through the UI. This means no stored procedures are required in SQL. Please suggest the right approach.
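    A saved formula can be modeled as a function over one row of column values, keyed by column name. Below is a minimal sketch of that idea (in Java rather than C#; the column keys and values are hypothetical, and a real implementation would parse the user's formula string into such a function instead of hard-coding it):

        import java.util.Map;
        import java.util.function.ToDoubleFunction;

        public class FormulaExample {
            public static void main(String[] args) {
                // "Formula 1" = (Col 1 + Col 3) / (Col 2 + Col 4), built once and reusable
                ToDoubleFunction<Map<String, Double>> formula1 =
                    row -> (row.get("Marks in Maths") + row.get("Marks in science"))
                         / (row.get("Total Marks in Maths") + row.get("Total marks in science"));

                // one row of data picked from the Access columns
                Map<String, Double> row = Map.of(
                    "Marks in Maths", 40.0, "Total Marks in Maths", 50.0,
                    "Marks in science", 35.0, "Total marks in science", 50.0);

                System.out.println(formula1.applyAsDouble(row)); // prints 0.75
            }
        }

    Storing the formula as text (e.g. "(Col 1 + Col 3) / (Col 2 + Col 4)") alongside its user-assigned name, and evaluating it against rows at runtime, keeps all the rule logic in the application layer, which matches the "no stored procedures" constraint.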


  • Monitoring Baseline

    - by Grant Fritchey
    Knowing what's happening on your servers is important; that's monitoring. Knowing what happened on your server is establishing a baseline. You need to do both. I really enjoyed this blog post by Ted Krueger (blog|twitter). It's not enough to know what happened in the last hour or yesterday; you need to compare today to last week, especially if you released software this weekend. You need to compare today to 30 days ago in order to begin to establish future projections. How your data has changed over the last 30 days is a great indicator of how it's going to change over the next 30. No, it's not perfect, but predicting the future is not exactly a science; just ask your local weatherman. Red Gate's SQL Monitor can show you the last week, the last 30 days, the last year, or all the data you've collected (if you choose to keep a year's worth of data or more, please have PLENTY of storage standing by). You have a lot of choice and control here over how much data you store. Here's the configuration window showing how you can set this up. This is for version 2.3 of SQL Monitor, so if you're running an older version, you might want to update. The key point is that a baseline simply represents a moment in time on your server. The ability to compare now to then is what you're looking for in order to really have a useful baseline, as Ted lays out so well in his post.


  • Reconstruct a file from a TCP stream

    - by Abhishek Chanda
    I have a client and a server, and a third box which sees all packets from the server to the client (but not the other way around). When the client requests a file from the server (over HTTP), the third box sees the response, and I am trying to reconstruct the file there. I am using libpcap to capture the TCP segments. Here is what I did:

        1. Listen for packets on an interface
        2. Group all packets which have the same ACK number
        3. Sort each group by SEQ number
        4. Extract the data from each packet, combine it, and write it to disk

    The problem is that the file thus generated is not exactly the same as the original file. Does everything sound correct here? Some more details: I am using C++, and the packet data is stored as std::vector<char>. I did change the byte order while reading the ACK number and SEQ number from the packet, using ntohl. I am not sure if I need to change the byte order for the data as well. I tried reversing the data from each packet before combining, and even that did not work. Is there something I am missing?
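    Grouping by ACK number is fragile: retransmissions and overlapping segments duplicate bytes, and the payload order is defined by the sequence numbers alone. A common approach is to key every captured payload by its sequence number and emit bytes in sequence order, skipping whatever an earlier segment already delivered. A minimal sketch of that reassembly step (in Java rather than C++, one direction of one connection, ignoring sequence-number wraparound):

        import java.io.ByteArrayOutputStream;
        import java.util.Map;
        import java.util.TreeMap;

        class StreamReassembler {
            // segments sorted by sequence number; a retransmit keeps the longer payload
            private final TreeMap<Long, byte[]> segments = new TreeMap<>();

            void addSegment(long seq, byte[] payload) {
                segments.merge(seq, payload, (a, b) -> a.length >= b.length ? a : b);
            }

            byte[] reassemble() {
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                long next = segments.isEmpty() ? 0 : segments.firstKey();
                for (Map.Entry<Long, byte[]> e : segments.entrySet()) {
                    long seq = e.getKey();
                    byte[] data = e.getValue();
                    int skip = (int) Math.max(0, next - seq); // bytes already written by an overlap
                    if (skip < data.length) {
                        out.write(data, skip, data.length - skip);
                        next = seq + data.length;
                    }
                }
                return out.toByteArray();
            }
        }

    Note also that the captured bytes are the HTTP response, not the bare file: the response headers, and possibly chunked transfer encoding, have to be stripped before the result will match the original. The payload bytes themselves need no byte-order conversion; ntohl applies only to multi-byte header fields.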


  • Towards Ultra-Reusability for ADF - Adaptive Bindings

    - by Duncan Mills
    The task flow mechanism embodies one of the key value propositions of the ADF Framework, its primary contribution being the componentization of your applications and, implicitly, the introduction of a re-use culture, particularly in large applications. However, what if we could do more? How could we make task flows even more re-usable than they are today? Well, one great technique is to take advantage of a feature that is already present in the framework, a feature which I will call, for want of a better name, "adaptive bindings".

    What's an adaptive binding? Well, consider a simple use case. I have several screens within my application which display tabular data and are all essentially identical; the only difference is that they happen to be based on different data collections (View Objects, bean collections, whatever) and have a different set of columns. Apart from that they are identical: same toolbar, same key functions, and so on. So wouldn't it be nice if I could have a single parametrized task flow to represent that type of UI and reuse it? Hold on, you say, great idea; however, to do that we'd run into problems. Each different collection that I want to display needs different entries in the pageDef file, and:

        - I want to continue to use the ADF bindings mechanism rather than dropping back to passing the whole collection into the task flow
        - If I do use bindings, there is no way I want to have to declare iterators and tree bindings for every possible collection that I might want the flow to handle

    Ah, joy! I reply, no need to panic, you can just use adaptive bindings.

    Defining an Adaptive Binding

    It's easiest to explain with a simple before-and-after use case. Here's a basic pageDef definition for our familiar Departments table:

        <executables>
          <iterator Binds="DepartmentsView1" DataControl="HRAppModuleDataControl" RangeSize="25"
                    id="DepartmentsView1Iterator"/>
        </executables>
        <bindings>
          <tree IterBinding="DepartmentsView1Iterator" id="DepartmentsView1">
            <nodeDefinition DefName="oracle.demo.model.vo.DepartmentsView" Name="DepartmentsView10">
              <AttrNames>
                <Item Value="DepartmentId"/>
                <Item Value="DepartmentName"/>
                <Item Value="ManagerId"/>
                <Item Value="LocationId"/>
              </AttrNames>
            </nodeDefinition>
          </tree>
        </bindings>

    Here's the adaptive version:

        <executables>
          <iterator Binds="${pageFlowScope.voName}" DataControl="HRAppModuleDataControl" RangeSize="25"
                    id="TableSourceIterator"/>
        </executables>
        <bindings>
          <tree IterBinding="TableSourceIterator" id="GenericView">
            <nodeDefinition Name="GenericViewNode"/>
          </tree>
        </bindings>

    You'll notice three changes here:

        1. Most importantly, the hard-coded View Object name that formerly populated the iterator's Binds attribute is gone, replaced by an expression (${pageFlowScope.voName}). This, of course, is key: we can pass a parameter to the task flow telling it exactly what VO to instantiate to populate this table!
        2. I've changed the IDs of the iterator and the tree binding, simply to reflect that they are now re-usable.
        3. The tree binding itself has been simplified, and the node definition is now empty. What this effectively means is that the #{node} map exposed through the tree binding will expose every attribute of the underlying iterator's collection - neat!

    (Kudos to Eugene Fedorenko at this point, who reminded me that this was even possible in his excellent "deep dive" session at OpenWorld this year.)

    Using the Adaptive Binding in the UI

    Now that we have a parametrized binding, we have to make changes in the UI as well: first of all to reflect the new ID that we've assigned to the binding (of course), but also to change the column list from a fixed, known list to a generic, metadata-driven set:

        <af:table value="#{bindings.GenericView.collectionModel}" rows="#{bindings.GenericView.rangeSize}"
                  fetchSize="#{bindings.GenericView.rangeSize}"
                  emptyText="#{bindings.GenericView.viewable ? 'No data to display.' : 'Access Denied.'}"
                  var="row" rowBandingInterval="0"
                  selectedRowKeys="#{bindings.GenericView.collectionModel.selectedRow}"
                  selectionListener="#{bindings.GenericView.collectionModel.makeCurrent}"
                  rowSelection="single" id="t1">
          <af:forEach items="#{bindings.GenericView.attributeDefs}" var="def">
            <af:column headerText="#{bindings.GenericView.labels[def.name]}" sortable="true"
                       sortProperty="#{def.name}" id="c1">
              <af:outputText value="#{row[def.name]}" id="ot1"/>
            </af:column>
          </af:forEach>
        </af:table>

    Of course you are not constrained to a simple read-only table here. It's a normal tree binding and iterator that you are using behind the scenes, so you can do all the usual things, but you can see the value of using ADF BC as the back-end model, as you have the rich pantheon of UI hints to use to derive things like labels (and validators and converters...).

    One Final Twist

    To finish on a high note, I wanted to point out that you can take this even further and achieve the ultra-reusability I promised. Here's the new version of the pageDef iterator; see if you can notice the subtle change:

        <iterator Binds="${pageFlowScope.voName}" DataControl="${pageFlowScope.dataControlName}" RangeSize="25"
                  id="TableSourceIterator"/>

    Yes, as well as parametrizing the collection (VO) name, we can also parametrize the name of the data control. So your task flow can graduate from being re-usable within an application to being truly generic. If you have some really common patterns within your app, you can wrap them up and reuse them across multiple developments without having to dictate data control names or connection names. This also demonstrates the importance of interacting with data only via the binding layer APIs. If you keep the code in the task flow generic in that way, you can deal with data from multiple types of data controls, not just one flavour. Enjoy!


  • Hibernation to a swap file in 12.04, filefrag output

    - by MrHug10
    I've been trying for some time now to get hibernation working in Ubuntu 12.04 on my Dell XPS 17. I dual-boot Windows 7 and Ubuntu, each having its own partition, plus one shared partition for all my data and documents. As I would like to be able to switch from Ubuntu to Windows without losing all the things I was currently doing in Ubuntu, I would like to be able to use hibernation. In order to achieve this I've followed the information at http://ubuntuforums.org/showthread.php?t=1042946, only instead of creating my swap file on my Linux partition (formatted ext4), I've chosen to create one on my shared partition (formatted NTFS). There is a problem with this, though (at least, that's what I think the problem is), because when I call:

        sudo filefrag -v /media/data/Ubuntu_Swap_Space/6GiB.swap

    I get the following output:

        Filesystem type is: 65735546
        File size of /media/Data/Ubuntu_Swap_Space/6GiB.swap is 6442450944 (1572864 blocks, blocksize 4096)
        Discontinuity: Block 22 is at 25829097 (was 232498)
        /media/Data/Ubuntu_Swap_Space/6GiB.swap: 2 extents found

    So I'm not sure what I need to fill in as an offset to follow the rest of the earlier mentioned information. I've tried both the location of block 22 and the number that is listed after that, but when I then try sudo pm-hibernate, nothing happens, and this shows up in my /var/log/pm-suspend.log:

        s2disk: Could not use the resume device (try swapon -a). Reason: No such device

    Hope someone can help me out with this! If you need more information about anything, please let me know!


  • Conflict resolution for two-way sync

    - by K.Steff
    How do you manage two-way synchronization between a 'main' database server and many 'secondary' servers, in particular conflict resolution, assuming a connection is not always available? For example, I have a mobile app that uses Core Data as the 'database' on iOS, and I'd like to allow users to edit the contents without an Internet connection. At the same time, this information is available on a website the devices will connect to. What do I do if/when the data on the two DB servers is in conflict? (I refer to Core Data as a DB server, though I am aware it is something slightly different.) Are there any general strategies for dealing with this sort of issue? These are the options I can think of:

        1. Always treat the client-side data as higher priority
        2. The same, for the server side
        3. Try to resolve conflicts by marking each field's edit timestamp and taking the latest edit

    Though I'm fairly certain the third option will open the door to some devastating data corruption. I'm aware that the CAP theorem concerns this, but I only want eventual consistency, so it doesn't rule it out completely, right? Related question: Best practice patterns for two-way data synchronization. The second answer to this question says it probably can't be done.
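    For reference, the mechanics of a field-level "latest edit wins" merge (option 3) are only a few lines; the sketch below (in Java, with hypothetical types) shows them. The usual caveats apply: device clocks skew, so per-field revision counters or vector clocks are safer than wall-clock timestamps, and a deterministic tie-break (here, server wins) is needed when timestamps are equal.

        import java.time.Instant;
        import java.util.HashMap;
        import java.util.Map;

        class FieldMerge {
            // one field's value plus the moment it was last edited locally
            record Edit(String value, Instant editedAt) {}

            // merge two copies of a record field by field, keeping the newer edit;
            // on a tie the server copy wins (deterministic on both sides)
            static Map<String, Edit> merge(Map<String, Edit> client, Map<String, Edit> server) {
                Map<String, Edit> merged = new HashMap<>(server);
                client.forEach((field, cli) ->
                    merged.merge(field, cli, (srv, c) ->
                        c.editedAt().isAfter(srv.editedAt()) ? c : srv));
                return merged;
            }
        }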


  • Two Upcoming Server Virtualization Webcasts

    - by Chris Kawalek
    We have a couple of interesting server virtualization webcasts coming up that you might be interested in. Have a look:

    Webcast 1: October 23rd, 9 am PST
    Virtualized Infrastructure Simplified with Oracle VM and NetApp Storage and Data Management Solutions
    Point-and-Click Interface Deploys Virtualized Data Infrastructure in Minutes

    Provisioning and deploying a virtual data infrastructure can be costly, time-consuming, and prone to error. Oracle VM and NetApp joint solutions, however, give you a single point-and-click interface to deploy your virtualized data infrastructure seamlessly in minutes. Join us in this live webcast to learn more from product experts and view a product demo. Register (for free!) here.

    Webcast 2: November 7th, 9 am PST
    Report Shows Oracle VM Up to 10x Faster than VMware vSphere 5 in Time to Deployment

    Time is your IT department's greatest commodity. So when a new report reveals that your IT staff can deploy Oracle Real Application Clusters (Oracle RAC) up to 10 times faster than a traditional install performed with VMware vSphere 5, it's newsworthy. Join us in this live webcast to learn how you can realize your time savings. Register (for free!) here.


  • Android From Local DB (DAO) to Server sync (JSON) - Design issue

    - by Taiko
    I sync data between my local DB and a server, and I'm looking for the cleanest way to model all of this.

    I have a com.something.db package that contains a DataHelper and a couple of DAO classes that represent objects stored in the DB (I didn't write that part):

        com.something.db
        --public DataHelper
        --public Employee
            @DatabaseField   // e.g. "name" will be an actual column name in the DB
            -name
            @DatabaseField
            -salary
            etc... (all in all 50 fields)

    I have a com.something.sync package that contains all the implementation details of how to send data to the server. It boils down to a ConnectionManager that is fed by different classes implementing a 'Request' interface:

        com.something.sync
        --public interface ConnectionManager
        --package ConnectionManagerImpl
        --public interface Request
        --package LoginRequest
        --package GetEmployeesRequest

    My issue is that at some point in the sync process, I have to JSONise and de-JSONise my data (e.g. the Employee class). But I really don't feel like having the same Employee class be responsible for both its JSONisation and its actual representation inside the local database. It doesn't feel right, because I carefully decoupled the rest; I am only stuck on this JSON thing. What should I do? Should I write three Employee classes?

        EmployeeDB
            @DatabaseField   // e.g. "name" will be an actual column name in the DB
            -name
            @DatabaseField
            -salary
            -etc... 50 fields

        EmployeeInterface
            -getName
            -getSalary
            -etc... 50 fields

        EmployeeJSON
            -JSON_KEY_NAME = "name"   // the JSON key happens to match the column name, but that isn't a requirement
            -name
            -JSON_KEY_SALARY = "salary"
            -salary
            -etc... 50 fields

    It feels like a lot of duplication. Is there a common pattern I can use here?
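    A common answer is to keep one plain domain class and push each serialization concern into its own mapper, rather than maintaining three parallel Employee classes. A minimal sketch (two of the ~50 fields; org.json is used here since it ships with Android, but any JSON library works):

        import org.json.JSONException;
        import org.json.JSONObject;

        // the one domain class, with no knowledge of the DB or of JSON
        class Employee {
            final String name;
            final double salary;
            Employee(String name, double salary) { this.name = name; this.salary = salary; }
        }

        // the JSON concern, isolated: key constants and conversion live here only
        class EmployeeJsonMapper {
            static final String KEY_NAME = "name";
            static final String KEY_SALARY = "salary";

            static JSONObject toJson(Employee e) throws JSONException {
                return new JSONObject().put(KEY_NAME, e.name).put(KEY_SALARY, e.salary);
            }

            static Employee fromJson(JSONObject o) throws JSONException {
                return new Employee(o.getString(KEY_NAME), o.getDouble(KEY_SALARY));
            }
        }

    The DB annotations stay on the existing DAO class (or on a similar EmployeeDbMapper), so each concern changes independently; only the field list itself is repeated, not the logic.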


  • SQL Server Database Settings

    - by rbishop
    For those using Data Relationship Management on Oracle DB this does not apply, but for those using Microsoft SQL Server it is highly recommended that you run with snapshot isolation mode. The Data Governance module will not function correctly without this mode enabled. All new Data Relationship Management repositories are created with this mode enabled by default. This mode makes SQL Server (2005+) behave more like Oracle DB, where readers simply see older versions of rows while a write is in progress, instead of readers being blocked by locks while a write takes place. Many common sources of deadlocks are eliminated. For example, if one user starts a 5-minute transaction updating half the rows in a table, then without snapshot isolation everyone else reading the table will be blocked waiting. With snapshot isolation, they will see the rows as they were before the write transaction started. Conversely, if the readers had started first, the writer won't be stuck waiting for them to finish reading; the writes can begin immediately without affecting the current transactions. To make this change, make sure no one is using the target database (e.g., put it into single-user mode), then run these commands:

        ALTER DATABASE [DB] SET ALLOW_SNAPSHOT_ISOLATION ON
        ALTER DATABASE [DB] SET READ_COMMITTED_SNAPSHOT ON

    Please make sure you coordinate with your DBA team to ensure tempdb is appropriately set up to support snapshot isolation mode, as the extra row versions are stored in tempdb until the transactions are committed. Let me take this opportunity to extremely strongly recommend that you use solid-state storage for your databases, with appropriate iSCSI, Fibre Channel, or SAN bandwidth. The performance gains are significant, and there is no excuse for not using 100% solid-state storage in 2013. Actually, unless you need to store petabytes of archival data, there is no excuse for using hard drives in any systems, whether laptops, desktops, application servers, or database servers. The productivity benefits alone are tremendous, not to mention power consumption, heat, etc.


  • Java enum pairs / "subenum" or what exactly?

    - by vemalsar
    I have an RPG-style Item class, and I store the type of the item in an enum (itemType.sword). I want to store a subtype too (itemSubtype.long), but I want to express the relation between the two data types: a sword can be long, short, etc., but a shield can't be long or short, only round, tower, etc. I know this is not valid source code, but it is similar to what I want:

        enum type { sword; }
        enum swordSubtype extends type.sword { short, long }  // not valid code!

    Question: How can I define this connection between the two data types (or more exactly, between the values of the two data types)? What is the most simple and standard way? Options I can see:

        1. Array-like data with all valid (itemType, itemSubtype) enum pairs, or (itemType, itemSubtype[]) so there can be more than one subtype per type; this would be the best. OK, but how can I construct this the simplest way? A special enum with a "subenum" set, a second-level enum, or anything else, if such a thing exists?
        2. A two-dimensional "canBePairs" array with itemType and itemSubtype dimensions and boolean elements: "true" means the itemType (first dimension) and itemSubtype (second dimension) go together, "false" means they do not.
        3. Some other, better idea.

    Thank you very much!
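    One standard way is to invert the relation: each subtype constant carries a reference to its parent type, and the item derives its type from its subtype, so an invalid pairing cannot even be constructed. A minimal sketch:

        // each subtype names the one type it belongs to
        enum ItemType { SWORD, SHIELD }

        enum ItemSubtype {
            SHORT(ItemType.SWORD), LONG(ItemType.SWORD),
            ROUND(ItemType.SHIELD), TOWER(ItemType.SHIELD);

            final ItemType parent;
            ItemSubtype(ItemType parent) { this.parent = parent; }
        }

        class Item {
            final ItemType type;
            final ItemSubtype subtype;

            Item(ItemSubtype subtype) {
                this.subtype = subtype;
                this.type = subtype.parent; // type is implied, so no mismatch is possible
            }
        }

    Asking for "all subtypes of SWORD" is then a simple filter over ItemSubtype.values(), which replaces the two-dimensional boolean array from option 2.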


  • SQL Server 2008 R2 Launch Event - Montreal

    - by guybarrette
    If you’re into SQL Server, you may want to attend the free 2008 R2 launch event that will take place on May 26th, 2010 in Montreal.

    Agenda:

        8:00 - 9:00am: Registration and Breakfast
        9:00 - 9:15am: Welcome and Introductions
        9:15 - 10:00am: Keynote Presentation
        10:00 - 10:15am: Morning break
        10:15 - 11:45am: SQL Server Presentation
        11:45 - 12:45pm: Lunch
        12:45 - 1:45pm: Track Session 1
        1:45 - 2:45pm: Track Session 2
        2:45 - 3:00pm: Afternoon break
        3:00 - 4:00pm: Track Session 3

    Track Descriptions

    DBA Track
        Session 1: Ensure Business Continuity with SQL Server 2008 R2, Windows Server 2008 & Hyper-V Live Migration
        Session 2: Simplify management of your SQL Server data platform with Multi-server Management
        Session 3: Deliver unprecedented access to business-critical data at a lower TCO with SQL Server 2008 R2 Parallel Data Warehouse

    BI Track
        Session 1: Enable Managed Self-service BI with Power Pivot for Excel and SharePoint 2010
        Session 2: Achieve Rapid Reporting with Reporting Services and Report Builder 3.0
        Session 3: Importance of Master Data Management

    Dev - Visual Studio Track
        Session 1: Developing SQL Applications with Visual Studio 2010
        Session 2: Managing Change for SQL Server applications using Team Foundation Server
        Session 3: Targeting SQL Azure using Visual Studio

    Register here


  • Accessing Secure Web Services from ADF Mobile

    - by Shay Shmeltzer
    Most of the enterprise Web services you'll access are going to be secured, meaning they'll require you to pass a user/password in order to get to their data. If you've never created a secured Web service, it's simple in JDeveloper! For the video below, I just right-clicked on a Java class that I had exposed as a Web service, chose "Web Service Properties", and then checked the "oracle/wss_username_token_service_policy" box from the list of options (that's the option supported by ADF Mobile right now). In the demo below we are going to use a "remote" login server that does the authentication of the user/password. The easiest way to "create" a remote login server is to create a "regular" web ADF application, secure it, and deploy it on a server. The secured ADF application can just require ADF authentication with simple HTTP Basic Authentication - basically the next two steps in the Application->Secure->Configure ADF Security menu wizard. OK, so now you have a secured ADF application: deploy it on a server and get the URL for that application. From this point on you'll see the process in the video, which deals with the configuration of your ADF Mobile app. First you'll need to enable security for your ADF Mobile application, so it will prompt users to provide a user/password combination. You'll also need to configure security on specific features, and you can have them use remote login pointing to your regular secured ADF application. Next, define your Web service data control. Right-click on the Web service data control to "define Web Service Security". You'll also need to define the adfCredentialStoreKey property for the Web service data control in the connections.xml file. This should be it. Here is the flow. If you haven't already, you can read more about this in the Mobile developer guide, and Andrejus has a sample for you.


  • Code execution times out occasionally

    - by Athul k Surendran
    I am working on an e-commerce website. There is a case where I need to fetch all the data in the database through a third-party API and send it to an indexing engine. This third-party API has many functions, like getproducts and getproductprice, and each of those functions returns its data in XML format. From there I take charge: I make various API calls, handle the XML data with XSLT, and write to a CSV file. This file is then uploaded to the indexing engine. Right now I have details of 8000 products to feed the engine; almost every time, this process takes about 15 minutes to complete, and sometimes it fails. I can't find a better solution for this. I am thinking about handling the XML data in C# itself and getting rid of XSLT. I think XSLT is far slower than C#. Is that a good idea? Or what else can I do to solve this issue?
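    If the XML documents are large, a streaming parser usually beats repeated XSLT transforms because the document is never fully materialized in memory. The sketch below shows the shape of that approach with a streaming pull parser (in Java's StAX rather than C#; the element names "product", "sku", and "price" are hypothetical stand-ins for the actual API response):

        import java.io.FileReader;
        import java.io.FileWriter;
        import java.io.PrintWriter;
        import javax.xml.stream.XMLInputFactory;
        import javax.xml.stream.XMLStreamConstants;
        import javax.xml.stream.XMLStreamReader;

        public class XmlToCsv {
            public static void main(String[] args) throws Exception {
                XMLInputFactory factory = XMLInputFactory.newInstance();
                try (FileReader in = new FileReader("products.xml");
                     PrintWriter out = new PrintWriter(new FileWriter("products.csv"))) {
                    XMLStreamReader r = factory.createXMLStreamReader(in);
                    String current = null;
                    StringBuilder sku = new StringBuilder(), price = new StringBuilder();
                    while (r.hasNext()) {
                        int event = r.next();
                        if (event == XMLStreamConstants.START_ELEMENT) {
                            current = r.getLocalName();
                        } else if (event == XMLStreamConstants.CHARACTERS) {
                            if ("sku".equals(current)) sku.append(r.getText());
                            if ("price".equals(current)) price.append(r.getText());
                        } else if (event == XMLStreamConstants.END_ELEMENT) {
                            if ("product".equals(r.getLocalName())) {
                                out.println(sku + "," + price); // one CSV row per product
                                sku.setLength(0);
                                price.setLength(0);
                            }
                            current = null;
                        }
                    }
                }
            }
        }

    It is also worth timing the API calls separately from the transform: with 8000 products, per-product round trips to the third party may dominate the 15 minutes, in which case batching requests would help more than replacing XSLT.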


  • How do I keep a 3D model on the screen in OpenGL?

    - by NoobScratcher
    I'm trying to keep a 3D model on the screen by placing my glDrawElements calls inside the draw function, with the declarations at the top of the .cpp file. When I render the model, the model attaches itself to the current vertex buffer object. This is because my whole graphical user interface is in 2D quads, except the window frame. Is there a way to avoid this from happening, or any common causes of this?

    Creating the file object:

        int index = IndexAssigner(1, 1);
        // make a file object and store the list and the index of that list in a C string
        ifstream file (list[index].c_str() );
        // make another string
        //string line;
        points.push_back(Point());
        Point p;
        int face[4];

    Model rendering code:

        int numfloats = 4;
        float* point=reinterpret_cast<float*>(&points[0]);
        int num_bytes=numfloats*sizeof(float);
        cout << "Size Of Point" << sizeof(Point) << endl;
        GLuint vertexbuffer;
        glGenVertexArrays(1, &vao[3]);
        glGenBuffers(1, &vertexbuffer);
        glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
        glBufferData(GL_ARRAY_BUFFER, points.size()*sizeof(points), points.data(), GL_STATIC_DRAW);
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, num_bytes, &points[0]);
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, points.size(), &points[0]);
        glEnableClientState(GL_INDEX_ARRAY);
        glIndexPointer(GL_FLOAT, faces.size(), faces.data());
        glEnableVertexAttribArray(0);
        glDrawElements(GL_QUADS, points.size(), GL_UNSIGNED_INT, points.data());
        glDrawElements(GL_QUADS, faces.size(), GL_UNSIGNED_INT, faces.data());


  • Disaster Recovery Example

    I used to work for a small internet company that sells dental plans online. Our primary focus concerning disaster prevention and recovery was on our corporate website and private intranet site. We had a multiphase disaster recovery plan that included data redundancy, load balancing, and off-site monitoring.

    Data redundancy is a key aspect of our disaster recovery plan. The first phase of this is to replicate our data to multiple database servers and schedule daily backups of the databases, which are stored off site. The next phase is file replication of the data amongst our web servers, which are also backed up daily by our colocation provider. In addition to the files located on the servers, files are also stored locally on development machines, and again backed up using version control software.

    Load balancing is another key aspect of our disaster recovery plan. Load balancing offers many benefits for our system: better performance, load distribution, and increased availability. With our servers behind a load balancer, our system has the ability to accept multiple requests simultaneously because the load is split between multiple servers. Plus, if one server is slow or experiencing a failure, the traffic is diverted amongst the other servers connected to the load balancer, allowing the failed server time to get back online.

    The final key to our disaster recovery plan is off-site monitoring that notifies all IT staff of any outages or errors on the main website encountered by the monitor. Messages are sent by email, voicemail, and SMS.

    According to Disasterrecovery.org, disaster recovery planning is the way companies successfully manage crises with minimal cost and effort and maximum speed, compared to others that are forced to make decisions out of desperation when disasters occur. In addition, SunGard stated in 2009 that the first step in disaster recovery planning is to analyze company risks and factor in fixed costs for things like hardware, software, staffing, and utilities, as well as indirect costs, such as floor space, power protection, physical and information security, and management. Availability requirements also need to be determined per application and system, as well as the strategies for recovery.


  • ArchBeat Link-o-Rama for October 23, 2013

    - by OTN ArchBeat
    Virtual Dev Day: Oracle ADF Development - Web, Mobile, and Beyond
    This free virtual event includes technical sessions that range from introductory to deep dive, covering Oracle ADF and Oracle ADF Mobile. Multiple tracks cover every interest and every level and include live online Q&A for answers to your technical questions. Register now!
    Americas: Tuesday, November 19, 9am-1pm PT / 12pm-4pm ET / 1pm-5pm BRT
    APAC: Thursday, November 21, 10am-1:30pm IST (India) / 12:30pm-4pm SGT (Singapore) / 3:30pm-7pm AESDT
    EMEA: Tuesday, November 26, 9am-1pm GMT / 1pm-5pm GST / 2:30pm-6:30pm IST

    A Roadmap for SOA Development and Delivery | Mark Nelson
    Do you know the way to S-O-A? Mark Nelson does. His latest blog post, part of an ongoing series, will help to keep you from getting lost along the way.

    Updated ODI Statement of Direction | Robert Schweighardt
    Heads up, Oracle Data Integrator fans! A new product statement of direction document is available, offering "an overview of the strategic product plans for Oracle's data integration products for bulk data movement and transformation, specifically Oracle Data Integrator (ODI) and Oracle Warehouse Builder (OWB)."

    Java-Powered Robot Named NAO Wows Crowds | Tori Wieldt
    Java community manager Tori Wieldt interviews a robot and a human.

    Nordic OTN Tour 2013 | Lonneke Dikmans
    Oracle ACE Director Lonneke Dikmans checks in from the Stockholm leg of the Nordic OTN Tour for 2013, sponsored by the Danish Oracle User Group and featuring fellow ACE Directors Tim Hall and Sten Vesterli, plus local speakers at various stops. Lonneke's post includes the slides from three of the presentations.

    Thought for the Day
    "Some people approach every problem with an open mouth." - Adlai E. Stevenson, 23rd Vice President of the United States (October 23, 1835 - June 14, 1914)
    Source: brainyquote.com


  • Clouds Around the World

    - by user12608550
    At the NIST Cloud Computing Workshop this week, representatives from Canada, China, and Japan presented on their cloud computing efforts. Some interesting points made:

        Canada: Building a "Service Canada" cloud for all citizen services, but raised the issue of data location: cloud data must stay within Canada's borders, so they will not focus on public clouds where they don't know or can't control data location.

        Japan: In response to the massive destruction of the Great East Japan Earthquake, Japan is building nation-wide cloud services to support disaster relief, data recovery, and support for rebuilding new communities.

    US Ambassador Philip Verveer discussed the need for international cooperation and standards development to enable interoperability of cloud services, keeping in mind cultural and political differences. Additionally, an industry panel reported on cloud standards development, including some actual interoperability testing at http://www.cloudplugfest.org. Much of the first two days of the workshop covered progress and action plans around the 10 High-Priority Requirements to Further USG Agency Cloud Computing Adoption. Thursday's sessions will cover the work of the various NIST Cloud Computing Working Groups on:

        - Reference Architecture and Taxonomy
        - Standards Acceleration to Jumpstart the Adoption of Cloud Computing (SAJACC)
        - Cloud Security
        - Standards Roadmap
        - Business Use Cases

    (See Working Groups of NIST Cloud Computing.)


  • Example: Cross Cutting Concerns of an Application

    A little while ago I was given an opportunity to design and implement a new system that sent data via an HTTP POST method and then processed the results that were returned so that they could be inserted into a database. My system had eight core concerns that it needed to fulfill:

        1. Database Access
        2. Data Entities
        3. Worker
        4. Result Processing
        5. Process Flow Manager
        6. Email/Notification
        7. Error Handling
        8. Logging

    Of these eight, five were actually cross-cutting concerns:

        1. Database Access
        2. Data Entities
        3. Email/Notification
        4. Error Handling
        5. Logging

    These five cross-cutting concerns were identified after I created an aspect-oriented model to help identify the system components that could be factored out into separate components. These separated components were then included in the system so that they could be used by the various other components: they allow all of the other components to access the database, store data, send notifications, handle errors, and log all system events. Thus, these components share unique aspects of the system via their implementation, as the sketch after this paragraph illustrates.

    The use of aspect-oriented architecture greatly helped me define what components I needed to create and what each of those components could do. It also showed how all of the aspects depended on each other, so that each component did not have to re-implement code that was already present in the existing system.
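    To make the idea concrete, here is one way a cross-cutting concern such as logging can be factored out so that no component re-implements it. This sketch uses a JDK dynamic proxy in Java; the original system's components and language are not shown in the post, so the names here are illustrative:

        import java.lang.reflect.InvocationHandler;
        import java.lang.reflect.InvocationTargetException;
        import java.lang.reflect.Method;
        import java.lang.reflect.Proxy;

        // Wraps any component interface so every call is logged (and every
        // error reported) in one place instead of inside each component.
        class LoggingAspect implements InvocationHandler {
            private final Object target;

            private LoggingAspect(Object target) { this.target = target; }

            @SuppressWarnings("unchecked")
            static <T> T wrap(T target, Class<T> iface) {
                return (T) Proxy.newProxyInstance(
                    iface.getClassLoader(), new Class<?>[] { iface },
                    new LoggingAspect(target));
            }

            @Override
            public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
                System.out.println("calling " + method.getName());
                try {
                    return method.invoke(target, args);
                } catch (InvocationTargetException e) {
                    System.out.println("error in " + method.getName() + ": " + e.getCause());
                    throw e.getCause(); // error handling also stays centralized
                }
            }
        }

    Any component interface, e.g. a hypothetical ResultProcessor, can then be wrapped once with LoggingAspect.wrap(processor, ResultProcessor.class) and used exactly as before.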


  • 12.10 install overwrote my windows partition

    - by Niall C
    Recently I decided to switch back to Ubuntu. I have a 3TB drive which was running Windows 7. I had three partitions:

        C: Windows
        D: data
        E: data

    Having installed Ubuntu before, I 'thought' I knew what I was doing. Using UNetbootin, I installed from a USB stick. I didn't choose the default options, but I didn't choose the 'manual install' either. I can't remember which option I took, but I figured at some stage it would tell me how it was going to partition the disk, and at that point I would see whether it had recognised the NTFS partitions and could abort if it hadn't. Unfortunately it didn't: it just went ahead, installed Ubuntu, and made up its own mind about how to partition the disk. Usual story: the two NTFS data partitions weren't backed up.

    Is there anything I can do to retrieve the NTFS data? I'm currently trying out TestDisk, and I know I can use PhotoRec to retrieve certain file types, but all the filenames will be lost and it's going to take a hell of a lot of time to rename them all. Any help or assistance would be more than gratefully accepted. Thanks in advance, Niall


  • How do I write to an outer TrueCrypt volume when the inner volume protection prevents writing?

    - by con-f-use
    In a nutshell: after some time using the outer volume of a hidden volume in TrueCrypt, I cannot write to the outer volume anymore. The protection of the inner volume always kicks in first. How do I fix this?

    Details: I'm using TrueCrypt's two-layered encryption on a USB stick. The outer container carries my semi-sensitive stuff, while the inner hidden volume holds the more valuable information. I use both the inner and the outer volume regularly, and that is part of the problem. TrueCrypt can mount the outer volume for writing while protecting the inner one. Usually the inner volume, when not protected this way (or when mounted read-only), would be indistinguishable from free space. That is of course part of TrueCrypt's plausible-deniability scheme.

    At the beginning, everything worked as expected. I could copy data to and delete data from the outer volume as I pleased. Now it seems that I have written and deleted enough data to have filled the outer volume once. Despite the write protection, Ubuntu now tries to write to the continuous "free space" that is really the inner volume. It does that even though there is enough other free space on the outer volume; but that free space used to hold data, so it is fragmented, and the filesystem prefers continuous space for writes. The write to the continuous free space of the outer volume of course fails, as TrueCrypt's inner-volume protection kicks in.

    The question: I know this is expected behaviour, but is there a better way to write to the outer volume that does not attempt to write to the hidden free space at the end? The whole question could be rephrased more generally as: how do I control where on a partition data is written in Ubuntu?


  • MVC pattern synchronisation

    - by Hariprasad
    I am facing a problem synchronizing my model and view threads. I have a view which is a table; in it, the user can select a few rows. I update the view as soon as the user clicks on any row, since I don't want the UI to be slow. This updating is done by logic which runs in the controller thread. At the same time, the controller updates the model data, which takes place in a different thread: the controller puts the query in a queue, which is then executed by the model thread (a single-threaded interface). As soon as the query executes, the controller gets a signal.

    Now, in order to keep the view and model synchronized, I update the view again based on the return value of the query (the data returned by the model), even though I already updated the view for that user action. But I am facing issues because it takes a long time for the model to return the result, and by that time the user may have performed multiple clicks. As a result of updating the view again based on the information from the model, the view sometimes goes back to the state produced by an earlier click. (Suppose the user clicks three times on different rows. I update the view as soon as each click happens. I also update the view when I get data back from the model, which is supposed to match the already-updated state of the view. Now, when the user clicks the third time, I receive the data for the first click from the model; as a result, the view goes back to the state generated by the first click.)

    Is there any way to handle such a synchronization issue?
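    One common fix is to tag every query with a monotonically increasing token and have the view apply a model result only if it is still the newest one; results from superseded clicks are simply dropped. A minimal sketch of that idea (in Java, names hypothetical):

        import java.util.concurrent.atomic.AtomicLong;

        class ClickSynchronizer {
            private final AtomicLong latest = new AtomicLong();

            // controller thread: called on each click; send this token with the query
            long onUserClick() {
                return latest.incrementAndGet();
            }

            // called when the model thread's result arrives back on the UI thread
            void onModelResult(long token, Runnable applyToView) {
                if (token == latest.get()) {
                    applyToView.run(); // result matches the newest click
                }
                // otherwise the result is stale: discard it silently
            }
        }

    Since the optimistic update already happened at click time, dropping stale confirmations leaves the view consistent with the user's latest action; only a result that disagrees with the newest optimistic state would need a corrective redraw.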


  • Turn a Kindle into a Weather Display Station

    - by Jason Fitzpatrick
    The e-ink display, network connectivity, and low power consumption of Kindle ebook readers make them a perfect candidate for an infrequently refreshed, high-visibility display - like a weather display. Read on to see how to hack a Kindle to serve up the local weather. Tinkerer and hardware hacker Matt Petroff hacked his Kindle to accept input from a web server and then, graciously and in the spirit of geeky projects everywhere, shared his source code. He explains the heart of the project:

        The server side of the system uses shell and Python scripts to convert weather forecast data into an image for the Kindle. The scripts first download and parse forecast data from NOAA via the National Digital Forecast Database XML/SOAP Service. After parsing the data, the data then needs to be converted into an image. This is accomplished by preprocessing a specially crafted SVG file to insert temperatures, forecast symbols, and days of the week. This SVG is then rendered as a PNG using rsvg-convert and converted to a grayscale, no transparency color space as required by the Kindle using pngcrush. Finally, it is copied to a public location on the web server.

    The Kindle is set to refresh twice a day (you could easily tweak the scripts for a more frequent refresh) and displays the forecast as seen in the photo above, with crisp and easy-to-read text and icons. Hit up the link below for more information and the project's source code.

