Search Results

Search found 20582 results on 824 pages for 'double array'.

Page 592/824 | < Previous Page | 588 589 590 591 592 593 594 595 596 597 598 599  | Next Page >

  • Reflection and the params Keyword

    - by Robert May
    I've had to look this up a couple of times, and there's not much out there, so I end up guessing the same answer over and over. When using MethodBase.GetParameters() to get an array of ParameterInfo objects, I often want a count of the parameters that are out, optional, params, etc. For out and optional, you can simply check ParameterInfo.IsOut or ParameterInfo.IsOptional or any number of other "Attributes". For params, however, there isn't a property on ParameterInfo. Instead, you have to do this:

        info.GetCustomAttributes(typeof(ParamArrayAttribute), true)

    This will get you the set of attributes that are the ParamArrayAttribute, which you can then turn into a LINQ statement that looks like this:

        methodParameters.Count(info => info.GetCustomAttributes(typeof(ParamArrayAttribute), true).Count() > 0);

    Assuming that methodParameters is the result of MethodBase.GetParameters(), this will give you a count of the parameters that have the params keyword. Of course, there can be only one, but who's counting! Now, hopefully, the next time I try to look this up, my own blog will turn up the answer. Technorati Tags: Reflection

    Read the article

  • Growing Talent

    The subtitle of Daniel Coyle's intriguing book The Talent Code is "Greatness Isn't Born. It's Grown. Here's How." The Talent Code proceeds to lay out a theory of how expertise can be cultivated through specific practices that encourage the growth of myelin in the brain. Myelin is a material that is produced and wraps around heavily used circuits in the brain, making them more efficient. Coyle uses an analogy that geeks will appreciate: when a circuit in the brain is used a lot (i.e. a specific action is repeated), the myelin insulates that circuit, increasing its bandwidth from telephone-over-copper to high-speed broadband. This leads to the funny phenomenon of effortless expertise. Although highly skilled, the best players make it look easy. Coyle provides some biological backing for the long-held theory that it takes 10,000 hours of practice to achieve mastery over a given subject. 10,000 hours, or 10 years - as in Teach Yourself Programming in Ten Years, and others. However, it is not just that more hours equals more mastery. Another factor Coyle identifies is deep practice: practice which crucially involves drills that are challenging without being impossible. Another way to put it is that every day you spend doing only tasks you find monotonous and automatic, you are literally stagnating your brain's development! Perhaps Coyle's subtitle needs one more phrase: "Greatness Isn't Born. It's Grown. Here's How. And oh yeah, it's not easy." Challenging yourself, continuing to persist in the face of repeated failures, practicing every day is not easy. As consultants, we sell our expertise, so it makes sense that we plan projects so that people can play to their strengths. At the same time, an important part of our culture is constant improvement, challenging yourself to be better. And so the balancing contest ensues. I just finished working on a proof of concept (POC) for a project we are bidding on. It was completely time-boxed, so our team naturally split responsibilities amongst ourselves according to who was better at what. I must have been pretty bad at the other components, as I found myself working on the user interface, not my usual strength. The POC had a website frontend, and one thing I do know is HTML. After starting out in pure ASP.NET WebForms, I got frustrated: time was ticking, I knew what I wanted in HTML, but I couldn't coax the right output out of the ASP.NET controls. I needed two or three elements on the screen that were identical in layout, with different content. With a backup plan in mind of writing the HTML into the response by hand, I decided to challenge myself a bit and see what I could do in an hour or two using the Microsoft-submitted jQuery micro-templating JavaScript library. This risk paid off. I was able to quickly get the user interface up and running, responsive to the JSON data we were working with. I felt energized by the double win of getting the POC ready and learning something new. Opportunities specifically like this POC don't come around often, but the takeaway is that while it won't be easy, there are ways to generate your own opportunities to grow towards greatness.

    Read the article

  • Microsoft ADO.NET 4 Step by Step

    - by Sahil Malik
    Many years ago, I wrote Pro ADO.NET 2.0. I still think that, in the plethora of new data access technologies that have come out since, the core ADO.NET fundamentals are still something every developer must know - and sadly they do not. So, for some crazy reason, I still see every project make the same data-access-related mistakes over and over again. Anyway, the challenge is that on top of the core ADO.NET fundamentals, there is a vast array of other new technologies you must learn, the most important of which is Entity Framework. So, I was asked to be - and was pleased to be - the technical reviewer for Microsoft ADO.NET 4, Step by Step, by Tim Patrick. This book introduces the reader not just to the basic ADO.NET principles, but also to Entity Framework, LINQ to SQL, and WCF Data Services. So what, you may ask, is a SharePoint guy like me doing with such interest in ADO.NET land? Well, that's what the other side says: what is a hardcore data access sorta guy doing in SharePoint land? :). I have authored/co-authored 4 books so far on data access (1,2,3,4), one on pure SharePoint, and now one on SharePoint 2010 BI. These are very intertwined topics. LINQ to SQL and LINQ to SharePoint are almost copy-paste of each other. WCF Data Services are literally the same in both. And many Entity Framework concepts also apply within SharePoint. So there, I did both for "interest" reasons.

    Read the article

  • Dirt Cheap Bi-Directional Antenna Wirelessly Extends Your LAN

    - by Jason Fitzpatrick
    If you're looking for an effective way to link remote LANs without the hassle of laying cable, this DIY bi-directional antenna is a quick (and cheap) method for bringing internet access to outbuildings and other locations. Tinkerer Danilo Larizza needed to share internet access between apartments that are relatively close together but not hardwired, ruling out simply sharing the access via existing LAN infrastructure. His solution combines a simple scrap-wire antenna array mounted inside a plastic food bin (seen here with the cover removed to show the antenna) and some coaxial cable to link the antenna to two routers. Our favorite part of the build is that he constructed the pair just to establish whether the antenna setup would even work in his location, intending to buy commercial antennas if it did; his Tupperware models worked so well, however, that they're now the permanent solution. Hit up the link below for more information about the project. 2.4 GHz Directive Biquad Antenna [via Hack A Day]

    Read the article

  • Analyzing Memory Usage: Java vs C++ Negligible?

    - by Anthony
    How does the memory usage of an integer object written in Java compare/contrast with the memory usage of an integer object written in C++? Is the difference negligible? No difference? A big difference? I'm guessing it's the same, because an int is an int regardless of the language (?). The reason I ask is that I was reading about the importance of knowing when a program's memory requirements will prevent the programmer from solving a given problem. What fascinated me is the amount of memory required for creating a single Java object. Take, for example, an Integer object. Correct me if I'm wrong, but a Java Integer object requires 24 bytes of memory:
    - 4 bytes for its int instance variable
    - 16 bytes of overhead (reference to the object's class, garbage collection info & synchronization info)
    - 4 bytes of padding
    As another example, a Java array (which is implemented as an object) requires 48+ bytes:
    - 24 bytes of header info
    - 16 bytes of object overhead
    - 4 bytes for length
    - 4 bytes for padding
    - plus the memory needed to store the values
    How do these memory usages compare with the same code written in C++? I used to be oblivious to the memory usage of the C++ and Java programs I wrote, but now that I'm beginning to learn about algorithms, I'm having a greater appreciation for the computer's resources.
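
    To see the C++ side concretely, you can print a few sizeof values. The following is a minimal sketch assuming a typical 64-bit platform; all of these sizes are implementation-defined, so the numbers in the comments are ballpark expectations, not guarantees.

        #include <iostream>
        #include <vector>

        // A plain int carries no object header in C++, and wrapping it in a
        // class adds nothing unless the class needs a vtable pointer.
        struct Boxed { int value; };
        struct Virt { virtual ~Virt() {} int value; };

        int main() {
            std::cout << "int:         " << sizeof(int)   << '\n'; // typically 4
            std::cout << "Boxed:       " << sizeof(Boxed) << '\n'; // typically 4, no header
            std::cout << "Virt:        " << sizeof(Virt)  << '\n'; // typically 16: vptr + int + padding
            std::vector<int> v(10);
            // The vector object itself is usually three pointers (24 bytes on
            // 64-bit); its ten ints live in a separate heap block of about 40 bytes.
            std::cout << "vector<int>: " << sizeof(v)     << '\n';
            return 0;
        }

    The contrast with the 24-byte Java Integer above comes almost entirely from the per-object header and padding that the JVM adds and C++ does not.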

    Read the article

  • Remote RAID Control in ESXi on a Dell PowerEdge 2950 Using OpenManage

    - by yoyomommy
    I was wondering how one can add a drive to an existing RAID array while ESXi is still running. I have read that you are able to use Dell OpenManage to do this. I have installed OMSA 7.0 on the VMware ESXi host (5.0 and fully updated), and I've installed OpenManage Essentials on a Windows Server 2008 R2 guest. The issue I'm having is that OpenManage is unable to see my RAID controller. I have seen videos and photos as parts of guides on how to do this online, so I assume the functionality exists and I just have it set up wrong.

    Read the article

  • Matrix Multiplication with C++ AMP

    - by Daniel Moth
    As part of our API tour of C++ AMP, we looked recently at parallel_for_each. I ended that post by saying we would revisit parallel_for_each after introducing array and array_view. Now is the time, so this is part 2 of parallel_for_each, and also a post that brings together everything we've seen until now.

    The code for serial and accelerated

    Consider a naïve (or brute force) serial implementation of matrix multiplication:

         0: void MatrixMultiplySerial(std::vector<float>& vC, const std::vector<float>& vA, const std::vector<float>& vB, int M, int N, int W)
         1: {
         2:   for (int row = 0; row < M; row++)
         3:   {
         4:     for (int col = 0; col < N; col++)
         5:     {
         6:       float sum = 0.0f;
         7:       for(int i = 0; i < W; i++)
         8:         sum += vA[row * W + i] * vB[i * N + col];
         9:       vC[row * N + col] = sum;
        10:     }
        11:   }
        12: }

    We notice that each loop iteration is independent of the others and so can be parallelized. If in addition we have really large amounts of data, then this is a good candidate to offload to an accelerator. First, I'll just show you an example of what that code may look like with C++ AMP, and then we'll analyze it. It is assumed that you have included #include <amp.h> at the top of your file.

        13: void MatrixMultiplySimple(std::vector<float>& vC, const std::vector<float>& vA, const std::vector<float>& vB, int M, int N, int W)
        14: {
        15:   concurrency::array_view<const float,2> a(M, W, vA);
        16:   concurrency::array_view<const float,2> b(W, N, vB);
        17:   concurrency::array_view<concurrency::writeonly<float>,2> c(M, N, vC);
        18:   concurrency::parallel_for_each(c.grid,
        19:     [=](concurrency::index<2> idx) restrict(direct3d) {
        20:       int row = idx[0]; int col = idx[1];
        21:       float sum = 0.0f;
        22:       for(int i = 0; i < W; i++)
        23:         sum += a(row, i) * b(i, col);
        24:       c[idx] = sum;
        25:     });
        26: }

    First, a visual comparison, just for fun:
    - The beginning and end are the same, i.e. lines 0,1,12 are identical to lines 13,14,26.
    - The double nested loop (lines 2,3,4,5 and 10,11) has been transformed into a parallel_for_each call (18,19,20 and 25).
    - The core algorithm (lines 6,7,8,9) is essentially the same (lines 21,22,23,24).
    - We have extra lines in the C++ AMP version (15,16,17).
    Now let's dig in deeper.

    Using array_view and extent

    When we decided to convert this function to run on an accelerator, we knew we couldn't use the std::vector objects in the restrict(direct3d) function. So we had a choice of copying the data to the concurrency::array<T,N> object, or wrapping the vector container (and hence its data) with a concurrency::array_view<T,N> object from amp.h - here we used the latter (lines 15,16,17). Now we can access the same data through the array_view objects (a and b) instead of the vector objects (vA and vB), and the added benefit is that we can capture the array_view objects in the lambda (lines 19-25) that we pass to the parallel_for_each call (line 18), and the data will get copied on demand for us to the accelerator. Note that line 15 (and ditto for 16 and 17) could have been written as two lines instead of one:

        extent<2> e(M, W);
        array_view<const float, 2> a(e, vA);

    In other words, we could have explicitly created the extent object instead of letting the array_view create it for us under the covers through the constructor overload we chose. The benefit of the extent object in this instance is that we can express that the data is indeed two-dimensional, i.e. a matrix. When we were using a vector object we could not do that, and instead we had to track the dimensions of the matrix via additional unrelated variables (i.e. the integers M and W) - aren't you loving C++ AMP already? Note that the const before the float when creating a and b will result in the underlying data only being copied to the accelerator and not copied back - a nice optimization. A similar thing is happening on line 17 when creating array_view c, where we have indicated that we do not need to copy the data to the accelerator, only copy it back.

    The kernel dispatch

    On line 18 we make the call to the C++ AMP entry point (parallel_for_each) to invoke our parallel loop or, as some may say, dispatch our kernel. The first argument we need to pass describes how many threads we want for this computation. For this algorithm we decided that we want exactly the same number of threads as the number of elements in the output matrix, i.e. in array_view c, which will eventually update the vector vC. So each thread will compute exactly one result. Since the elements in c are organized in a 2-dimensional manner, we can organize our threads in a two-dimensional manner too. We don't have to think too much about how to create the first argument (a grid) since the array_view object helpfully exposes that as a property. Note that instead of c.grid we could have written grid<2>(c.extent) or grid<2>(extent<2>(M, N)) - the result is the same in that we have specified M*N threads to execute our lambda. The second argument is a restrict(direct3d) lambda that accepts an index object. Since we elected to use a two-dimensional extent as the first argument of parallel_for_each, the index will also be two-dimensional, and as covered in the previous posts it represents the thread ID, which in our case maps perfectly to the index of each element in the resulting array_view.

    The kernel itself

    The lambda body (lines 20-24), or as some may say, the kernel, is the code that will actually execute on the accelerator. It will be called by M*N threads, and we can use those threads to index into the two input array_views (a,b) and write results into the output array_view (c). The four lines (21-24) are essentially identical to the four lines of the serial algorithm (6-9). The only difference is how we index into a,b,c versus how we index into vA,vB,vC. The code we wrote with C++ AMP is much nicer in its indexing, because the dimensionality is a first-class concept, so you don't have to do funny arithmetic calculating the index of where the next row starts, which you have to do when working with vectors directly (since they store all the data in a flat manner). I skipped over describing line 20. Note that we didn't really need to read the two components of the index into temporary local variables. This mostly reflects my personal choice, in some algorithms, to break down the index into local variables with names that make sense for the algorithm, i.e. in this case row and col. In other cases it may be i,j,k or x,y,z, or M,N or whatever. Also note that we could have written line 24 as c(idx[0], idx[1]) = sum or c(row, col) = sum instead of the simpler c[idx] = sum.

    Targeting a specific accelerator

    Imagine that we had more than one hardware accelerator on a system and we wanted to pick a specific one to execute this parallel loop on. So there would be some code like this anywhere before line 18:

        vector<accelerator> accs = MyFunctionThatChoosesSuitableAccelerators();
        accelerator acc = accs[0];

    ...and then we would modify line 18 so we would be calling another overload of parallel_for_each that accepts an accelerator_view as the first argument, so it would become:

        concurrency::parallel_for_each(acc.default_view, c.grid,

    ...and the rest of your code remains the same... how simple is that? Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • The best way to have a pointer to several methods - critique requested

    - by user827992
    I'm starting with a short introduction of what I know from the C language:
    - a pointer is a type that stores an address or NULL
    - the * operator reads the value of the variable on its right, uses this value as an address, and reads the value of the variable at that address
    - the & operator generates a pointer to the variable on its right
    So I was thinking that in C++ pointers could work this way too, but I was wrong. To generate a pointer to a static method I have to do this:

        #include <iostream>

        class Foo{
        public:
            static void dummy(void){ std::cout << "I'm dummy" << std::endl; };
        };

        int main(){
            void (*p)();
            p = Foo::dummy;    // step 1
            p();
            p = &(Foo::dummy); // step 2
            p();
            p = Foo;           // step 3
            p->dummy();
            return(0);
        }

    Now I have several questions:
    - why does step 1 work?
    - why does step 2 work too? It looks like a "pointer to pointer" for p to me, very different from step 1
    - why is step 3 the only one that doesn't work, when honestly it's the only one that makes some sort of sense to me?
    - how can I write an array of pointers, or a pointer-to-pointers structure, to store methods (static or non-static, from real objects)?
    - what is the best syntax and coding style for generating a pointer to a method?
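
    On the storage question in particular, one common pattern since C++11 (a sketch, not the only answer) is to let std::function erase the differences between static methods, bound member methods, and lambdas, so they can all live in one array:

        #include <functional>
        #include <iostream>
        #include <vector>

        struct Bar {
            void greet() { std::cout << "hello from a member" << std::endl; }
            static void stat() { std::cout << "hello from a static" << std::endl; }
        };

        int main() {
            Bar b;
            std::vector<std::function<void()>> calls;
            calls.push_back(&Bar::stat);                 // static method: decays to a plain function pointer
            calls.push_back(std::bind(&Bar::greet, &b)); // non-static method: must be bound to an object
            calls.push_back([&b] { b.greet(); });        // same thing via a lambda capture
            for (auto& f : calls) f();                   // uniform call syntax for all three
            return 0;
        }

    Raw member-function pointers (void (Bar::*)(), plus an object to call them on) work too, but they cannot share an array with ordinary function pointers, which is exactly the friction the question runs into.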

    Read the article

  • Managing constant buffers without FX interface

    - by xcrypt
    I am aware that there is a sample on working without FX in the sample browser, and I already checked that one. However, some questions arise. In the sample:

        D3DXMATRIXA16 mWorldViewProj;
        D3DXMATRIXA16 mWorld;
        D3DXMATRIXA16 mView;
        D3DXMATRIXA16 mProj;

        mWorld = g_World;
        mView = g_View;
        mProj = g_Projection;
        mWorldViewProj = mWorld * mView * mProj;

        VS_CONSTANT_BUFFER* pConstData;
        g_pConstantBuffer10->Map( D3D10_MAP_WRITE_DISCARD, NULL, ( void** )&pConstData );
        pConstData->mWorldViewProj = mWorldViewProj;
        pConstData->fTime = fBoundedTime;
        g_pConstantBuffer10->Unmap();

    They are copying their D3DXMATRIXes to D3DXMATRIXA16. Checked on MSDN: these matrices are 16-byte aligned and optimized for the Intel Pentium 4. So, as my first question:
    1) Is it necessary to copy matrices to D3DXMATRIXA16 before sending them to the constant buffer? And if not, why don't we just use D3DXMATRIXA16 all the time?
    I have another question about managing multiple constant buffers within one shader. Suppose that, within your shader, you have multiple constant buffers that need to be updated at different times:

        cbuffer cbNeverChanges { matrix View; };
        cbuffer cbChangeOnResize { matrix Projection; };
        cbuffer cbChangesEveryFrame { matrix World; float4 vMeshColor; };

    Then how would I set these buffers all at different times?

        g_pd3dDevice->VSSetConstantBuffers( 0, 1, &g_pConstantBuffer10 );

    gives me the possibility to set multiple buffers, but that is within one call.
    2) Is that okay even if my constant buffers are updated at different times? And I suppose I have to make sure the constant buffers are in the same position in the array as the order they appear in the shader?
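
    For the second question, here is a minimal sketch of one common arrangement (D3D10-style to match the sample; the buffer names and the CBChangesEveryFrame struct are assumptions, and the buffers are assumed to be created with D3D10_USAGE_DYNAMIC and CPU write access): bind all the buffers once, in the same slot order as they appear in the shader, then Map only the buffer whose contents actually changed.

        #include <d3d10.h>

        // Hypothetical CPU-side mirror of cbChangesEveryFrame in the shader.
        struct CBChangesEveryFrame {
            float world[16];
            float vMeshColor[4];
        };

        void BindAndUpdate(ID3D10Device* dev,
                           ID3D10Buffer* cbNever, ID3D10Buffer* cbOnResize,
                           ID3D10Buffer* cbPerFrame, const CBChangesEveryFrame& data)
        {
            // One call binds slots 0..2; the slot order must match the cbuffer
            // declaration order (or explicit register assignments) in the shader.
            ID3D10Buffer* bufs[3] = { cbNever, cbOnResize, cbPerFrame };
            dev->VSSetConstantBuffers(0, 3, bufs);

            // "Updated at different times" just means separate Map calls: the
            // other two buffers are left untouched until a resize, or never.
            void* mapped = nullptr;
            if (SUCCEEDED(cbPerFrame->Map(D3D10_MAP_WRITE_DISCARD, 0, &mapped)))
            {
                *static_cast<CBChangesEveryFrame*>(mapped) = data;
                cbPerFrame->Unmap();
            }
        }

    In other words, binding in one call and updating at different frequencies are independent concerns: the binding can stay in place across frames while each buffer is re-filled on its own schedule.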

    Read the article

  • Getting Wrong Answer in range maximum query [on hold]

    - by user3186829
    I've just learnt range minimum and maximum queries using segment trees. But when I implemented it on my own I'm getting a wrong answer. Logically I don't find any mistake in my code, but if anyone can point it out I would be really thankful. Code in C++:

        #include<bits/stdc++.h>
        using namespace std;
        #define LL long long
        #define mp make_pair
        #define pb push_back
        #define gc getchar_unlocked
        #define pc putchar_unlocked
        #define LD long double
        #define MAXN 19999999
        #define max(a,b) ((a)>(b)?(a):(b))

        LL P[MAXN+15];
        LL ST[2*MAXN+25];
        long N,M,i,A,B,K;

        void build(long id,long L,long R)
        {
            long M=(L+R)>>1L;
            long LCT=id<<1L;
            long RCT=LCT+1L;
            if(L==R)
            {
                ST[id]=P[L];
                return;
            }
            build(LCT,L,M);
            build(RCT,M+1,R);
        }

        //Range update of segment tree
        void updateST(long id,long L,long R,long Q1,long Q2,long val)
        {
            long M=(L+R)>>1L;
            long LCT=id<<1L;
            long RCT=LCT+1L;
            if(L>Q2||R<Q1)
            {
                return;
            }
            if(L==Q1&&R==Q2)
            {
                ST[id]+=val;
                return;
            }
            if(Q2<=M)
            {
                updateST(LCT,L,M,Q1,Q2,val);
            }
            else if(Q1>M)
            {
                updateST(RCT,M+1,R,Q1,Q2,val);
            }
            else
            {
                updateST(LCT,L,M,Q1,M,val);
                updateST(RCT,M+1,R,M+1,Q2,val);
            }
        }

        //Query for finding maximum element in a given range [Q1,Q2], 1<=Q1,Q2<=N
        LL query2(long id,long L,long R,long Q1,long Q2)
        {
            long M=(L+R)>>1;
            long LCT=id<<1;
            long RCT=LCT+1;
            if(L>Q2||R<Q1)
            {
                return 0;
            }
            if(L==Q1&&R==Q2)
            {
                return ST[id];
            }
            if(Q2<=M)
            {
                return query2(LCT,L,M,Q1,Q2);
            }
            else if(Q1>M)
            {
                return query2(RCT,M+1,R,Q1,Q2);
            }
            else
            {
                LL G=query2(LCT,L,M,Q1,M);
                LL H=query2(RCT,M+1,R,M+1,Q2);
                LL RES=max(G,H);
                return RES;
            }
        }

        int main()
        {
            scanf("%ld %ld",&N,&M);
            build(1,1,N);
            for(i=0;i<M;i++)
            {
                scanf("%ld %ld %ld",&A,&B,&K);
                updateST(1,1,N,A,B,K);
            }
            //Finding maximum element in range [1,N]
            cout<<query2(1,1,N,1,N);
            return 0;
        }
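
    For contrast, here is a minimal sketch of the usual bookkeeping for range-add with range-max (an illustration under its own assumptions - values start at zero, as in the post where P[] is never filled - not necessarily the poster's intended fix): keep a pending add and a true subtree maximum per node, recompute each parent from its children after an update, and use a proper identity for disjoint ranges instead of 0.

        #include <algorithm>
        #include <climits>
        #include <vector>

        struct RangeAddMax {
            int n;
            std::vector<long long> add, mx; // pending add per node; true max of the node's range

            explicit RangeAddMax(int n) : n(n), add(4 * n, 0), mx(4 * n, 0) {}

            void update(int id, int L, int R, int q1, int q2, long long v) {
                if (q2 < L || R < q1) return;
                if (q1 <= L && R <= q2) { add[id] += v; mx[id] += v; return; }
                int M = (L + R) / 2;
                update(2 * id, L, M, q1, q2, v);
                update(2 * id + 1, M + 1, R, q1, q2, v);
                // Parents must be recomputed from children plus their own
                // pending add - the step missing from build/updateST above.
                mx[id] = add[id] + std::max(mx[2 * id], mx[2 * id + 1]);
            }

            long long query(int id, int L, int R, int q1, int q2) {
                if (q2 < L || R < q1) return LLONG_MIN; // identity for max, not 0
                if (q1 <= L && R <= q2) return mx[id];
                int M = (L + R) / 2;
                long long best = std::max(query(2 * id, L, M, q1, q2),
                                          query(2 * id + 1, M + 1, R, q1, q2));
                return best + add[id]; // adds on the path apply to everything below
            }
        };

    Used as RangeAddMax t(N); t.update(1, 1, N, A, B, K); ... ; t.query(1, 1, N, 1, N); it mirrors the post's final query.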

    Read the article

  • Top 25 security issues for developers of web sites

    - by BizTalk Visionary
    Sourced from: CWE. This is a brief listing of the Top 25 items, using the general ranking. NOTE: 16 other weaknesses were considered for inclusion in the Top 25, but their general scores were not high enough. They are listed in the On the Cusp focus profile.

    Rank  Score  ID       Name
    [1]   346    CWE-79   Failure to Preserve Web Page Structure ('Cross-site Scripting')
    [2]   330    CWE-89   Improper Sanitization of Special Elements used in an SQL Command ('SQL Injection')
    [3]   273    CWE-120  Buffer Copy without Checking Size of Input ('Classic Buffer Overflow')
    [4]   261    CWE-352  Cross-Site Request Forgery (CSRF)
    [5]   219    CWE-285  Improper Access Control (Authorization)
    [6]   202    CWE-807  Reliance on Untrusted Inputs in a Security Decision
    [7]   197    CWE-22   Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal')
    [8]   194    CWE-434  Unrestricted Upload of File with Dangerous Type
    [9]   188    CWE-78   Improper Sanitization of Special Elements used in an OS Command ('OS Command Injection')
    [10]  188    CWE-311  Missing Encryption of Sensitive Data
    [11]  176    CWE-798  Use of Hard-coded Credentials
    [12]  158    CWE-805  Buffer Access with Incorrect Length Value
    [13]  157    CWE-98   Improper Control of Filename for Include/Require Statement in PHP Program ('PHP File Inclusion')
    [14]  156    CWE-129  Improper Validation of Array Index
    [15]  155    CWE-754  Improper Check for Unusual or Exceptional Conditions
    [16]  154    CWE-209  Information Exposure Through an Error Message
    [17]  154    CWE-190  Integer Overflow or Wraparound
    [18]  153    CWE-131  Incorrect Calculation of Buffer Size
    [19]  147    CWE-306  Missing Authentication for Critical Function
    [20]  146    CWE-494  Download of Code Without Integrity Check
    [21]  145    CWE-732  Incorrect Permission Assignment for Critical Resource
    [22]  145    CWE-770  Allocation of Resources Without Limits or Throttling
    [23]  142    CWE-601  URL Redirection to Untrusted Site ('Open Redirect')
    [24]  141    CWE-327  Use of a Broken or Risky Cryptographic Algorithm
    [25]  138    CWE-362  Race Condition

    Cross-site scripting and SQL injection are the 1-2 punch of security weaknesses in 2010. Even when a software package doesn't primarily run on the web, there's a good chance that it has a web-based management interface or HTML-based output formats that allow cross-site scripting. For data-rich software applications, SQL injection is the means to steal the keys to the kingdom. The classic buffer overflow comes in third, while more complex buffer overflow variants are sprinkled in the rest of the Top 25.

    Read the article

  • Blink-Data vs Instinct?

    - by Samantha.Y. Ma
    In his landmark bestseller Blink, well-known author and journalist Malcolm Gladwell explores how human beings make seemingly instantaneous choices every day - in the blink of an eye - and how we "think without thinking." These situations actually aren't as simple as they seem, he postulates; and throughout the book, Gladwell seeks answers to questions such as:
    1. What makes some people good at thinking on their feet and making quick, spontaneous decisions?
    2. Why do some people follow their instincts and win, while others consistently seem to stumble into error?
    3. Why are some of the best decisions often those that are difficult to explain to others?
    In Blink, Gladwell introduces us to the psychologist who has learned to predict whether a marriage will last based on a few minutes of observing a couple; the tennis coach who knows when a player will double-fault before the racket even makes contact with the ball; the antiquities experts who recognize a fake at a glance. Ultimately, Blink reveals that great decision makers aren't those who spend the most time deliberating or analyzing information, but those who focus on key factors among an overwhelming number of variables - i.e., those who have perfected the art of "thin-slicing."
    In Data vs. Instinct: Perfecting Global Sales Performance, a new report sponsored by Oracle, the Economist Intelligence Unit (EIU) explores the roles data and instinct play in decision-making by sales managers and discusses how sales executives can increase sales performance through more effective territory planning and incentive/compensation strategies. If you are a sales executive, ask yourself this: do you rely on knowledge (data) when you plan out your sales strategy? If you rely on data, how do you ensure that your data sources are reliable, up-to-date, and complete? With the emergence of social media and the proliferation of both structured and unstructured data, how do you know that you are applying your information/data correctly and in context? Three key findings in the report are:
    - Six out of ten executives say they rely more on data than instinct to drive decisions.
    - Nearly one half (48 percent) of incentive compensation plans do not achieve the desired results.
    - Senior sales executives rely more on current and historical data than on forecast data.
    Strikingly similar to what Gladwell concludes in Blink, the report's authors succinctly sum up their findings: "The best outcome is a combination of timely information, insightful predictions, and support data." Applying this insight is crucial to creating a sound sales plan that drives alignment and results. In the area of sales performance management, "territory programs and incentive compensation continue to present particularly complex challenges in an increasingly globalized market," say the report's authors. "It behooves companies to get a better handle on translating that data into actionable and effective plans." To help solve this challenge, Oracle Fusion CRM integrates forecasting, quotas, compensation, and territories into a single system. For example, Oracle Fusion CRM provides a natural integration between territories, which define the sales targets (e.g., a collection of accounts) for the sales force, and quotas, which quantify the sales targets. In fact, territory hierarchy is a core analytic dimension for slicing and dicing sales results, using sales analytics and alerts to help you identify where problems are occurring.
    Start tapping into both data and instinct effectively today with Oracle Fusion CRM. Here is a short video to provide you with a snapshot of how it can help you optimize your sales performance.

    Read the article

  • Solution for storing sata drives outside of case

    - by Jeffrey Kevin Pry
    I have a system that has 8 SATA disks in a software RAID 5 array using mdadm. My issue is that I want to move the drives out of the computer case in order to cool them more efficiently. I have looked all over the web and only seem to find enclosures that hide the drives' connectors behind an eSATA port or some other internal RAID controller. Basically what I want is an enclosure (or equivalent) that I can run independent SATA cables to, and either power as well, or have it have its own power supply. I have the SATA ports available on the motherboard and don't want to limit I/O by using one port with a multiplier or the like. One final caveat: I am a college student on a budget and don't have a fortune to spend on such an enclosure. Thanks in advance for your help and advice.

    Read the article

  • iSCSI RAID1 on two servers, fail scenario

    - by Franz Kafka
    Hello, a simple question: imagine I have two servers, each with two disks in RAID 1. Now I merge the two arrays with iSCSI into one RAID 1 disk. Two questions:
    1. Can I do the merging of the 4 disks in one go? I can't imagine how - first I will have to install the OS, and by then the RAID controller is already set up for RAID 1.
    2. If a whole server fails, would the other server continue working without any problems? Does iSCSI notice that the other server is missing and treat this as if those two disks were broken? And when the server comes back online, is the data resynced, as if I had installed new disks into an array? Is that the right way to picture it?
    Thanks a lot.

    Read the article

  • links for 2011-01-03

    - by Bob Rhubart
    Using Solaris zfs + iscsi targets with Oracle VM (Wim Coekaerts Blog) "I was playing with my Oracle VM setup and needed some shared storage that was block based. I did not have a storage array available but I did have a Solaris box, that I use for Oracle VDI, available." - Wim Coekaerts (tags: oracle otn solaris oraclevm virtualization) DanT's GridBlog: Oracle Grid Engine: Changes for a Bright Future at Oracle "Today, we are entering a new chapter in Oracle Grid Engine’s life. Oracle has been working with key members of the open source community to pass on the torch for maintaining the open source code base to the Open Grid Scheduler project hosted on SourceForge." - Dan Templeton (tags: oracle gridengine) Oracle Fusion Middleware Security: How do I secure my services? "I've been up early for a couple of days talking to a customer about how they should secure their services,' says Chris Johnson. "I'm going to tell you what I told them." (tags: oracle fusionmiddleware security) OldSpice your Innovation - Dangers of Status Quo E2.0 | Enterprise 2.0 Blogs "If organizations only leverage E2.0 technologies in a 'me too' fashion, they are essentially using a bucket to bail water from a leaking ship." - John Brunswick (tags: oracle enteprise2.0) The Aquarium: GlassFish in 2011 - What to expect A look into the Glassfish crystal ball... (tags: oracle glassfish) Andrejus Baranovskis's Blog: Fusion Middleware 11g Security - Retrieve Security Groups from ADF 11g Oracle ACE Director Andrejus Baranovskis shows you what to do when you need to access security information directly from an ADF 11g application. (tags: oracle otn fusionmiddleware security adf) @eelzinga: Book review : Oracle SOA Suite 11g R1 Developer's Guide "What I really liked in this book...was the compare/description of the Oracle Service Bus. The authors did a great job on describing functionality of components existing in the SOA Suite and how to model them in your own process." - Oracle ACE Eric ElZinga (tags: oracle oracleace soa bookreview soasuite)

    Read the article

  • 284 GiB of data, 217.4 GiB of space

    - by Malfist
    I want to reinstall my OS, but I no longer have the hard drive space to back up (I have a RAID 1 array, so I haven't done it for a while). In my /home I have 284.8 GiB of data, and I have a spare 250 GB (or 217.4 GiB) hard drive that I've been using for backup. What type of compression algorithm (if any) is capable of this level of compression? I don't care about the time; I have a quad core, so something that utilizes all 4 cores would be great. I have tried 7zip with no success: it ran on one core for two days and failed because of lack of space. Any ideas?

    Read the article

  • Restoring GRUB2 on Software RAID 0 using LiveCD after Windows 7 wiped it

    - by unknownthreat
    I have installed Ubuntu 10.10 on my system. However, I needed to install Windows 7 back, and I expected that it would alter GRUB - and it did. Right now, the partition layout on my software RAID 0 looks like this: nvidia_acajefec1 is Ubuntu 10.10 and nvidia_acajefec3 is Windows 7. I've been following some guides and I always get stuck at GRUB not being able to detect the usual RAID content. I've tried running:

        sudo grub
        > root (hd0,0)

    GRUB complains it couldn't find my hard disk. So I tried:

        find (hd0,0)

    and it complains that it couldn't find anything. So I tried:

        find /boot/grub/stage1

    It said "file not found". Here's the text from the console:

        ubuntu@ubuntu:~$ grub
        Probing devices to guess BIOS drives. This may take a long time.
        [ Minimal BASH-like line editing is supported. For the first word, TAB lists possible command completions. Anywhere else TAB lists the possible completions of a device/filename. ]
        grub> root (hd0,0)
        root (hd0,0)
        Error 21: Selected disk does not exist
        grub> find /boot/grub/stage1
        find /boot/grub/stage1
        Error 15: File not found

    Fortunately, someone suggested that what I've been trying to do is for GRUB Legacy, not GRUB2. So I went to the suggested website (http://grub.enbug.org/Grub2LiveCdInstallGuide), tried to look around, and ran:

        ubuntu@ubuntu:~$ sudo fdisk -l
        Unable to seek on /dev/sda

    This is just step 2 of the instructions in the guide, and I cannot proceed because fdisk cannot seek /dev/sda. However:

        ubuntu@ubuntu:~$ sudo dmraid -r
        /dev/sdb: nvidia, "nvidia_acajefec", stripe, ok, 488397166 sectors, data@ 0
        /dev/sda: nvidia, "nvidia_acajefec", stripe, ok, 488397166 sectors, data@ 0

    So what now? Do you have an idea for how to make fdisk see my RAID array on the live CD (Ubuntu 10.10)? Honestly, I am lost - very lost - in trying to restore GRUB2 on this software RAID 0 system right now.

    Read the article

  • EMEA Analytics & Data Integration Oracle Partner Forum

    - by Mike.Hallett(at)Oracle-BI&EPM
    MONDAY 12TH NOVEMBER, 2012 IN LONDON (UK). For Oracle Partners across Europe, Middle East and Africa: come to hear the latest news from Oracle OpenWorld about Oracle BI & Data Integration, and propel your business growth as an Oracle partner. This event should appeal to BI- or Data Integration-specialised partners: executives, sales, pre-sales and solution architects, with a choice of participation in the plenary day and then a set of special interest (technical) sessions. The follow-on breakout sessions from the 13th November provide deeper dives and technical training for those of you who wish to stay for more detailed and hands-on workshops. Keynote: Andrew Sutherland, SVP Oracle Technology. Hot agenda items will include:
    - The Fusion Middleware Stack: Engineered to Work Together
    - A Complete Analytics and Data Integration Solution Architecture: Big Data and Little Data Combined
    - In-Memory Analytics for Extreme Insight
    - Latest Product Development Roadmap for Data Integration and Analytics
    Venue: Oracle's London City (Moorgate) offices. Places are limited; register from this link {see Register button at bottom right of page}. Note: registration for the conference and the deeper dives and technical training is free of charge to OPN member partners, but you will be responsible for your own travel and hotel expenses.
    Event schedule: during this event you can learn about partner success stories, participate in an array of break-out sessions, exchange information with other partners and enjoy a vibrant panel discussion.
    Nov. 12th: Day 1, main plenary session (full day, starting 10.30 am); Oracle-hosted dinner in the evening.
    Nov. 13th onwards:
    - Architecture Masterclass: IM Reference Architecture, Big Data and Little Data combined (1 day)
    - BI-Apps Bootcamp (4 days)
    - Oracle GoldenGate workshop (1 day)
    - Oracle Data Integrator and Oracle Enterprise Data Quality workshop (1 day)
    For further information and detail, download the Agenda (pdf) or contact Michael Hallett at [email protected].

    Read the article

  • Collision within a poly

    - by G1i1ch
    For an HTML5 engine I'm making, I'm using a path poly for speed, and I'm having trouble finding ways to get collision with the walls of the poly. To keep it simple, I just have a vector for the object and an array of vectors for the poly; the vectors are 2D and Cartesian. Say poly = [[550,0],[169,523],[-444,323],[-444,-323],[169,-523]] - it's just a pentagon I generated. The object that will collide is object; object.pos is its position and object.vel is its velocity, both 2D vectors too. I've had some success getting it to find a collision, but it's just black-box code I ripped from a C++ example. It's very obscure inside, and all it does is return true/false; it doesn't return which vertices are collided or the collision point. I'd really like to be able to understand this and make my own so I can have more meaningful collision. I'll tackle that later, though. Again, the question is just: how does one find a collision with the walls of a poly, given the poly vertices and the object's position + velocity? If more info is needed please let me know. And if all anyone can do is point me in the right direction, that's great.
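
    One textbook direction (a sketch only, with made-up type names, so translate it into the engine's own vector helpers): treat the object's motion over a frame as a segment from pos to pos + vel and intersect it with each polygon edge; the smallest parameter t gives the first wall hit, the edge index, and the exact contact point.

        #include <cstddef>
        #include <optional>
        #include <vector>

        struct Vec2 { double x, y; };

        // 2D cross product; its sign also tells you which side of an edge a point is on.
        static double cross(Vec2 a, Vec2 b) { return a.x * b.y - a.y * b.x; }

        struct Hit { std::size_t edge; double t; Vec2 point; };

        // Sweep the motion segment (pos -> pos + vel) against every edge of the
        // poly using standard segment-segment intersection. Degenerate cases
        // (moving exactly along an edge) are skipped for brevity.
        std::optional<Hit> sweep(Vec2 pos, Vec2 vel, const std::vector<Vec2>& poly) {
            std::optional<Hit> best;
            for (std::size_t i = 0; i < poly.size(); ++i) {
                Vec2 a = poly[i];
                Vec2 b = poly[(i + 1) % poly.size()];
                Vec2 e{b.x - a.x, b.y - a.y};
                double denom = cross(vel, e);
                if (denom == 0.0) continue;          // motion parallel to this edge
                Vec2 d{a.x - pos.x, a.y - pos.y};
                double t = cross(d, e) / denom;      // fraction of the motion used
                double u = cross(d, vel) / denom;    // fraction along the edge
                if (t < 0.0 || t > 1.0 || u < 0.0 || u > 1.0) continue;
                if (!best || t < best->t)
                    best = Hit{i, t, Vec2{pos.x + t * vel.x, pos.y + t * vel.y}};
            }
            return best;
        }

    Unlike a boolean test, the returned edge index gives you the wall's direction (and hence its normal) for a bounce or slide response, and t tells you how far the object travels before contact.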

    Read the article

  • Erlang node acts like it connects, but doesn't [migrated]

    - by Malfist
    I'm trying to set up a distributed network of nodes across a few firewalls, and it's not going so well. My application is structured like this: there is a central server always running a node ([email protected]), and my co-workers' laptops connect to it on startup. This works if we're all in the office, but if someone is at home, they can connect to the master node but fail to connect to the other nodes in the swarm - i.e., Erlang fails to gossip correctly. To correct this, I've changed epmd's port number and changed the inet_dist_listen ports to known open ports (1755 and 7070 respectively). However, something fishy is going on. I can run net_adm:world() and it reports that it connects to the master node, but when I run nodes() I get an empty list. Same with net_adm:ping('[email protected]'). See:

        Eshell V5.9 (abort with ^G)
        ([email protected])1> net_adm:world().
        ['[email protected]']
        ([email protected])2> nodes().
        []
        ([email protected])3> net_adm:ping('[email protected]').
        pong
        ([email protected])4> nodes().
        []
        ([email protected])5>

    What's going on, and how can I fix it?

    Read the article

  • Back button after doing posts on the same page

    - by user441521
    I have 3 pages to my site. The 1st page allows you to select a bunch of options. Those options get sent to the 2nd page, which displays them along with some data about those options. From there I can click a link to get to page 3 for one of the options. On page 3 I can create/edit/delete, all on the same page, where reloads come back to page 3. I want a "back" button on page 3 that goes back to page 2 but retains the options it had from the original page 1 request. Page 1 has a bunch of check boxes which are passed to page 2 as arrays to the controller. My thought is that I have to pass these arrays (I converted them to lists) to page 3 (even though page 3 doesn't directly need them) so that page 3 can use them in the back link, recreating the values page 1 originally sent to page 2. I'm using ASP.NET MVC, and when I pass the converted array it seems to render the type name instead of the actual values: "types=System.Collections.Generic.List" (where types is a List). Is this the right approach, or are there other options for getting a "back" button that returns to page 2? It's sort of a pain to pass information to page 3 that isn't relevant to page 3 except for the back button.

    Read the article

  • Compressing 2D level data

    - by Lucius
    So, I'm developing a 2D, tile-based game and a map-maker thingy, all in Java. The problem is that recently I've been having some memory issues when about 4 maps are loaded. Each of these maps is composed of 128x128 tiles and has 4 layers (for details and stuff). I already spent a good amount of time searching for solutions, and the best thing I found was run-length encoding (RLE). It seems easy enough to use with static data, but is there a way to use it with data that is constantly changing, without a big drop in performance? In my maps, supposing I'm compressing the columns, I would have 128 rows, each with some amount of data (hopefully less than it would be without RLE). Whenever I change a tile, that whole row would have to be checked, and I'm afraid that would slow down production too much (and I'm on a somewhat tight schedule). Well, worst case scenario I work on each map individually and save them using RLE, but it would be really nice if I could avoid that. EDIT: What I'm currently using to store the data for the tiles is a 2D array of HashMaps that use the layer as key and store the id of the tile in that position - like this: private HashMap<Integer, Integer>[][] tiles;
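
    To make the trade-off concrete, here is a small sketch of row-wise RLE (written in C++ for brevity, with made-up names; the idea ports directly to Java): each run is a (count, tile id) pair, so editing one tile means rewriting at most the runs of its own 128-tile row rather than re-encoding the whole 128x128 layer.

        #include <cstdint>
        #include <utility>
        #include <vector>

        // Each run is (count, id): a 128-tile row of mostly-identical tiles
        // collapses to a handful of pairs.
        std::vector<std::pair<std::uint32_t, std::int32_t>>
        encodeRow(const std::vector<std::int32_t>& row) {
            std::vector<std::pair<std::uint32_t, std::int32_t>> runs;
            for (std::int32_t id : row) {
                if (!runs.empty() && runs.back().second == id)
                    ++runs.back().first;   // extend the current run
                else
                    runs.emplace_back(1, id);
            }
            return runs;
        }

        // Editing one tile can be done as decode-row, change, re-encode-row:
        // the cost is bounded by one row, never by the whole map.
        std::vector<std::int32_t>
        decodeRow(const std::vector<std::pair<std::uint32_t, std::int32_t>>& runs) {
            std::vector<std::int32_t> row;
            for (auto& [count, id] : runs)
                row.insert(row.end(), count, id);
            return row;
        }

    At 4 maps x 4 layers x 128 rows, re-encoding a single 128-tile row per edit is tiny compared to the savings on mostly-uniform layers, which is why the performance worry in the question is usually not a problem in practice.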

    Read the article

  • Easy Made Easier - Networking

    - by dragonfly
    In my last post, I highlighted the feature of the Appliance Manager Configurator to auto-fill some fields based on previous field values, including host names based on System Name and sequential IP addresses from the first IP address entered. This can make configuration a little faster and a little less subject to data entry errors, particularly if you are doing the configuration on the Oracle Database Appliance itself.
    The Oracle Database Appliance Appliance Manager Configurator is available for download here. But why would you download it, if it comes pre-installed on the Oracle Database Appliance? A common reason for customers interested in this new Engineered System is to get a good idea of how easy it is to configure. Beyond that, you can save the resulting configuration as a file, and use it on an Oracle Database Appliance. This allows you to verify the data entered in advance, and in the comfort of your office. In addition, the topic of this post is another strong reason to download and use the Appliance Manager Configurator prior to deploying your Oracle Database Appliance.
    The most common source of hiccups in deploying an Oracle Database Appliance, based on my experiences with a variety of customers, involves the network configuration. It is during Step 11, when network validation occurs, that these come to light. That is almost halfway through the 24 total steps, and can be frustrating, whether it was a typo, a DNS mis-configuration or an IP address already in use. This is why I recommend as a best practice taking advantage of the Appliance Manager Configurator prior to deploying an Oracle Database Appliance.
    Why? Not only do you get the benefit of being able to double-check your entries before you even start on the Oracle Database Appliance, you can also take advantage of the Network Validation step. This is the final step before you review all the data and can save it to a text file. It can be skipped, if you aren't ready or are not connected to the network that the Oracle Database Appliance will be on. My recommendation, though, is to run the Appliance Manager Configurator on your laptop, enter the data or re-load a previously saved file of the data, and then connect to the network that the Oracle Database Appliance will be on. Now run the Network Validation. It will check to make sure that the host names you entered are in DNS and do resolve to the IP addresses you specified. It will also ping the IP addresses you specified, so that you can verify that no other machine is already using them (yes, that has happened at customer sites).
    After you have completed the validation, as seen in the screen shot below, you can review the results and move on to saving your settings to a file for use on your Oracle Database Appliance; or, if there are errors, you can use the Back button to return to the appropriate screen and correct the data. Once you are satisfied with the Network Validation, just check the Skip/Ignore Network Validation checkbox at the top of the screen, then click Next. Is the Network Validation in the Appliance Manager Configurator required? No, but it can save you time later. I should also note that the Network Validation screen is not part of the Appliance Manager Configurator that currently ships on the Oracle Database Appliance, so this is the easiest way to verify your network configuration.
    I hope you are finding this series of posts useful. My next post will cover some aspects of the windowing environment that gets run by the 'startx' command on the Oracle Database Appliance, since this is needed to run the Appliance Manager Configurator via a directly connected monitor, keyboard and mouse, or via the ILOM. If it's been a while since you've used an OpenWindows environment, you'll want to check it out.

    Read the article

  • How to perform fresh linux install while preserving software raid and user accounts

    - by slayton
    I have a system with two software raid arrays. The OS is Ubuntu 9.04 and is no longer receiving updates. I'd like to update the system to 12.04 rather than trying to do the automatic update from 9.04-> 9.10-> ... -> 12.04. My main drive has 2 partitions that are mounted at / and /home. Is it possible to do a fresh install of linux to the partition where / is mounted while preserving user accounts and preferences (such as passwords, home dir locations, etc...)? Additionally what do I need to do to keep my software raid array intact following the OS re-install?

    Read the article

  • libgdx actors and instant actions

    - by vaati
    I'm having trouble with actors and actions. I have a list of actors; each has either no action or one sequence action. This sequence action has either:
    - a couple of actions (some instant, some with duration 0), or
    - a couple of actions followed by a parallel action.
    My problem is the following: some of the instant actions are used to set the position and the alpha of the actor. So when one of the actions is "move to x,y and set alpha to 0", the actor is visible for one frame at position 0,0, moves instantly to x,y for the next frame, and then disappears. Though this behaviour is to be expected, I want to avoid it. How can I achieve that? I tried to intercept the actions before I put the actors on the stage, but I need the stage width/height for some actions. So something like:

        Action actionSequence = actor.getActions().get(0);
        Array<Action> actions = ((SequenceAction) actionSequence).getActions();
        for(Action act : actions){
            if(act.act(0)) System.out.println("action " + act.toString() + " successfully run");
            else System.out.println("action " + act.toString() + " wasn't instant");
        }

    won't work. It gets even more complicated when an actor can have a repeat action instead of the sequence action (because you have to run the actions that have duration 0 only once, without repeat, and then start the repeat). Any help is appreciated.

    Read the article
