Search Results

Search found 5985 results on 240 pages for 'brian john'.

Page 31/240

  • I've got my Master's in Software Engineering... Now what? [closed]

    - by Brian Driscoll
    Recently I completed a Master of Science in Software Engineering from Drexel University (Philadelphia, PA, US), because I wanted to have some formal education in software (my undergrad is in Math Ed) and also because I wanted to be able to advance my career beyond just programming. Don't get me wrong; I love to code. I spend a lot of my spare time coding. However, for me writing code is just a means to an end: what I REALLY love is designing software. Not visual design, mind you, but the architecture of the system. So, ideally I'd like to try to get a job doing software architecture. The problem is that I have no real experience in it besides my graduate course work. So, what should I do to make my "bones" in software architecture?

    UPDATE: Just so it's clear, I have over 5 years of work experience in software development and an MCTS cert in addition to my education, so I'm not looking for the usual "I'm fresh out of school, what should I do?" advice.

    Read the article

  • My application had a WindowsIdentity crisis

    - by Brian Donahue
    The project I have been working on this week to test computer environments needs to do various actions as a user other than the one running the application. For instance, it looks up an installed Windows Service, finds out who the startup user is, and tries to connect to a database as that Windows user. Later on, it will need to access a file in the context of the currently logged-in user. With ASP.NET, this is super-easy: just go into Web.Config and set up the "identity impersonate" node, which can either impersonate a named user or the one who had logged into the website if authentication was enabled. With Windows applications, this is not so straightforward. There may be something I am overlooking, but the limitation seems to be that you can only change the security context on the current thread: any threads spawned by the impersonated thread also inherit the impersonated credentials. Impersonation is easy enough to do, once you figure out how. Here is my code for impersonating a user on the current thread:

        using System;
        using System.ComponentModel;
        using System.Runtime.InteropServices;
        using System.Security.Principal;

        public class ImpersonateUser
        {
            IntPtr userHandle;

            [DllImport("advapi32.dll", SetLastError = true)]
            static extern bool LogonUser(
                string lpszUsername,
                string lpszDomain,
                string lpszPassword,
                LogonType dwLogonType,
                LogonProvider dwLogonProvider,
                out IntPtr phToken
                );

            [DllImport("kernel32.dll", SetLastError = true)]
            static extern bool CloseHandle(IntPtr hHandle);

            enum LogonType : int
            {
                Interactive = 2,
                Network = 3,
                Batch = 4,
                Service = 5,
                NetworkCleartext = 8,
                NewCredentials = 9,
            }

            enum LogonProvider : int
            {
                Default = 0,
            }

            public static WindowsImpersonationContext Impersonate(string user, string domain, string password)
            {
                IntPtr userHandle = IntPtr.Zero;
                bool loggedOn = LogonUser(
                    user,
                    domain,
                    password,
                    LogonType.Interactive,
                    LogonProvider.Default,
                    out userHandle);

                if (!loggedOn)
                    throw new Win32Exception(Marshal.GetLastWin32Error());

                WindowsIdentity identity = new WindowsIdentity(userHandle);
                WindowsPrincipal principal = new WindowsPrincipal(identity);
                System.Threading.Thread.CurrentPrincipal = principal;
                return identity.Impersonate();
            }
        }

        /* Call impersonation */
        ImpersonateUser.Impersonate("UserName", "DomainName", "Password");

        /* When you want to go back to the original user */
        WindowsIdentity.Impersonate(IntPtr.Zero);

    When you want to stop impersonating, you can call Impersonate() again with a null pointer. This will allow you to simulate a variety of different Windows users from the same application.

    Read the article

  • Layers - Logical separation vs. physical

    - by P.Brian.Mackey
    Some programmers recommend logical separation of layers over physical. For example, given a DL, this means we create a DL namespace, not a DL assembly. Benefits include:
    - faster compilation time
    - simpler deployment
    - faster startup time for your program
    - fewer assemblies to reference

    I'm on a small team of 5 devs, and we have over 50 assemblies to maintain. IMO this ratio is far from ideal. I prefer an extreme programming approach: if 100 assemblies are easier to maintain than 10,000, then 1 assembly must be easier than 100. Given technical limits, we should strive for < 5 assemblies. New assemblies are created out of technical need, not layer requirements.

    Developers are worried for a few reasons.
    A. People like to work in their own environment so they don't step on each other's toes.
    B. Microsoft tends to create new assemblies, e.g. ASP.NET has its own DLL, and so does WinForms.
    C. Devs view this drive for a common assembly as a threat; some team members have a tendency to change the common layer without regard for how it will impact dependencies.

    My personal view: I view A as silos, aka cowboy programming, and suggest we implement branching to create isolation. On C: first, that is a human problem, and we shouldn't create technical workarounds for human behavior; second, my goal is not to put everything in common. Rather, I want partitions to be made in namespaces, not assemblies. Having a shared assembly doesn't make everything common. I want the community to chime in and tell me if I've gone off my rocker. Is a drive for a single assembly, or my viewpoint in general, illogical or otherwise a bad idea?

    Read the article

  • What is the standard for naming variables and why?

    - by P.Brian.Mackey
    I'm going through some training on Objective-C. The trainer suggests single-character parameter names. The .NET developer in me is crying. Is this truly the convention? Why? For example:

        @interface Square : NSObject {
            int size;
        }
        -(void)setSize: (int)s;

    I've seen developers using underscores (int _size) to declare variables (I think people call the variable declared in @interface an ivar, for some unknown reason). Personally, I prefer to use descriptive names, e.g.:

        @interface Square : NSObject {
            int Size;
        }
        -(void)setSize: (int)size;

    C, like C#, is case sensitive. So why don't we use the same convention as .NET?

    Read the article

  • Should I force users to update an application?

    - by Brian Green
    I'm writing an application for a medium-sized company that will be used by about 90% of our employees and our clients. In planning for the future, we decided to add functionality that verifies the running version of the program is one we still support. Currently the application will force-quit if the version is not among our supported versions. Here is my concern. Hypothetically, in version 2.0.0.1, method "A" crashes and burns in glorious fashion while method "B" works just fine. We release 2.0.0.2 to fix method A and deprecate 2.0.0.1. Now if someone is running 2.0.0.1 to use method B, they will be forced to update to fix something that isn't an issue for them right now. My question is: will the time saved not troubleshooting old, unsupported versions outweigh the cost in usability?
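
    One possible shape for this policy, sketched below in Java with hypothetical names (VersionGate, BLOCKED, MIN_SUPPORTED): force-quit only the versions on a known-broken blocklist, and merely warn on versions that are old but not dangerous.

        import java.util.Set;

        public class VersionGate {
            public enum Action { OK, WARN, FORCE_UPDATE }

            // Versions with critical defects (method A's crash): refuse to run.
            private static final Set<String> BLOCKED = Set.of("2.0.0.1");
            // Oldest version still supported at all.
            private static final String MIN_SUPPORTED = "2.0.0.0";

            public static Action check(String runningVersion) {
                if (BLOCKED.contains(runningVersion)) {
                    return Action.FORCE_UPDATE;   // broken build: must update
                }
                // Naive string comparison; real code should compare numeric parts.
                if (runningVersion.compareTo(MIN_SUPPORTED) < 0) {
                    return Action.WARN;           // old but usable: nag, don't quit
                }
                return Action.OK;
            }
        }

    This keeps the support-cost benefit of refusing truly broken builds without forcing an update on a user whose only code path (method B) still works.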

    Read the article

  • Slides and Scripts from SharePoint Cincy 2014

    - by Brian T. Jackett
    Originally posted on: http://geekswithblogs.net/bjackett/archive/2014/06/06/slides-and-scripts-from-sharepoint-cincy-2014.aspx

    I was pleased to present at SharePoint Cincy again for the third year. Geoff and all the organizers do a great job. My presentation this year was "PowerShell for Your SharePoint Tool Belt". Below are my slides and demo scripts. Thanks to all who attended; I hope you found something that will be useful for you in your work.

    Demo PowerShell Scripts
    Slidedeck

    -Frog Out

    Read the article

  • New Workstation – Lenovo W530 Core i7 32GB 256GB SSD Win8Pro

    - by Brian Lanham
    So I pretty much have my new machine up and running full-time. I will still have to hit my old workstation for some things, but am more or less working on my new machine. It's really fast. And Bret was right: so far I'm not using all the RAM. 16 would have been enough, but as @CodeMonkeyJava put it, "go big or go home". Windows 8 is…interesting. So far I still seem to do most of my work in the "Desktop". However, I like the Store concept, I like the Metro UX, and live tiles are also nice. I really like how I can switch between Desktop and Metro easily. Overall I think Microsoft has done a great job of combining the needed experience for touch and mouse. My overall Windows 8 rating is 5.9 because of the video card; otherwise I'm hitting 7.8. The system boots from cold in about 11 seconds, performs a complete shutdown in 4.7 seconds, and wakes from sleep in less than 1 second. VS 2012 starts and restarts almost instantly; in fact, I find myself staring at the start page without realizing it. Build time isn't dramatically reduced, but it is faster. I already feel reinvigorated for work with this new machine, and I'm looking forward to the performance.

    Read the article

  • Rethinking Oracle Optimizer Statistics for P6 Part 2

    - by Brian Diehl
    In the previous post (Part 1), I tried to draw some key insights about the relationship between P6 and Oracle Optimizer Statistics. The first is that average cardinality has the greatest impact on query optimization, and that the particular queries generated by P6 are more likely to use this average during calculations. The second is that these statistics are unlikely to change greatly over the life of the application. Ultimately, our goal is to get the best query optimization possible. Or is it?

    Stability

    No application administrator wants to get the call at 9am that their application users cannot get their work done because everything is running slow. This is a possibility with a regularly scheduled nightly collection of statistics. It may not just be slow performance, but a complete loss of service because one or more queries are optimized poorly. Ideally, this should not be the case: the database optimizer should make better decisions with more up-to-date data. Better statistics may give incremental performance benefit, but this benefit must be balanced against the potential cost of system downtime. It is stability that we ultimately desire, not absolute optimal performance. We do want the benefit from more accurate statistics and better query plans, but not at the risk of an unusable system. As a result, I've developed the following methodology around managing database statistics for the P6 database.

    1. No Automatic Re-Gathering - A daily, weekly, or other interval of statistics gathering is unlikely to be beneficial. Quite the opposite: it is more likely to cause problems.

    2. Smart Re-Gathering - The time to collect statistics is when things have changed significantly. For a new installation of P6, this happens often because the data is growing from a few rows to thousands and more. But for a mature system, the data is not changing significantly from week to week. There are times to collect statistics:
    - New releases of the application
    - Changes in the underlying hardware or software versions (e.g. a new Oracle RDBMS version)
    - When additional user groups are added; the new groups may use the software in significantly different ways
    - After significant changes in the data; this may be monthly, quarterly, or yearly

    3. Always Test - If you take away one thing from this post, it is to always have a plan to test after changing statistics. In reality, statistics can be collected as often as you desire, provided there are tests in place to verify that performance is the same or better. These might be automated tests or simply a manual script of application functions.

    4. Have a Way Out - Never change the statistics without a way to return to the previous set. Think of the statistics as one part of the overall application code, alongside the source code--both application and RDBMS. It would be foolish to change to new code without a way to get back to the previous version.

    In the final post, I will talk about the actual script I created for P6 PMDB and a possible future direction for managing query performance.

    Read the article

  • Dual-booting Ubuntu with Windows 7 on an HP

    - by Brian
    I have an HP e9300z currently running 64-bit Windows 7. I want to dual-boot Ubuntu with Windows 7, but when I run the Ubuntu setup I don't even get the "Install alongside" option, just "Install" or "Something else". Someone in another forum said it might be something in the HP BIOS, but I cannot find anything there to change. So what do I have to do so Ubuntu sees my HDD with Windows? Do I have to make a partition manually? I only have 1 partition, my C: with Windows. Thanks.

    Read the article

  • Help with Meshes and Shading

    - by Brian Diehr
    In a game I'm making in libGDX, I wish to incorporate a ripple effect. I'm confused as to how I get access to the individual pixels on the screen, or any way to influence them (apart from what I can do with SpriteBatch). I have my suspicions that I have to do it through OpenGL, and that it has something to do with applying a mesh. This brings me to my question: what exactly is a mesh? I've been working on my game for about half a year and am doing great with the other aspects, but I find this more advanced stuff isn't as well documented. Thanks!
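
    For what it's worth, a mesh is just a buffer of vertex data (positions, colors, texture coordinates) plus optional indices that OpenGL draws as triangles; SpriteBatch builds and renders one behind the scenes. A common way to get a ripple without reading individual pixels on the CPU is to render the scene into a FrameBuffer and then redraw that texture through a fragment shader that bends the texture coordinates. A rough libGDX sketch (screen size hardcoded to 800x480; attribute/uniform names follow SpriteBatch's default shader):

        import com.badlogic.gdx.graphics.Pixmap;
        import com.badlogic.gdx.graphics.g2d.SpriteBatch;
        import com.badlogic.gdx.graphics.g2d.TextureRegion;
        import com.badlogic.gdx.graphics.glutils.FrameBuffer;
        import com.badlogic.gdx.graphics.glutils.ShaderProgram;

        public class RippleEffect {
            // Vertex shader: pass-through, using SpriteBatch's attribute names.
            private static final String VERT =
                  "attribute vec4 a_position;\n"
                + "attribute vec4 a_color;\n"
                + "attribute vec2 a_texCoord0;\n"
                + "uniform mat4 u_projTrans;\n"
                + "varying vec4 v_color;\n"
                + "varying vec2 v_texCoords;\n"
                + "void main() {\n"
                + "  v_color = a_color;\n"
                + "  v_texCoords = a_texCoord0;\n"
                + "  gl_Position = u_projTrans * a_position;\n"
                + "}";
            // Fragment shader: shift each sample point sideways with a sine wave.
            private static final String FRAG =
                  "#ifdef GL_ES\nprecision mediump float;\n#endif\n"
                + "varying vec4 v_color;\n"
                + "varying vec2 v_texCoords;\n"
                + "uniform sampler2D u_texture;\n"
                + "uniform float u_time;\n"
                + "void main() {\n"
                + "  vec2 uv = v_texCoords;\n"
                + "  uv.x += 0.008 * sin(40.0 * uv.y + 4.0 * u_time);\n"
                + "  gl_FragColor = v_color * texture2D(u_texture, uv);\n"
                + "}";

            // Create these in create()/show(), once a GL context exists.
            private final FrameBuffer fbo = new FrameBuffer(Pixmap.Format.RGBA8888, 800, 480, false);
            private final SpriteBatch batch = new SpriteBatch();
            private final ShaderProgram shader = new ShaderProgram(VERT, FRAG);
            private float time;

            public void render(float delta, Runnable drawScene) {
                time += delta;
                fbo.begin();
                drawScene.run();              // draw the scene normally, into the FBO
                fbo.end();

                TextureRegion region = new TextureRegion(fbo.getColorBufferTexture());
                region.flip(false, true);     // FBO textures come out upside down

                batch.setShader(shader);      // apply the ripple to the whole screen
                batch.begin();
                shader.setUniformf("u_time", time);
                batch.draw(region, 0, 0, 800, 480);
                batch.end();
                batch.setShader(null);        // restore the default shader
            }
        }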

    Read the article

  • AngularJS dealing with large data sets (Strategy)

    - by Brian
    I am working on a personal temperature-logging viewer based on my Raspberry Pi curl'ing data into my web server's API. Temperatures are taken every 2 seconds, and I can have several temperature sensors posting data, so needless to say I will have a lot of data to handle even within the scope of an hour. I have implemented a very simple paging API on the server so it doesn't time out; it currently returns data in units of 1000 per call and pages through the rest. My idea was to initially show, say, the last 20 minutes of data from a sensor (or all sensors, depending on user choices), then allow the user to select other timeframes to display. The issue comes in when you want to view all sensors or an extended time period (say 24 hours). Is there a best practice for handling this large amount of data? Would it be useful to load those first 20 minutes into the live view and then cache something like the last 24 hours into local storage? I haven't been able to find a decent example of this in use yet, even though there are a lot of ways to approach the problem. I am just looking for suggestions as to what might provide a good balance between good performance and not caching the entire data set on the client side (as beyond a week of data this might not be feasible).

    Read the article

  • Get/Post Controller Logic Best Practice

    - by Brian Mains
    In an ASP.NET MVC project (Razor), I have a GET request which loads one of two properties on a model, depending on the parameter passed into the action method: if the parameter has a value, the Group property is populated; if not, the Groups collection property is populated. In the POST action method, when I process the data and repopulate the view, I have to provide similar logic, and could get away with returning Action(param) (the GET response) to the caller. My question is, based on experience, is that a good practice to get into? I see some downsides to doing it, but it avoids redundant code. Or is there a better alternative?
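
    The usual alternative to returning the GET action from the POST is to pull the shared population logic into a private helper that both actions call. The question is ASP.NET MVC, but the shape is framework-agnostic; here it is sketched in Spring MVC terms with hypothetical names (GroupController, GroupForm, and the stubbed loaders):

        import org.springframework.stereotype.Controller;
        import org.springframework.ui.Model;
        import org.springframework.web.bind.annotation.*;

        @Controller
        public class GroupController {

            @GetMapping("/groups")
            public String show(@RequestParam(required = false) String id, Model model) {
                populate(model, id);
                return "groups";
            }

            @PostMapping("/groups")
            public String save(@ModelAttribute GroupForm form, Model model) {
                // ... process the posted data ...
                populate(model, form.getId()); // shared logic, not a second GET round trip
                return "groups";
            }

            // One place owns the "id present -> Group, otherwise -> Groups" branching.
            private void populate(Model model, String id) {
                if (id != null) {
                    model.addAttribute("group", loadGroup(id));
                } else {
                    model.addAttribute("groups", loadGroups());
                }
            }

            private Object loadGroup(String id) { return id; }          // stub
            private Object loadGroups() { return java.util.List.of(); } // stub

            public static class GroupForm {
                private String id;
                public String getId() { return id; }
                public void setId(String id) { this.id = id; }
            }
        }

    Compared with returning the GET action from the POST, the helper removes the redundancy while keeping each action's own responsibilities (and any POST-specific view state) explicit.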

    Read the article

  • SSD and HDD have Windows 7 recovery partitions. Can I delete one to make room for Ubuntu?

    - by Brian Ecker
    I'm trying to install Ubuntu right now, and I've run into a problem. I have Windows 7 installed on my SSD, and I want to install Ubuntu on my HDD, but I already have three partitions on my HDD: two recovery partitions and one data partition. What I don't understand is why my data drive (the HDD) has recovery partitions for Windows 7. The same recovery partitions (or at least I think they are the same: same sizes, same names, same order) are on the SSD with the Windows 7 install. Can I safely delete the recovery partitions on the HDD? My other option, I think, is to put the boot partition for Ubuntu on the SSD, where I only have three partitions, and then put the other three logical partitions for Ubuntu in an extended partition on the HDD. Can I do that - put the boot partition on one drive and the other partitions on another? Here is a picture of the partitions; I have circled the one I would like to delete to make room: http://imgur.com/XOpJQ

    Read the article

  • Event-Driven Debugging

    - by Brian Donahue
    Most application troubleshooting involves getting an error, analyzing the error message, and at worst, attaching a debugger to work out the real cause. What is not really covered is how to troubleshoot an application that is not errant, but is having a performance issue - and more than likely, in the middle of the night when you are snug in your bed, sawing logs. What you need is an ever-vigilant cyborg who never sleeps to sit in front of your server all night, but as SkyNet is not live yet, you can settle for the next-best thing. Windows provides performance counters and alerts that can tell you when an application reaches an unacceptable threshold of naughty behavior. But although it can tattle on your brainchild, it won't be the child psychiatrist that you need to tell you why he's pulling your server's pigtails and pulling faces at the teacher. What you need is to plug a debugger into Performance Monitor and have it tell you what's going on with your application at the time.

    For this purpose, I used Microsoft's MDbgEngine as the basis for an application that will dump a program's stacks. I call it Application Slicer Dicer Wonder Dumper Super Cyborg, or StackOMatic for short. StackOMatic can look at a program's behavior and tell you if the stacks are not moving, but it can also work on the command line to dump all managed methods on the stack at will. Now that there is a command you can use to dump the stacks, all you need to do is politely tell Windows to run it when you're displeased with your creation as it's trashing the CPU of your server at 3 AM.

    The first step is to create a scheduled task to tell StackOMatic to dump your application. Start Task Scheduler, right-click Task Scheduler Library, and then choose Create Task. For this exercise I'm creating a task that will dump the Red Gate SQL Monitor Base Monitor Service. In the Actions tab, I enter the path to StackOMatic and use the arguments to log the stack dump to a file:

        /PN:RedGate.Response.Engine.Alerting.Base.Service /OUT:c:\users\administrator\MonitorLog.txt

    Next, I go into Windows Server 2008's Reliability and Performance Monitor and add a new Data Collector Set. This set will produce an alert on the %Processor Time for the service. When the processor time breaches 50%, it will run the StackDumpBaseService task I created. Whenever the service misbehaves, it will append to the log file. Now when I go to work in the morning, I can see what the service was doing when it overloaded the processor and take action.

    Read the article

  • SQL: Why is prefixing column names considered bad practice?

    - by P.Brian.Mackey
    According to a popular SO post, it is considered bad practice to prefix column names with the table name. At my company every column is prefixed by its table name. This is difficult for me to read. I'm not sure of the reason, but this naming is actually the company standard. I can't stand the naming convention, but I have no documentation to back up my reasoning. All I know is that reading AdventureWorks is much simpler. In our company DB you will see a table Person, and it might have a column named Person_First_Name, or maybe even Person_Person_First_Name (don't ask me why you see Person twice). Why is it considered a bad practice to prefix column names? Are underscores considered evil in SQL as well? Note: I own Pro SQL Server 2008 Relational Database Design and Implementation. References to that book are welcome.

    Read the article

  • What is the most secure environment for multiple CMS sites? [closed]

    - by Brian Gulino
    I wish to run about 50 low-traffic Joomla or WordPress websites on 1 server, or part of a server. Each website will be managed by its own, naive owner, who will be able to access the Joomla or WordPress backend of that website. I am concerned about security and isolation, as my users will periodically get into trouble by not protecting their sites properly. Two alternatives I know of exist:
    - Run one Linux system with multiple websites under Apache. Follow current Joomla and WordPress security tips, and increase the isolation of the individual sites by using mpm-itk, which allows each website to run as its own user.
    - Run virtualization software such as the Xen hypervisor, so that each site has its own virtual Linux system.
    I lack the experience needed to make this decision, and I am asking which path to take. Obviously, there may be other alternatives that I haven't considered.

    Read the article

  • PMDB Block Size Choice

    - by Brian Diehl
    Choosing a block size for the P6 PMDB database is not a difficult task. In fact, taking the default of 8k is going to be just fine. Block size is one of those things that is always hotly debated. Everyone has their personal preference and can cite plenty of good reasons for their choice. To add to the confusion, Oracle supports multiple block sizes within the same instance. So how to decide, and what is the justification?

    Like most OLTP systems, Oracle Primavera P6 has a wide variety of data. A typical table's average row size may be less than 50 bytes or upwards of 500 bytes. There are also several tables with BLOB types, but the LOB data tends not to be very large. It is likely that no single block size would be perfect for every table. So how to choose?

    My preference is for the 8k (8192 bytes) block size. It is a good compromise that is not too small for the wider rows, yet not too big for the thin rows. It is also important to remember that database blocks are the smallest unit of change and caching, and I prefer to have more individual "working units" in my database. For an instance with 4gb of buffer cache, an 8k block will provide 524,288 blocks of cache.

    The following SQL*Plus script returns the average, median, min, and max rows per block:

        column "AVG(CNT)" format 999.99
        set verify off
        select avg(cnt), median(cnt), min(cnt), max(cnt), count(*)
        from (
            select dbms_rowid.ROWID_RELATIVE_FNO(rowid)
                 , dbms_rowid.ROWID_BLOCK_NUMBER(rowid)
                 , count(*) cnt
            from &tab
            group by dbms_rowid.ROWID_RELATIVE_FNO(rowid)
                   , dbms_rowid.ROWID_BLOCK_NUMBER(rowid)
        )

    Running this for the TASK table, I get this result on a database with an 8k block size. On average, there are about 19 activity rows per block.

        Enter value for tab: task

        AVG(CNT) MEDIAN(CNT)   MIN(CNT)   MAX(CNT)   COUNT(*)
        -------- ----------- ---------- ---------- ----------
           18.72          19          3         28     415917

    I recommend an 8k block size for the P6 transactional database. All of our internal performance and scalability tests are done with this block size. This does not mean that other block sizes will not work. Instead, like many other parameters, this is the safest choice.

    Read the article

  • (Phaser) Preload Future States in Create?

    - by Brian
    I'm a first-time user of Phaser and have been trying to make a simple point-and-click game. I'm trying to keep things very modular, so I'm defining a list of levels (states) in a JSON file, and every level has its own JSON file containing the objects within that level. However, I'm encountering an issue: when changing states, I get a black flash while the assets for the next state load (this happens whether I iterate through the JSON list or define everything manually). From what I've read, all sprites should be loaded in the preload stage; however, doing this causes that tiny but noticeable black pause. I know one way would be to simply load every asset at the start of the game, but that seems incredibly inefficient (wouldn't that fill up memory immensely?). I would rather load a state's assets from the "parent" state. However, in my quick test (which maybe I did wrong), it seems that game.load doesn't work properly if done within the create stage. What is the best approach to doing this?

    Read the article

  • Backtracking Problem

    - by Joshua Green
    Could someone please guide me through backtracking in Prolog-C using the simple example below? Here is my Prolog file:

        likes( john, mary ).
        likes( john, emma ).
        likes( john, ashley ).

    Here is my C file:

        #include...

        term_t tx;
        term_t tv;
        term_t goal_term;
        functor_t goal_functor;

        int main( int argc, char** argv )
        {
            argv[0] = "libpl.dll";
            PL_initialise( argc, argv );
            PlCall( "consult( swi( 'plwin.rc' ) )" );
            PlCall( "consult( 'likes.pl' )" );

            tv = PL_new_term_ref( );
            PL_put_atom_chars( tv, "john" );
            tx = PL_new_term_ref( );

            goal_term = PL_new_term_ref( );
            goal_functor = PL_new_functor( PL_new_atom( "likes" ), 2 );
            PL_cons_functor( goal_term, goal_functor, tv, tx );

            PlQuery q( "likes", ??? );
            while ( q.next_solution( ) )
            {
                char* solution;
                PL_get_atom_chars( tx, &solution );
                cout << solution << endl;
            }
            PL_halt( PL_toplevel() ? 0 : 1 );
        }

    What should I replace ??? with? Or is this the right approach to get all the backtracking results generated and printed? Thank you,

    Read the article

  • Remove duplicates from a sorted ArrayList while keeping some elements from the duplicates

    - by js82
    Okay, at first I thought this would be pretty straightforward, but I can't think of an efficient way to solve it. I figured out a brute-force way, but that's not very elegant. I have an ArrayList<Contacts>. Contacts is a VO class that has multiple members: name, region, id. There are duplicates in the ArrayList because different regions appear multiple times, and the list is sorted by ID. Here is an example:

        Entry 0 - Name: John Smith; Region: N; ID: 1
        Entry 1 - Name: John Smith; Region: MW; ID: 1
        Entry 2 - Name: John Smith; Region: S; ID: 1
        Entry 3 - Name: Jane Doe; Region: NULL; ID: 2
        Entry 4 - Name: Jack Black; Region: N; ID: 3
        Entry 6 - Name: Jack Black; Region: MW; ID: 3
        Entry 7 - Name: Joe Don; Region: NE; ID: 4

    I want to transform the list by combining duplicate regions together for the same ID, so the final list should have only 4 distinct elements, with the regions combined. The output should look like this:

        Entry 0 - Name: John Smith; Region: N,MW,S; ID: 1
        Entry 1 - Name: Jane Doe; Region: NULL; ID: 2
        Entry 2 - Name: Jack Black; Region: N,MW; ID: 3
        Entry 3 - Name: Joe Don; Region: NE; ID: 4

    What are your thoughts on the optimal way to solve this? I am not looking for actual code, but ideas or tips on the best way to get it done. Thanks for your time!
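
    The asker wanted ideas rather than code, but the single-pass idea is easiest to see sketched. A minimal Java sketch, assuming a Contact VO with id, name, and region members (field and accessor details are hypothetical); because the list is sorted by ID, each duplicate sits next to the entry it merges into, so one linear pass suffices:

        import java.util.ArrayList;
        import java.util.List;

        public class ContactMerger {
            // Minimal stand-in for the VO in the question.
            public static class Contact {
                int id; String name; String region;
                Contact(int id, String name, String region) {
                    this.id = id; this.name = name; this.region = region;
                }
            }

            public static List<Contact> mergeRegions(List<Contact> sortedById) {
                List<Contact> merged = new ArrayList<>();
                Contact prev = null;
                for (Contact c : sortedById) {
                    if (prev != null && prev.id == c.id) {
                        // Same person: append this row's region to the kept entry
                        // (note this mutates the kept Contact in place).
                        prev.region = prev.region + "," + c.region;
                    } else {
                        merged.add(c);
                        prev = c;
                    }
                }
                return merged;
            }
        }

    This is O(n) with no nested scan; an unsorted list would instead call for a LinkedHashMap keyed by ID.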

    Read the article

  • Variable number of two-dimensional arrays into one big array

    - by qlb
    I have a variable number of two-dimensional arrays. The first dimension is variable; the second dimension is constant. i.e.:

        Object[][] array0 = {
            {"tim", "sanchez", 34, 175.5, "bla", "blub", "[email protected]"},
            {"alice", "smith", 42, 160.0, "la", "bub", "[email protected]"},
            ...
        };

        Object[][] array1 = {
            {"john", "sdfs", 34, 15.5, "a", "bl", "[email protected]"},
            {"joe", "smith", 42, 16.0, "a", "bub", "[email protected]"},
            ...
        };

        ...

        Object[][] arrayi = ...

    I'm generating these arrays with a for-loop:

        for (int i = 0; i < filter.length; i++) {
            MyClass c = new MyClass(filter[i]);
            //data = c.getData();
        }

    where "filter" is another array filled with information that tells MyClass how to fill the arrays, and getData() gives back one of the i arrays. Now I just need to have everything in one big two-dimensional array, i.e.:

        Object[][] arrayComplete = {
            {"tim", "sanchez", 34, 175.5, "bla", "blub", "[email protected]"},
            {"alice", "smith", 42, 160.0, "la", "bub", "[email protected]"},
            ...
            {"john", "sdfs", 34, 15.5, "a", "bl", "[email protected]"},
            {"joe", "smith", 42, 16.0, "a", "bub", "[email protected]"},
            ...
        };

    In the end, I need a 2D array to feed my Swing TableModel. Any idea on how to accomplish this? It's blowing my mind right now.
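
    One way to do it, sketched below: collect every getData() result into a single List<Object[]>, then convert back to Object[][] for the TableModel. MyClass and filter are as in the question (filter's element type is assumed here to be Object); only the collecting is new.

        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.List;

        static Object[][] collectRows(Object[] filter) {
            List<Object[]> rows = new ArrayList<>();
            for (int i = 0; i < filter.length; i++) {
                MyClass c = new MyClass(filter[i]);
                Collections.addAll(rows, c.getData()); // getData() returns Object[][]
            }
            // A typed zero-length seed array yields the Object[][] Swing wants.
            return rows.toArray(new Object[0][]);
        }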

    Read the article

  • Excel VBA Text To Column

    - by Pat
    This is what I currently have:

        H101 John Doe Jane Doe Jack Doe
        H102 John Smith Jane Smith Katie Smith Jack Smith

    And here is what I want:

        H101 John Doe
        H101 Jane Doe
        H101 Jack Doe
        H102 John Smith
        H102 Jane Smith
        H102 Katie Smith
        H102 Jack Smith

    Obviously I want to do this on a bigger scale. The number of columns is between 1 and 6, so I can't limit it that way. I was able to get a script that puts each individual on their own row; however, I am having a hard time getting the first column to copy over to each row.

        Sub ToOneColumn()
            Dim i As Long, k As Long, j As Integer
            Application.ScreenUpdating = False
            Columns(2).Insert
            i = 0
            k = 1
            While Not IsEmpty(Cells(k, 3))
                j = 3
                While Not IsEmpty(Cells(k, j))
                    i = i + 1
                    Cells(i, 1) = Cells(k, 1) ' CODE IN QUESTION
                    Cells(i, 2) = Cells(k, j)
                    Cells(k, j).Clear
                    j = j + 1
                Wend
                k = k + 1
            Wend
            Application.ScreenUpdating = True
        End Sub

    Like I said, it was working fine to get everyone on their own row, but I can't figure out how to get that first column. It seems like it should be so simple, but it's bugging me. Any help is greatly appreciated.

    Read the article

  • When does it make sense to use a map?

    - by kiwicptn
    I am trying to round up cases when it makes sense to use a map (a set of key-value entries). So far I have five categories (see below). Assuming more exist, what are they? Please limit each answer to one unique category, put up an example, and vote up the fascinating ones.

    Property values (like a bean):
        age -> 30
        sex -> male
        loc -> calgary

    Histograms:
        peter -> 1
        john -> 7
        paul -> 5

    Presence, with O(1) performance:
        peter -> 1
        john -> 1
        paul -> 1

    Functions:
        peter -> treatPeter()
        john -> dealWithJohn()
        paul -> managePaul()

    Conversion:
        peter -> pierre
        john -> jean
        paul -> paul
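
    Two of these categories, sketched in Java for concreteness (the keys are the ones from the examples above): a histogram built with merge(), and a function table dispatching behavior by key.

        import java.util.HashMap;
        import java.util.Map;

        public class MapExamples {
            public static void main(String[] args) {
                // Histogram: count occurrences per key.
                Map<String, Integer> counts = new HashMap<>();
                for (String name : new String[] { "peter", "john", "john", "paul" }) {
                    counts.merge(name, 1, Integer::sum);
                }
                System.out.println(counts); // e.g. {peter=1, paul=1, john=2}

                // Functions: key -> behavior, instead of a switch statement.
                Map<String, Runnable> handlers = new HashMap<>();
                handlers.put("peter", () -> System.out.println("treatPeter()"));
                handlers.put("john", () -> System.out.println("dealWithJohn()"));
                handlers.put("paul", () -> System.out.println("managePaul()"));
                handlers.getOrDefault("john", () -> {}).run();
            }
        }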

    Read the article

  • jQuery: Create hidden attributes? I need to reduce tag bulkiness.

    - by Dan
    I'm sure we've all done this before:

        <a id="232" rel="link_to_user" name="user_details" class="list hoverable clickable selectable">USER #232</a>

    But then we say, oh my, I need more ways to store tracking info about this div!

        <a id="232-343-22" rel="link_to_user:fotomeshed" name="user_details" class="groupcolor-45 elements-698 list hoverable clickable selectable">User: John Doe</a>

    And the sickness keeps growing. We just keep packing it inside that poor little element and its attributes, all so we can keep track of who it is. So with my limited knowledge of JS, someone please tell me how to do something like this:

        <a id="33">USER #33</a>

        $('#33').attr({title: 'User Record', username: 'john', group_color: 'green', element_num: 78});

    Here we just added what I would call invisible attributes, because we played God and made those attributes up on the fly like it was no problem. The cool part is that these would be kept in their own little object somewhere in variable land, NOT in the tag itself. Then later on, in code nested far far away, we could say, oh, I wonder what group_color John is...

        user_group_color = $(table).find(a['username':'john']).attr('group_color');

    THEN BAM!!!! POW!!!!

        alert(user_group_color + " is a bitchin color!");

    You get to know his group color... all without adding a bunch of bloated element-tracking nonsense into our tags. So does this sort of thing exist? If not, how do I make it?

    Read the article

  • Graph-structured databases and PHP

    - by stagas
    I want to use a graph database with PHP. Can you point out some resources on where to get started? Is there any example code / tutorial out there? Or are there any other methods of storing data that relate to each other in totally random/abstract situations?

    Very abstract example of the relations needed:

        John relates to Mary, both relate to School
        John is Tall, Mary is Short
        John has Blue Eyes, Mary has Green Eyes

    Query I want: which people are related to 'Short people that have Green Eyes and go to School' - answer: John.

    Another example (relations as implied by the queries below):

        TrackA -> ArtistA
        TrackA -> ArtistB
        AlbumA -----> [ label A ]
        AlbumB -----> [ label A ]
        TrackA:Remix -> Genre:House
        [ Album C ] -----> [ label B ]
        TrackA -> [ Album C ]
        TrackB -> [ Album C ]

    Example queries:

        Which Genre is TrackB closer to? Answer: House - because it's related to Album C, which is related to TrackA, which is related to Genre:House.
        Get all Genre:House related albums of Label A. Result: AlbumA, AlbumB - because they both have TrackA, which is related to Genre:House.

    It is possible in MySQL, but it would require a fixed set of attributes/columns for each item and a complex, non-flexible query. Instead I need every attribute to be an item by itself and, instead of 'belonging' to something, to be 'related' to something.

    Read the article
