Search Results

Search found 4466 results on 179 pages for 'red potato'.


  • Why does my OpenGL ES Android testbed app not render anything besides a red screen?

    - by nathan
    For some reason my code here (this is the entire thing) doesn't actually render anything besides a red screen. Can anyone tell me why?

        package com.ntu.way2fungames.earth.testbed;

        import java.nio.FloatBuffer;

        import javax.microedition.khronos.egl.EGLConfig;
        import javax.microedition.khronos.opengles.GL10;

        import android.app.Activity;
        import android.content.Context;
        import android.opengl.GLSurfaceView;
        import android.opengl.GLSurfaceView.Renderer;
        import android.os.Bundle;

        public class projectiles extends Activity {

            GLSurfaceView lGLView;
            Renderer lGLRenderer;

            float projectilesX[] = new float[5001];
            float projectilesY[] = new float[5001];
            float projectilesXa[] = new float[5001];
            float projectilesYa[] = new float[5001];
            float projectilesTheta[] = new float[5001];
            float projectilesSpeed[] = new float[5001];

            private static FloatBuffer drawBuffer;

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                SetupProjectiles();
                Context mContext = this.getWindow().getContext();
                lGLView = new MyView(mContext);
                lGLRenderer = new MyRenderer();
                lGLView.setRenderer(lGLRenderer);
                setContentView(lGLView);
            }

            private void SetupProjectiles() {
                int i = 0;
                for (i = 5000; i > 0; i = i - 1) {
                    projectilesX[i] = 240;
                    projectilesY[i] = 427;
                    float theta = (float) ((i / 5000) * Math.PI * 2);
                    projectilesXa[i] = (float) Math.cos(theta);
                    projectilesYa[i] = (float) Math.sin(theta);
                    projectilesTheta[i] = theta;
                    projectilesSpeed[i] = (float) (Math.random() + 1);
                }
            }

            public class MyView extends GLSurfaceView {
                public MyView(Context context) {
                    super(context);
                    // TODO Auto-generated constructor stub
                }
            }

            public class MyRenderer implements Renderer {

                private float[] projectilecords = new float[] {
                    .0f, .5f, 0,
                    -.5f, 0f, 0,
                    .5f, 0f, 0,
                    0, -5f, 0,
                };

                @Override
                public void onDrawFrame(GL10 gl) {
                    gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
                    gl.glMatrixMode(GL10.GL_MODELVIEW);
                    //gl.glLoadIdentity();
                    gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);

                    for (int i = 5000; i > 4500; i = i - 1) {
                        //drawing section
                        gl.glLoadIdentity();
                        gl.glColor4f(.9f, .9f, .9f, .9f);
                        gl.glTranslatef(projectilesY[i], projectilesX[i], 1);
                        gl.glVertexPointer(3, GL10.GL_FLOAT, 0, drawBuffer);
                        gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 12);

                        //physics section
                        projectilesX[i] = projectilesX[i] + projectilesXa[i];
                        projectilesY[i] = projectilesY[i] + projectilesYa[i];
                    }

                    gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
                }

                @Override
                public void onSurfaceChanged(GL10 gl, int width, int height) {
                    if (height == 0) height = 1;

                    // draw on the entire screen
                    gl.glViewport(0, 0, width, height);
                    // setup projection matrix
                    gl.glMatrixMode(GL10.GL_PROJECTION);
                    gl.glLoadIdentity();
                    gl.glOrthof(0, width, height, 0, -100, 100);
                }

                @Override
                public void onSurfaceCreated(GL10 gl, EGLConfig arg1) {
                    gl.glShadeModel(GL10.GL_SMOOTH);
                    gl.glClearColor(1f, .01f, .01f, 1f);
                    gl.glClearDepthf(1.0f);
                    gl.glEnable(GL10.GL_DEPTH_TEST);
                    gl.glDepthFunc(GL10.GL_LEQUAL);
                    gl.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT, GL10.GL_NICEST);
                    drawBuffer = FloatBuffer.wrap(projectilecords);
                }
            }
        }
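
    Two likely culprits, offered as guesses rather than a confirmed diagnosis: the vertex array holds only 4 vertices (12 floats) yet glDrawArrays is asked for 12 vertices, and FloatBuffer.wrap() does not create the direct, native-ordered buffer that OpenGL ES on Android generally requires. A minimal sketch of the buffer setup and draw call under those assumptions (helper name is invented; needs java.nio.ByteBuffer and java.nio.ByteOrder imports):

        // Hypothetical sketch, not the poster's confirmed fix.
        private static FloatBuffer makeDirectBuffer(float[] cords) {
            ByteBuffer bb = ByteBuffer.allocateDirect(cords.length * 4); // 4 bytes per float
            bb.order(ByteOrder.nativeOrder());                           // native byte order required by GL ES
            FloatBuffer fb = bb.asFloatBuffer();
            fb.put(cords);
            fb.position(0);
            return fb;
        }

        // in onSurfaceCreated:
        //     drawBuffer = makeDirectBuffer(projectilecords);
        // in onDrawFrame, pass the vertex count (4), not the float count (12):
        //     gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 4);
        // and, since GL_DEPTH_TEST is enabled, also clear GL_DEPTH_BUFFER_BIT each frame.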

    Read the article

  • Case study: AT&T CFO looks to technology to transform Global Finance

    - by RED League Heroes-Oracle
    AT&T is one of the few modern multinationals that has taken part in every earlier stage of telecommunications innovation, from Alexander Graham Bell as a lone inventor, to Bell Labs, to the accelerated launches coming out of AT&T's foundries. Technology is at the heart of everything AT&T does, including its investments in technological innovation that allow AT&T Finance to work more strategically with the business to make sure that investments in growth initiatives succeed.

    According to John Stephens, Executive Vice President and Chief Financial Officer of AT&T, the company has laid out a three-year investment plan to improve and expand its wired and wireless IP broadband networks. The plan includes rolling out 4G LTE service to 300 million people in the United States, expanding high-speed IP broadband to some 57 million customer homes, and extending fiber to an additional 1 million business customers in its fixed-line service area. "The need for speed is greater than ever, and this project is our step toward the innovation that delivers that speed," says Stephens.

    As AT&T modernizes its global infrastructure, its operational processes are becoming as powerful as its network. It has been a large and complex undertaking, but Stephens is pleased to say that AT&T's finance department has embraced its role as a corporate catalyst. It started with a simple concept: "Let's get everyone speaking the same language." That led to the consolidation of financial systems inherited from acquired companies. It was no small task, given that the company has been through more than five major acquisitions and countless other transactions. In 2007, AT&T had 17 applications in the accounts payable function alone. Today that number is down to two. Likewise, there were once 50 official management reporting systems; now there are three, with plans to retire one of them.

    By having a single Finance language across the entire company, AT&T's finance team has eliminated multiple versions of the same data, reducing potential confusion in business strategy discussions and decisions. These steps have also cut costs and sped up decision-making. "The beauty of the systems is that they let talented people with analytical skills spend their time in that area, instead of spending it on collecting, aggregating, and organizing data," Stephens notes. "We have an efficient and effective process that frees people up to focus on what they are really good at. And we have a high-quality team, and they are at their best when they are able to do their job in support of the business unit."

    Read the article

  • Databinding: is using formulas for unusual bindings possible?

    - by Rattenmann
    Edit: added info that WPF is being used.

    I am trying to bind a list of custom objects to a DataGrid. Straight binding seems easy enough, but I need to specify some complex formulas for some extra fields that do not directly show up in my class. Also, I want to be able to EDIT the data in the grid and get updates on related fields. Let me show you an example, because it is really hard to explain. I will simplify it to rooms with items. Each item can be red and blue. My class looks like this:

        public class room
        {
            public string strRoomName { set; get; }
            public string strItemname { set; get; }
            public int intRedItem { set; get; }
            public int intBlueItem { set; get; }
        }

    Now if I use dataTable.ItemSource = myList; I get something like this:

        nr. | room          | name  | red | blue
        1.  | living room   | ball  | 2   | 1
        2.  | sleeping room | bunny | 4   | 1
        3.  | living room   | chair | 3   | 2
        4.  | kitchen       | ball  | 4   | 7
        5.  | garage        | chair | 1   | 4

    Now for the complex part I need help with. I want every item to have the same number of red and blue. And because this does not hold true, I want to see the "imbalance" per room AND globally, like this:

        nr. | room          | name  | red | blue | missing | global red | global blue | global missing
        1.  | living room   | ball  | 2   | 1    | 1 blue  | 6          | 7           | 1 red
        2.  | sleeping room | bunny | 4   | 1    | 3 blue  | 4          | 1           | 3 blue
        3.  | living room   | chair | 3   | 2    | 1 blue  | 4          | 6           | 2 red
        4.  | kitchen       | ball  | 4   | 7    | 3 red   | 6          | 7           | 1 red
        5.  | garage        | chair | 1   | 4    | 3 red   | 4          | 6           | 2 red

    As you can see, this smells like Excel formulas; I am unsure how to handle this in C# code, however. You can also see that I need to use data in the same row, but also get data from other rows that match one property (the item's name). Also, if I change the blue value=1 in line 1 to value=2, I want line 1 to read like this:

        1.  | living room   | ball  | 2   | 2    | even    | 6          | 8           | 2 red

    and of course line 4 needs to change to:

        4.  | kitchen       | ball  | 4   | 7    | 3 red   | 6          | 8           | 2 red

    As I said, this smells like Excel; that's why I am really upset about myself for not finding an easy solution. Surely C# offers some way to handle this stuff, right?

    Disclaimer: It is totally possible that I need a completely different approach; pointing that out to me is perfectly fine. Be it other ways to handle this, or a better way to structure my class, I am OK with every way to handle this, as it is for learning purposes. I am simply doing programs for fun next to my college and just so happen to hit these kinds of things that bug me out because I don't find a clean solution. And then I neglect my studies because I want to solve my (unrelated to my studies, ...) issue. Just can't stand having unsolved coding stuff around, don't judge me! ;-) And big thanks in advance if you have gotten this far in my post. It sure must be confusing with all those reds and blues.

    Edit: After reading through your answers and testing my skills to implement your hints, I now have the following code as my class:

        public class RoomList : ObservableCollection<room>
        {
            public RoomList() : base()
            {
                Add(new room() { strRoomName = "living room",   strItemname = "ball",  intRedItem = 2, intBlueItem = 1 });
                Add(new room() { strRoomName = "sleeping room", strItemname = "bunny", intRedItem = 4, intBlueItem = 1 });
                Add(new room() { strRoomName = "living room",   strItemname = "chair", intRedItem = 3, intBlueItem = 2 });
                Add(new room() { strRoomName = "kitchen",       strItemname = "ball",  intRedItem = 4, intBlueItem = 7 });
                Add(new room() { strRoomName = "garage",        strItemname = "chair", intRedItem = 1, intBlueItem = 4 });
            }
        }

        //rooms
        public class room : INotifyPropertyChanged
        {
            public string strRoomName { set; get; }
            public string strItemname { set; get; }

            public int intRedItem
            {
                get { return intRedItem; }
                set
                {
                    intRedItem = value;
                    NotifyPropertyChanged("intRedItem", "strMissing");
                }
            }

            public int intBlueItem
            {
                get { return intBlueItem; }
                set
                {
                    intBlueItem = value;
                    NotifyPropertyChanged("intBlueItem", "strMissing");
                }
            }

            public string strMissing
            {
                get
                {
                    int missingCount = intRedItem - intBlueItem;
                    return missingCount == 0 ? "Even" : missingCount.ToString();
                }
            }

            public event PropertyChangedEventHandler PropertyChanged;

            public void NotifyPropertyChanged(params string[] propertyNames)
            {
                if (PropertyChanged != null)
                {
                    foreach (string propertyName in propertyNames)
                    {
                        PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
                    }
                }
            }
        }

    I got the "missing" field working right away, thanks a lot for that tip. It really was as easy as I imagined and will be of great use for future projects. Still two (three maybe...) things I am missing, though.

    The above code terminates with a "System.StackOverflowException" in the setter of intRedItem and intBlueItem. I fail to see the error; that could be due to it being 4:30 am here, or my lack of understanding.

    Second issue: I followed the link to ObservableCollections, as you can see from my code above. Yet I am unsure how to actually use that collection. Setting it as the DataContext like suggested on that page shows a missing resource. Adding it as a resource like listed there crashes my VS Express designer and leads to the program not starting. So for now I am still using my old approach of a list like this:

        listRooms.Add(new room() { strRoomName = "living room",   strItemname = "ball",  intRedItem = 2, intBlueItem = 1 });
        listRooms.Add(new room() { strRoomName = "sleeping room", strItemname = "bunny", intRedItem = 4, intBlueItem = 1 });
        listRooms.Add(new room() { strRoomName = "living room",   strItemname = "chair", intRedItem = 3, intBlueItem = 2 });
        listRooms.Add(new room() { strRoomName = "kitchen",       strItemname = "ball",  intRedItem = 4, intBlueItem = 7 });
        listRooms.Add(new room() { strRoomName = "garage",        strItemname = "chair", intRedItem = 1, intBlueItem = 4 });

        datagridRooms.ItemsSource = listRooms;

    And lastly: when testing, before adding the notify events, I tried to implement a property that looped through the other objects, without any luck. The "missing" property worked so easily, yet it only accesses "its own" properties, kind of. I need to access other objects, like "all objects that have the same room value". The idea behind this is that I am trying to calculate a value from other objects without even having those objects yet, at least in my logic. Where is the flaw in my thinking? Those 5 objects are added and created (?) one after another.

    So if the first tries to set its "all red balls in my room AND all other rooms" value, how could it know about the balls in the kitchen, which only gets added as the 4th object? So far so good though, I got on the right track I think. Just need some sleep first.
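
    On the StackOverflowException specifically, a minimal sketch of the usual pattern (the backing-field name below is invented, not from the original post): a property whose getter and setter read and write the property itself calls itself forever, so each notifying property needs a separate private field.

        // Sketch only -- apply the same change to intBlueItem.
        private int _intRedItem;
        public int intRedItem
        {
            get { return _intRedItem; }          // read the field, not the property
            set
            {
                _intRedItem = value;             // write the field, not the property
                NotifyPropertyChanged("intRedItem", "strMissing");
            }
        }

    With backing fields in place the setters no longer recurse, and the rest of the notification code can stay as it is.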

    Read the article

  • Validation functions in Google Apps Script are not working properly

    - by chocka
    I created an input form on a Google Site using Apps Script, and I did the validation in Apps Script code. The validation functions available in Apps Script are not covering every error case I can think of.

        function validate(e) {
          var app = UiApp.getActiveApplication();
          var flag = 0;
          var text = app.getElementById('name');
          var textrequired = app.getElementById('namerequired');
          var number = app.getElementById('number');
          var numberrequired = app.getElementById('numberrequired');
          var email = app.getElementById('email');
          var emailrequired = app.getElementById('emailrequired');
          var submit = app.getElementById('submit_button');

          var valid = app.createClientHandler()
            .validateNumber(number)
            .validateNotInteger(text)
            .validateEmail(email)
            .forTargets(submit).setEnabled(true)
            .forTargets(number, text, email).setStyleAttribute("color", "black")
            .forTargets(numberrequired, textrequired, emailrequired).setText('*').setStyleAttribute("color", "red").setVisible(true);

          var invalidno = app.createClientHandler().validateNotNumber(number).validateMatches(number, '')
            .forTargets(number).setStyleAttribute("color", "red")
            .forTargets(submit).setEnabled(false)
            .forTargets(numberrequired).setText('Please Enter a Valid No.').setStyleAttribute("color", "red").setVisible(true);

          var validno = app.createClientHandler().validateNumber(number)
            .forTargets(number).setStyleAttribute("color", "black")
            .forTargets(numberrequired).setText('*').setStyleAttribute("color", "red").setVisible(true);

          var invalidText = app.createClientHandler().validateNumber(text).validateMatches(text, '')
            .forTargets(text).setStyleAttribute("color", "red")
            .forTargets(submit).setEnabled(false)
            .forTargets(textrequired).setText('Please Enter a Valid Name.').setStyleAttribute("color", "red").setVisible(true);

          var validText = app.createClientHandler().validateNotNumber(text)
            .forTargets(text).setStyleAttribute("color", "black")
            .forTargets(textrequired).setText('*').setStyleAttribute("color", "red").setVisible(true);

          var invalidemail = app.createClientHandler().validateNotEmail(email).validateMatches(email, '')
            .forTargets(email).setStyleAttribute("color", "red")
            .forTargets(submit).setEnabled(false)
            .forTargets(emailrequired).setText('Please Enter a Valid Mail-Id.').setStyleAttribute("color", "red").setVisible(true);

          var validemail = app.createClientHandler().validateEmail(email)
            .forTargets(email).setStyleAttribute("color", "black")
            .forTargets(emailrequired).setText('*').setStyleAttribute("color", "red").setVisible(true);

          number.addKeyPressHandler(invalidno).addKeyPressHandler(validno).addKeyPressHandler(valid).addKeyPressHandler(invalidText).addKeyPressHandler(invalidemail);
          text.addKeyPressHandler(invalidText).addKeyPressHandler(validText).addKeyPressHandler(valid).addKeyPressHandler(invalidno).addKeyPressHandler(invalidemail);
          email.addKeyPressHandler(invalidemail).addKeyPressHandler(validemail).addKeyPressHandler(valid).addKeyPressHandler(invalidno).addKeyPressHandler(invalidText);

          if (text == '') { flag = 1; }
          if (email == '') { flag = 1; }
          if (number == '') { flag = 1; }
          if (flag == 1) { submit.setEnabled(false); }

          return app;
        }

    That is my validation function; I don't know why it does not cover all of the cases. What I also need is for the submit button to be enabled only after all of the fields pass validation. Once it has been enabled, making an error in any field does not disable it again correctly, even though I think I wrote the code correctly.

    Please take a look at my validation function and give me some suggestions to make this work. Please guide me. Thanks & regards, chocka.

    Read the article

  • CSS selector for first element with class

    - by Rajat
    I have a bunch of elements with a class name:

        <p class="red"></p>
        <div class="red"></div>

    I can't seem to select the first element with class="red" using the following CSS rule:

        .red:first-child {
            border: 5px solid red;
        }

    What is wrong with this selector, and how do I correct it?
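
    Some context on why this tends to fail, offered as a hedge rather than a definitive diagnosis: :first-child matches an element only when it is the first child of its parent, regardless of class, so .red:first-child only fires when the very first child also happens to carry class="red". One commonly used pure-CSS workaround, assuming the .red elements are siblings, is to style them all and then undo the style for every .red that follows another .red:

        .red {
            border: 5px solid red;
        }
        /* any .red that comes after another .red sibling loses the border again */
        .red ~ .red {
            border: none;
        }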

    Read the article

  • Creating array from two arrays

    - by binoculars
    I'm having trouble trying to create a certain array. Basically, I have an array like this:

        [0] => Array ( [id] => 12341241  [type] => "Blue" )
        [1] => Array ( [id] => 52454235  [type] => "Blue" )
        [2] => Array ( [id] => 848437437 [type] => "Blue" )
        [3] => Array ( [id] => 387372723 [type] => "Blue" )
        [4] => Array ( [id] => 73732623  [type] => "Blue" )
        ...

    Next, I have an array like this:

        [0] => Array ( [id] => 34141  [type] => "Red" )
        [1] => Array ( [id] => 253532 [type] => "Red" )
        [2] => Array ( [id] => 94274  [type] => "Red" )

    I want to construct an array which is a combination of the two above, using this rule: after 3 Blues, there must be a Red:

        Blue1 Blue2 Blue3 Red1 Blue4 Blue5 Blue6 Red2 Blue7 Blue8 Blue9 Red3

    Note that there can be more Reds than Blues, but also more Blues than Reds. If the Reds run out, it should begin with the first one again. Example: let's say there are only two Reds:

        Blue1 Blue2 Blue3 Red1 Blue4 Blue5 Blue6 Red2 Blue7 Blue8 Blue9 Red1 ...

    If the Blues run out, the Reds should be appended until they run out too. Example: let's say there are 5 Blues and 5 Reds:

        Blue1 Blue2 Blue3 Red1 Blue4 Blue5 Red2 Red3 Red4 Red5

    Note: the arrays come from MySQL fetches. Maybe it's better to fetch them while building the new array? Anyway, the while loops got to me, I can't figure it out... Any help is much appreciated!
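
    A minimal PHP sketch of one way to apply that rule, assuming the two arrays have already been fetched into $blues and $reds (names invented here):

        <?php
        // Sketch only: one Red after every 3 Blues, cycling the Reds if they run
        // out, and appending the leftover Reds once the Blues are exhausted.
        $result   = array();
        $redIndex = 0;
        $blueSeen = 0;

        foreach ($blues as $blue) {
            $result[] = $blue;
            $blueSeen++;
            if ($blueSeen % 3 == 0 && count($reds) > 0) {
                $result[] = $reds[$redIndex % count($reds)]; // wraps around when Reds run out
                $redIndex++;
            }
        }

        // Blues are exhausted: append any Reds that were never used.
        for (; $redIndex < count($reds); $redIndex++) {
            $result[] = $reds[$redIndex];
        }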

    Read the article

  • ANTS CLR and Memory Profiler In Depth Review (Part 1 of 2 – CLR Profiler)

    - by ToStringTheory
    One of the things that people might not know about me, is my obsession to make my code as efficient as possible.  Many people might not realize how much of a task or undertaking that this might be, but it is surely a task as monumental as climbing Mount Everest, except this time it is a challenge for the mind…  In trying to make code efficient, there are many different factors that play a part – size of project or solution, tiers, language used, experience and training of the programmer, technologies used, maintainability of the code – the list can go on for quite some time. I spend quite a bit of time when developing trying to determine what is the best way to implement a feature to accomplish the efficiency that I look to achieve.  One program that I have recently come to learn about – Red Gate ANTS Performance (CLR) and Memory profiler gives me tools to accomplish that job more efficiently as well.  In this review, I am going to cover some of the features of the ANTS profiler set by compiling some hideous example code to test against. Notice As a member of the Geeks With Blogs Influencers program, one of the perks is the ability to review products, in exchange for a free license to the program.  I have not let this affect my opinions of the product in any way, and Red Gate nor Geeks With Blogs has tried to influence my opinion regarding this product in any way. Introduction The ANTS Profiler pack provided by Red Gate was something that I had not heard of before receiving an email regarding an offer to review it for a license.  Since I look to make my code efficient, it was a no brainer for me to try it out!  One thing that I have to say took me by surprise is that upon downloading the program and installing it you fill out a form for your usual contact information.  Sure enough within 2 hours, I received an email from a sales representative at Red Gate asking if she could help me to achieve the most out of my trial time so it wouldn’t go to waste.  After replying to her and explaining that I was looking to review its feature set, she put me in contact with someone that setup a demo session to give me a quick rundown of its features via an online meeting.  After having dealt with a massive ordeal with one of my utility companies and their complete lack of customer service, Red Gates friendly and helpful representatives were a breath of fresh air, and something I was thankful for. ANTS CLR Profiler The ANTS CLR profiler is the thing I want to focus on the most in this post, so I am going to dive right in now. Install was simple and took no time at all.  It installed both the profiler for the CLR and Memory, but also visual studio extensions to facilitate the usage of the profilers (click any images for full size images): The Visual Studio menu options (under ANTS menu) Starting the CLR Performance Profiler from the start menu yields this window If you follow the instructions after launching the program from the start menu (Click File > New Profiling Session to start a new project), you are given a dialog with plenty of options for profiling: The New Session dialog.  Lots of options.  One thing I noticed is that the buttons in the lower right were half-covered by the panel of the application.  If I had to guess, I would imagine that this is caused by my DPI settings being set to 125%.  This is a problem I have seen in other applications as well that don’t scale well to different dpi scales. 
The profiler options give you the ability to profile: .NET Executable ASP.NET web application (hosted in IIS) ASP.NET web application (hosted in IIS express) ASP.NET web application (hosted in Cassini Web Development Server) SharePoint web application (hosted in IIS) Silverlight 4+ application Windows Service COM+ server XBAP (local XAML browser application) Attach to an already running .NET 4 process Choosing each option provides a varying set of other variables/options that one can set including options such as application arguments, operating path, record I/O performance performance counters to record (43 counters in all!), etc…  All in all, they give you the ability to profile many different .Net project types, and make it simple to do so.  In most cases of my using this application, I would be using the built in Visual Studio extensions, as they automatically start a new profiling project in ANTS with the options setup, and start your program, however RedGate has made it easy enough to profile outside of Visual Studio as well. On the flip side of this, as someone who lives most of their work life in Visual Studio, one thing I do wish is that instead of opening an entirely separate application/gui to perform profiling after launching, that instead they would provide a Visual Studio panel with the information, and integrate more of the profiling project information into Visual Studio.  So, now that we have an idea of what options that the profiler gives us, its time to test its abilities and features. Horrendous Example Code – Prime Number Generator One of my interests besides development, is Physics and Math – what I went to college for.  I have especially always been interested in prime numbers, as they are something of a mystery…  So, I decided that I would go ahead and to test the abilities of the profiler, I would write a small program, website, and library to generate prime numbers in the quantity that you ask for.  I am going to start off with some terrible code, and show how I would see the profiler being used as a development tool. First off, the IPrimes interface (all code is downloadable at the end of the post): interface IPrimes { IEnumerable<int> GetPrimes(int retrieve); } Simple enough, right?  Anything that implements the interface will (hopefully) provide an IEnumerable of int, with the quantity specified in the parameter argument.  Next, I am going to implement this interface in the most basic way: public class DumbPrimes : IPrimes { public IEnumerable<int> GetPrimes(int retrieve) { //store a list of primes already found var _foundPrimes = new List<int>() { 2, 3 }; //if i ask for 1 or two primes, return what asked for if (retrieve <= _foundPrimes.Count()) return _foundPrimes.Take(retrieve); //the next number to look at int _analyzing = 4; //since I already determined I don't have enough //execute at least once, and until quantity is sufficed do { //assume prime until otherwise determined bool isPrime = true; //start dividing at 2 //divide until number is reached, or determined not prime for (int i = 2; i < _analyzing && isPrime; i++) { //if (i) goes into _analyzing without a remainder, //_analyzing is NOT prime if (_analyzing % i == 0) isPrime = false; } //if it is prime, add to found list if (isPrime) _foundPrimes.Add(_analyzing); //increment number to analyze next _analyzing++; } while (_foundPrimes.Count() < retrieve); return _foundPrimes; } } This is the simplest way to get primes in my opinion.  
Checking each number by the straight definition of a prime – is it divisible by anything besides 1 and itself. I have included this code in a base class library for my solution, as I am going to use it to demonstrate a couple of features of ANTS.  This class library is consumed by a simple non-MVVM WPF application, and a simple MVC4 website.  I will not post the WPF code here inline, as it is simply an ObservableCollection<int>, a label, two textbox’s, and a button. Starting a new Profiling Session So, in Visual Studio, I have just completed my first stint developing the GUI and DumbPrimes IPrimes class, so now I want to check my codes efficiency by profiling it.  All I have to do is build the solution (surprised initiating a profiling session doesn’t do this, but I suppose I can understand it), and then click the ANTS menu, followed by Profile Performance.  I am then greeted by the profiler starting up and already monitoring my program live: You are provided with a realtime graph at the top, and a pane at the bottom giving you information on how to proceed.  I am going to start by asking my program to show me the first 15000 primes: After the program finally began responding again (I did all the work on the main UI thread – how bad!), I stopped the profiler, which did kill the process of my program too.  One important thing to note, is that the profiler by default wants to give you a lot of detail about the operation – line hit counts, time per line, percent time per line, etc…  The important thing to remember is that this itself takes a lot of time.  When running my program without the profiler attached, it can generate the 15000 primes in 5.18 seconds, compared to 74.5 seconds – almost a 1500 percent increase.  While this may seem like a lot, remember that there is a trade off.  It may be WAY more inefficient, however, I am able to drill down and make improvements to specific problem areas, and then decrease execution time all around. Analyzing the Profiling Session After clicking ‘Stop Profiling’, the process running my application stopped, and the entire execution time was automatically selected by ANTS, and the results shown below: Now there are a number of interesting things going on here, I am going to cover each in a section of its own: Real Time Performance Counter Bar (top of screen) At the top of the screen, is the real time performance bar.  As your application is running, this will constantly update with the currently selected performance counters status.  A couple of cool things to note are the fact that you can drag a selection around specific time periods to drill down the detail views in the lower 2 panels to information pertaining to only that period. After selecting a time period, you can bookmark a section and name it, so that it is easy to find later, or after reloaded at a later time.  You can also zoom in, out, or fit the graph to the space provided – useful for drilling down. It may be hard to see, but at the top of the processor time graph below the time ticks, but above the red usage graph, there is a green bar. This bar shows at what times a method that is selected in the ‘Call tree’ panel is called. Very cool to be able to click on a method and see at what times it made an impact. As I said before, ANTS provides 43 different performance counters you can hook into.  
Click the arrow next to the Performance tab at the top will allow you to change between different counters if you have them selected: Method Call Tree, ADO.Net Database Calls, File IO – Detail Panel Red Gate really hit the mark here I think. When you select a section of the run with the graph, the call tree populates to fill a hierarchical tree of method calls, with information regarding each of the methods.   By default, methods are hidden where the source is not provided (framework type code), however, Red Gate has integrated Reflector into ANTS, so even if you don’t have source for something, you can select a method and get the source if you want.  Methods are also hidden where the impact is seen as insignificant – methods that are only executed for 1% of the time of the overall calling methods time; in other words, working on making them better is not where your efforts should be focused. – Smart! Source Panel – Detail Panel The source panel is where you can see line level information on your code, showing the code for the currently selected method from the Method Call Tree.  If the code is not available, Reflector takes care of it and shows the code anyways! As you can notice, there does seem to be a problem with how ANTS determines what line is the actual line that a call is completed on.  I have suspicions that this may be due to some of the inline code optimizations that the CLR applies upon compilation of the assembly.  In a method with comments, the problem is much more severe: As you can see here, apparently the most offending code in my base library was a comment – *gasp*!  Removing the comments does help quite a bit, however I hope that Red Gate works on their counter algorithm soon to improve the logic on positioning for statistics: I did a small test just to demonstrate the lines are correct without comments. For me, it isn’t a deal breaker, as I can usually determine the correct placements by looking at the application code in the region and determining what makes sense, but it is something that would probably build up some irritation with time. Feature – Suggest Method for Optimization A neat feature to really help those in need of a pointer, is the menu option under tools to automatically suggest methods to optimize/improve: Nice feature – clicking it filters the call tree and stars methods that it thinks are good candidates for optimization.  I do wish that they would have made it more visible for those of use who aren’t great on sight: Process Integration I do think that this could have a place in my process.  After experimenting with the profiler, I do think it would be a great benefit to do some development, testing, and then after all the bugs are worked out, use the profiler to check on things to make sure nothing seems like it is hogging more than its fair share.  For example, with this program, I would have developed it, ran it, tested it – it works, but slowly. After looking at the profiler, and seeing the massive amount of time spent in 1 method, I might go ahead and try to re-implement IPrimes (I actually would probably rewrite the offending code, but so that I can distribute both sets of code easily, I’m just going to make another implementation of IPrimes).  
Using two pieces of knowledge about prime numbers can make this method MUCH more efficient – prime numbers fall into two buckets 6k+/-1 , and a number is prime if it is not divisible by any other primes before it: public class SmartPrimes : IPrimes { public IEnumerable<int> GetPrimes(int retrieve) { //store a list of primes already found var _foundPrimes = new List<int>() { 2, 3 }; //if i ask for 1 or two primes, return what asked for if (retrieve <= _foundPrimes.Count()) return _foundPrimes.Take(retrieve); //the next number to look at int _k = 1; //since I already determined I don't have enough //execute at least once, and until quantity is sufficed do { //assume prime until otherwise determined bool isPrime = true; int potentialPrime; //analyze 6k-1 //assign the value to potential potentialPrime = 6 * _k - 1; //if there are any primes that divise this, it is NOT a prime number //using PLINQ for quick boost isPrime = !_foundPrimes.AsParallel() .Any(prime => potentialPrime % prime == 0); //if it is prime, add to found list if (isPrime) _foundPrimes.Add(potentialPrime); if (_foundPrimes.Count() == retrieve) break; //analyze 6k+1 //assign the value to potential potentialPrime = 6 * _k + 1; //if there are any primes that divise this, it is NOT a prime number //using PLINQ for quick boost isPrime = !_foundPrimes.AsParallel() .Any(prime => potentialPrime % prime == 0); //if it is prime, add to found list if (isPrime) _foundPrimes.Add(potentialPrime); //increment k to analyze next _k++; } while (_foundPrimes.Count() < retrieve); return _foundPrimes; } } Now there are definitely more things I can do to help make this more efficient, but for the scope of this example, I think this is fine (but still hideous)! Profiling this now yields a happy surprise 27 seconds to generate the 15000 primes with the profiler attached, and only 1.43 seconds without.  One important thing I wanted to call out though was the performance graph now: Notice anything odd?  The %Processor time is above 100%.  This is because there is now more than 1 core in the operation.  A better label for the chart in my mind would have been %Core time, but to each their own. Another odd thing I noticed was that the profiler seemed to be spot on this time in my DumbPrimes class with line details in source, even with comments..  Odd. Profiling Web Applications The last thing that I wanted to cover, that means a lot to me as a web developer, is the great amount of work that Red Gate put into the profiler when profiling web applications.  In my solution, I have a simple MVC4 application setup with 1 page, a single input form, that will output prime values as my WPF app did.  Launching the profiler from Visual Studio as before, nothing is really different in the profiler window, however I did receive a UAC prompt for a Red Gate helper app to integrate with the web server without notification. After requesting 500, 1000, 2000, and 5000 primes, and looking at the profiler session, things are slightly different from before: As you can see, there are 4 spikes of activity in the processor time graph, but there is also something new in the call tree: That’s right – ANTS will actually group method calls by get/post operations, so it is easier to find out what action/page is giving the largest problems…  Pretty cool in my mind! Overview Overall, I think that Red Gate ANTS CLR Profiler has a lot to offer, however I think it also has a long ways to go.  
3 Biggest Pros:
    • Ability to easily drill down from time graph, to method calls, to source code
    • Wide variety of counters to choose from when profiling your application
    • Excellent integration/grouping of methods being called from web applications by request – BRILLIANT!

3 Biggest Cons:
    • Issue regarding line details in source view
    • Nit pick – Processor time vs. Core time
    • Nit pick – Lack of full integration with Visual Studio

Ratings
    • Ease of Use (7/10) – I marked down here because of the problems with the line level details and the extra work that that entails, and the lack of better integration with Visual Studio.
    • Effectiveness (10/10) – I believe that the profiler does EXACTLY what it purports to do.  Especially with its large variety of performance counters, a definite plus!
    • Features (9/10) – Besides the real time performance monitoring, and the drill downs that I've shown here, ANTS also has great integration with ADO.Net, with the ability to show database queries run by your application in the profiler.  This, with the line level details, the web request grouping, Reflector integration, and various options to customize your profiling session, I think creates a great set of features!
    • Customer Service (10/10) – My entire experience with Red Gate personnel has been nothing but good.  Their people are friendly, helpful, and happy!
    • UI / UX (8/10) – The interface is very easy to get around, and all of the options are easy to find.  With a little bit of poking around, you'll be optimizing Hello World in no time flat!
    • Overall (8/10) – Overall, I am happy with the Performance Profiler and its features, as well as with the service I received when working with the Red Gate personnel.  I WOULD recommend trying the application and seeing if it would fit into your process, BUT remember there are still some kinks in it to hopefully be worked out.

My next post will definitely be shorter (hopefully), but thank you for reading up to here, or skipping ahead!  Please, if you do try the product, drop me a message and let me know what you think!  I would love to hear any opinions you may have on the product.

Code
Feel free to download the code I used above – download via DropBox

    Read the article

  • Before the Summit of 2012

    - by Ajarn Mark Caldwell
    Today, Monday, was the first day of the PASS Summit Preconference training events, but instead I spent the day at the free SQL in the City event put on by Red Gate. For me this was not a financial decision (pre-con sessions cost extra above the general Summit registration) but rather a matter of interest.  I had already included money for pre-cons in this year’s training budget, but none of them really stood out to me, so even if the Red-Gate event were not going on at the same time, I probably would not have gone to any pre-cons this year.  However, the topics being presented at the SQL in the City event were of great interest to me.  There promised to be good information on Continuous Integration and automated deployment of database changes, which lately has been a real hot topic at my work.  And indeed, Red-Gate announced the release of a new tool (still in Early Access Program…a.k.a. Beta) which is called the Deployment Manager.  Since we are in the middle of a TFS implementation project, it will be interesting to see how this plays out and compares to what we put together with the automated builds in TFS.  But, as I understand it, the primary focus of Deployment Manager is not to be the Build process (Red Gate uses JetBrains’ Team City for that in their shop) but rather to aid in the deployment of those build packages, as well as providing easy rollback and a good visualization of which versions of software are in which environments.  It looks promising and I’ve already downloaded the installer package to play with it later. Overall, I was quite impressed with the SQL in the City event.  Having heard many current and past members of the PASS Board of Directors describe the challenges of putting on a large conference, and the growing pains that the PASS Summit has gone through, I am even more impressed that the Red Gate event ran as smoothly as it did.  And it is quite impressive the amount of money that Red Gate must have spent given that this was a no-charge event to attend, they had a very nice hot lunch, and the after-event drinks celebration.  Well done, folks! Of course it was great to hear from a variety of speakers.  Today I listened to some folks from Red Gate like Grant Fritchey (blog | @GFritchey) and David Atkinson (Product Manager for SQL Source Control and now the Deployment Manager tool set); and also Brent Ozar (blog | @BrentO) and Buck Woody (blog | @BuckWoody).  By the way, if you have never seen either Brent or Buck speak, you really should.  Different styles, but both are very entertaining and educational at the same time.  I love Buck’s sense of humor (here’s a tip…don’t be late to Buck’s session or you’ll become part of the presentation) and I praise Brent’s slides.  Brent’s style very much reminds me of that espoused by Garr Reynolds on his Presentation Zen blog (and book) and I am impressed that he can make a technical presentation so engaging. It was a great day, a great way to kick off the week, and I am excited to get into the full Summit!

    Read the article

  • nth-child selector in jQuery

    - by Praveen Prasad
        <table width="600px" id='testTable'>
            <tr class="red"><td>this</td></tr>
            <tr><td>1</td></tr>
            <tr><td>1</td></tr>
            <tr><td>1</td></tr>
            <tr class="red"><td>1</td></tr>
            <tr><td>1</td></tr>
            <tr><td>1</td></tr>
            <tr class="red"><td>this</td></tr>
            <tr><td>1</td></tr>
            <tr><td>1</td></tr>
            <tr class="red"><td>1</td></tr>
            <tr><td>1</td></tr>
            <tr><td>1</td></tr>
        </table>

        .gray { background-color:#dddddd; }
        .red  { color:Red; }

        $(function () {
            $('#testTable tr.red:nth-child(odd)').addClass('gray');
            //this should select tr's with text=this, but it's not happening
        });

    I want to select every other row inside the table that has class=red, but it's not happening. Please help.
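
    A possible explanation, hedged rather than definitive: :nth-child() counts an element's position among all of its siblings, not its position within the set matched by tr.red, so "odd" here means odd table rows that also happen to carry the class. If the goal is every other element of the .red set itself, one sketch is to filter the matched set instead (jQuery's :even/:odd are 0-based):

        $(function () {
            // 1st, 3rd, 5th ... of the tr.red rows (indexes 0, 2, 4 in the matched set)
            $('#testTable tr.red').filter(':even').addClass('gray');

            // use ':odd' instead for the 2nd, 4th, 6th ... tr.red rows
        });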

    Read the article

  • Array, change color, AS3

    - by pixelGreaser
    Hi, thanks for the help yesterday, but I have one more question. How can I change the color of the text for certain words? My animation plays the text THIS SALE IS RED HOT!!! and I want RED HOT!!! to be red. It seems the array can be indexed in such a way as to switch the color from blue to red.

    MY BANNER AD

        var myArray:Array = ["THIS","SALE","IS","RED HOT!!!",];

        var tm:Timer = new Timer(500);
        tm.addEventListener(TimerEvent.TIMER, countdown);

        function countdown(event:TimerEvent) {
            tx.text = myArray[(tm.currentCount-1)%myArray.length];
        }

        tm.start();
        tx.textColor = 0x0000FF;

    Cont... PSEUDO CODE

        //var myArray:Array = ["This","Sale","is","RED HOT!!!",];
        var spliceRedhot = myArray.splice(-1);
        //trace(myArray[2]);
        trace(spliceRedhot);

        function mySplice(e:Event):void {
            if (spliceRedhot = 4) {
                //Make RED HOT!!! red
                tx.textColor = 0xFF0000;
            } else {
                //Text is Blue again
                tx.textColor = 0x0000FF;
            }
        }
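
    A minimal sketch of one way to do it, assuming tx is a TextField and the goal is simply "the last array entry is red, everything else is blue": pick the color at the same moment the word is picked, inside the timer handler, instead of splicing the array.

        function countdown(event:TimerEvent):void {
            var index:int = (tm.currentCount - 1) % myArray.length;
            tx.text = myArray[index];
            // last entry ("RED HOT!!!") is red, every other word stays blue
            tx.textColor = (index == myArray.length - 1) ? 0xFF0000 : 0x0000FF;
        }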

    Read the article

  • Subdomains not working with virtual hosts on apache2 ubuntu

    - by cy834sh4rk
    I'm trying to set up a subdomain on my EC2 account but can't figure out what's going on. I've looked for a few hours and haven't been able to find an answer :-/ I'm trying to set up a subdomain using virtual hosts, but no matter what I try the browser can't find the subdomain :-( I have the following vhost files set up:

    apache2/sites-available/mysite (this site currently works)

        <VirtualHost *:80>
            ServerName mysite.com
            ServerAdmin webmaster@localhost
            DocumentRoot /home/sites/mysite
            <Directory /home/sites/mysite>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/mysite-error.log
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/mysite-access.log combined
        </VirtualHost>

    apache2/sites-available/red (this is the subdomain I'm trying to set up)

        <VirtualHost *:80>
            ServerName red.mysite.com
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www/red
            <Directory /var/www/red>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/red-error.log
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/red-access.log combined
        </VirtualHost>

    Apache mod_rewrite is enabled. I've enabled both sites using a2ensite, and I make sure I restart Apache every time I make a change.

    /etc/hosts

        127.0.0.1 localhost
        127.0.0.1 mysite.com
        127.0.0.1 red.mysite.com

    Any help would be appreciated. Thanks!
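
    A hedged checklist rather than a definitive diagnosis, since the configs above look plausible once the tags are closed: the usual suspects are the site not actually being enabled and reloaded, and DNS. The /etc/hosts entries only affect lookups made on the server itself; a browser on any other machine needs a public DNS record for red.mysite.com (an A record or wildcard) pointing at the EC2 instance.

        # sketch of the usual checks on a stock Ubuntu Apache install
        sudo a2ensite red              # enable sites-available/red
        sudo apache2ctl configtest     # catches syntax errors such as unclosed <VirtualHost> tags
        sudo service apache2 reload    # pick up the new vhost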

    Read the article

  • How to get the OCI lib on a Red Hat machine to work with ROracle?

    - by Matt Bannert
    I need to get the OCI lib working on my RHEL 6.3 machine, and I am experiencing some trouble with OCI header files that can't be found. I have installed (using yum install) oracle-instantclient11.2-basic-11.2.0.3.0-1.x86_64.rpm because, according to the official page, it's all I need to run OCI. To test the whole thing in general I've installed sqlplus64, which worked after I set export LD_LIBRARY_PATH=/usr/lib/oracle/11.2/client64/lib. Unfortunately the header files couldn't be found after setting LD_LIBRARY_PATH. Actually I am not surprised, because there is no include directory in any of these Oracle paths. So the question is: where do I get these missing header files from? Are they actually already there and I just can't find them? Btw: I am doing this whole exercise because I want to use ROracle on my RStudio server, and this R package depends on the OCI library. Once I am back in R territory the road gets much less bumpy for me.

    EDIT: this documentation helped me a little further. However, I guess I found some header files now in "/usr/include/oracle/11.2/client64". But which variable do I have to set to this location?
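
    A sketch of what usually closes this gap, with the caveat that the exact package and variable names below are assumptions to verify against the Instant Client download page and ROracle's INSTALL notes: the basic instant client RPM ships only the shared libraries, the headers come from the matching devel RPM, and ROracle's configure step is then pointed at the lib and include directories.

        # headers come from the devel package that matches the installed basic package
        sudo yum install oracle-instantclient11.2-devel-11.2.0.3.0-1.x86_64.rpm

        # assumed variable names for ROracle's configure step -- check its INSTALL file
        export OCI_LIB=/usr/lib/oracle/11.2/client64/lib
        export OCI_INC=/usr/include/oracle/11.2/client64
        R CMD INSTALL ROracle_*.tar.gz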

    Read the article

  • The five steps of business intelligence adoption: where are you?

    - by Red Gate Software BI Tools Team
    When I was in Orlando and New York last month, I spoke to a lot of business intelligence users. What they told me suggested a path of BI adoption. The user’s place on the path depends on the size and sophistication of their organisation. Step 1: A company with a database of customer transactions will often want to examine particular data, like revenue and unit sales over the last period for each product and territory. To do this, they probably use simple SQL queries or stored procedures to produce data on demand. Step 2: The results from step one are saved in an Excel document, so business users can analyse them with filters or pivot tables. Alternatively, SQL Server Reporting Services (SSRS) might be used to generate a report of the SQL query for display on an intranet page. Step 3: If these queries are run frequently, or business users want to explore data from multiple sources more freely, it may become necessary to create a new database structured for analysis rather than CRUD (create, retrieve, update, and delete). For example, data from more than one system — plus external information — may be incorporated into a data warehouse. This can become ‘one source of truth’ for the business’s operational activities. The warehouse will probably have a simple ‘star’ schema, with fact tables representing the measures to be analysed (e.g. unit sales, revenue) and dimension tables defining how this data is aggregated (e.g. by time, region or product). Reports can be generated from the warehouse with Excel, SSRS or other tools. Step 4: Not too long ago, Microsoft introduced an Excel plug-in, PowerPivot, which allows users to bring larger volumes of data into Excel documents and create links between multiple tables.  These BISM Tabular documents can be created by the database owners or other expert Excel users and viewed by anyone with Excel PowerPivot. Sometimes, business users may use PowerPivot to create reports directly from the primary database, bypassing the need for a data warehouse. This can introduce problems when there are misunderstandings of the database structure or no single ‘source of truth’ for key data. Step 5: Steps three or four are often enough to satisfy business intelligence needs, especially if users are sophisticated enough to work with the warehouse in Excel or SSRS. However, sometimes the relationships between data are too complex or the queries which aggregate across periods, regions etc are too slow. In these cases, it can be necessary to formalise how the data is analysed and pre-build some of the aggregations. To do this, a business intelligence professional will typically use SQL Server Analysis Services (SSAS) to create a multidimensional model — or “cube” — that more simply represents key measures and aggregates them across specified dimensions. Step five is where our tool, SSAS Compare, becomes useful, as it helps review and deploy changes from development to production. For us at Red Gate, the primary value of SSAS Compare is to establish a dialog with BI users, so we can develop a portfolio of products that support creation and deployment across a range of report and model types. For example, PowerPivot and the new BISM Tabular model create a potential customer base for tools that extend beyond BI professionals. We’re interested in learning where people are in this story, so we’ve created a six-question survey to find out. Whether you’re at step one or step five, we’d love to know how you use BI so we can decide how to build tools that solve your problems. 
So if you have sixty seconds to spare, tell us on the survey!
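
    For readers who haven't met the "step 3" star schema in practice, a small sketch of the kind of query it enables (table and column names are invented for illustration, not taken from the article):

        -- aggregate a fact table across two dimensions
        SELECT   d.CalendarYear,
                 p.ProductCategory,
                 SUM(f.UnitsSold) AS UnitsSold,
                 SUM(f.Revenue)   AS Revenue
        FROM     FactSales  AS f
        JOIN     DimDate    AS d ON d.DateKey    = f.DateKey
        JOIN     DimProduct AS p ON p.ProductKey = f.ProductKey
        GROUP BY d.CalendarYear, p.ProductCategory;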

    Read the article

  • Excel issue, a macro may be needed

    - by user124643
    I am trying to compare lists in Excel. There are two lists: one list has just one column, and the other has two columns. What I am trying to do is, when column A matches column C, take the value in column D and use that to replace column A. For example:

        Column A    Column B    Column C    Column D
        Blue                    Blue        Shirt
        Blue                    Red         Pants
        Red                     Green       Shoes
        Red
        Green
        Green
        Purple

    So the completed list should look like:

        Column A    Column B    Column C    Column D
        Shirt                   Blue        Shirt
        Shirt                   Red         Pants
        Pants                   Green       Shoes
        Pants
        Shoes
        Shoes
        Purple
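
    If a macro is acceptable, one sketch (the ranges are assumptions; adjust them to the real sheet) loops over column A and overwrites each cell with its lookup result, leaving non-matching values such as Purple alone:

        ' VBA sketch, assuming the lookup table is in columns C:D and the list is in column A
        Sub ReplaceFromLookup()
            Dim cell As Range, hit As Variant
            For Each cell In Range("A1", Range("A" & Rows.Count).End(xlUp))
                hit = Application.VLookup(cell.Value, Range("C:D"), 2, False)
                If Not IsError(hit) Then cell.Value = hit
            Next cell
        End Sub

    A plain-formula alternative in a helper column would be =IFERROR(VLOOKUP(A1, $C:$D, 2, FALSE), A1), filled down and then pasted back over column A as values.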

    Read the article

  • World Backup Day

    - by red(at)work
    Here at Red Gate Towers, the SQL Backup development team have been hunkered down in their shed for the last few months, with the toolbox, blowtorch and chamois leather out, upgrading SQL Backup. When we started, autumn leaves were falling. Now we're about to finish, spring flowers are budding. If not quite a gleaming new machine, at the very least a familiar, reliable engine with some shiny new bits on it will trundle magnificently out of the workshop. One of the interesting things I've noticed about working on software development teams is that the team is together for so long 'implementing' stuff - designing, coding, testing, fixing bugs and so on - that you occasionally forget why you're doing what you're doing. Doubt creeps in. It feels like a long time since we launched this project in a fanfare of optimism and enthusiasm, and all that clarity of purpose and mission "yee-haw" has dissipated with the daily pressures of development. Every now and again, we look up from our bunker and notice all those thousands of users out there, with their different configurations and working practices and each with their own set of problems and requirements, and we ask ourselves "does anyone care about what we're doing?" Has the world moved on while we've been busy? Could we have been doing something more useful with the time and talent of all these excellent people we've assembled? In truth, you can research and test and validate all you like, but you never really know if you've done the right thing (or at least, something valuable for some users) until you release. All projects suffer this insecurity. If they don't, maybe you're not worrying enough about what you're building. The two enemies of software development are certainty and complacency. Oh, and of course, rival teams with Nerf guns. The goal of SQL Backup 7 is to make it so easy to schedule regular restores of your backups that you have no excuse not to. Why schedule a restore? Because your data is not as good as your last backup. It's only as good as your last successful restore. If you're not checking your backups by restoring them and running an integrity check on the database, you're only doing half the job. It seems that most DBAs know that this is best practice, but it can be tricky and time-consuming to set up, so it's one of those tasks that can get forgotten in the midst all the other demands on their time. Sometimes, they're just too busy firefighting. But if it was simple to do? That was our inspiration for SQL Backup 7. So it was heartening to read Brent Ozar's blog post the other day about World Backup Day. To be honest, I'd never heard of World Backup Day (Talk Like a Pirate Day, yes, but not this one); however, its emphasis on not just backing up your data but checking the validity of those backups was exactly the same message we had in mind when building SQL Backup 7. It's printed on a piece of A3 above our planning board - "Make backup verification so easy to do that no DBA has an excuse for not doing it" It's the missing piece that completes the puzzle. Simple idea, great concept, useful feature, but, as it turned out, far from straightforward to implement. The problem is the future. As Marty McFly discovered over the course of three movies, the future is uncertain and hard to predict - so when you are scheduling a restore to take place an hour, day, week or month after the backup, there are all kinds of questions that you wouldn't normally have to consider. Where will this backup live? Will it even exist at the time? 
Will it be split into multiple files? What will the file names be? Will it be encrypted? What files should it be restored to? SQL Backup needs to know what to expect at the time the restore job is actually run. Of course, a DBA will know the answer to all these questions, but to deliver the whole point of version 7, we wanted to make it easy for them to input that information into SQL Backup. We think we've done that. When you create your scheduled backup job, there is now an option to create a "reminder" to follow it up with a scheduled restore to verify the resulting backups. Actually, it's much more than a reminder, as it stores all the relevant data so you can click it and pre-populate the wizard with all the right settings to set up your verification restores. Simple. But, what do you think? We'd love you to try it. Post by Brian Harris

    Read the article

  • A simple explanation of Naive Bayes Classification

    - by Jaggerjack
    I am finding it hard to understand the process of Naive Bayes, and I was wondering if someone could explain it with a simple step-by-step process in English. I understand it compares counts of occurrences as probabilities, but I have no idea how the training data is related to the actual dataset. Please give me an explanation of what role the training set plays. I am giving a very simple example for fruits here (bananas, for example). training set--- round-red round-orange oblong-yellow round-red dataset---- round-red round-orange round-red round-orange oblong-yellow round-red round-orange oblong-yellow oblong-yellow round-red
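
    A minimal sketch of the role the training set plays, in Python, using the strings from the question; the class labels (apple, orange, banana) are invented here purely for illustration, since Naive Bayes needs a label on every training row:

        from collections import Counter, defaultdict

        # Hypothetical labels attached to the question's training rows.
        training = [
            ("round-red", "apple"),
            ("round-orange", "orange"),
            ("oblong-yellow", "banana"),
            ("round-red", "apple"),
        ]
        dataset = ["round-red", "round-orange", "round-red", "round-orange",
                   "oblong-yellow", "round-red", "round-orange", "oblong-yellow",
                   "oblong-yellow", "round-red"]

        # 1. Priors: P(class) = count(class) / total training rows
        class_counts = Counter(label for _, label in training)
        total = sum(class_counts.values())
        priors = {c: n / total for c, n in class_counts.items()}

        # 2. Likelihoods: P(feature | class), with add-one smoothing so an
        #    unseen shape-colour string does not zero out the whole product
        feature_counts = defaultdict(Counter)
        for feature, label in training:
            feature_counts[label][feature] += 1
        vocab = {f for f, _ in training} | set(dataset)

        def likelihood(feature, label):
            return (feature_counts[label][feature] + 1) / (class_counts[label] + len(vocab))

        # 3. Classify the unlabelled dataset: pick the class maximising
        #    P(class) * P(feature | class)
        for item in dataset:
            best = max(priors, key=lambda c: priors[c] * likelihood(item, c))
            print(item, "->", best)

    The training set is only ever used to estimate the two sets of numbers (priors and likelihoods); the unlabelled dataset is then scored against those estimates.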

    Read the article

  • Events in Classes (VB.NET)

    - by Otaku
    I find that I write a lot of code within my classes to keep properties in sync with each other. I've read about Events in Classes, but have not been able to wrap my head around how to make them work for what I'm looking for. I could use some advice here. For example, in this one I always want to keep myColor up to date with any change whatsoever in any or all of the Red, Green or Blue properties. Class myColors Private Property Red As Byte Private Property Green As Byte Private Property Blue As Byte Private Property myColor As Color Sub New() myColor = Color.FromArgb(0, 0, 0) End Sub Sub ChangeRed(ByVal r As Byte) Red = r myColor = Color.FromArgb(Red, Green, Blue) End Sub Sub ChangeBlue(ByVal b As Byte) Blue = b myColor = Color.FromArgb(Red, Green, Blue) End Sub End Class If one or more of those changes, I want myColor to be updated. Easy enough as above, but is there a way to work with events that would automatically do this so I don't have to put myColor = Color.FromArgb(Red, Green, Blue) in every sub routine?
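
    One possible shape for this, sketched rather than prescribed: give the class a single private event that every setter raises, and rebuild myColor in one handler (names like ChannelChanged and RebuildColor are invented for the sketch):

        Imports System.Drawing

        Class myColors
            Private _red, _green, _blue As Byte
            Private myColor As Color

            ' Raised by every channel setter; one handler rebuilds the colour.
            Private Event ChannelChanged()

            Sub New()
                AddHandler ChannelChanged, AddressOf RebuildColor
                myColor = Color.FromArgb(0, 0, 0)
            End Sub

            Public Property Red As Byte
                Get
                    Return _red
                End Get
                Set(value As Byte)
                    _red = value
                    RaiseEvent ChannelChanged()
                End Set
            End Property

            ' Green and Blue would follow exactly the same pattern as Red.

            Private Sub RebuildColor()
                myColor = Color.FromArgb(_red, _green, _blue)
            End Sub
        End Class

    Arguably simpler still, with no event plumbing at all: make myColor a ReadOnly property whose Get returns Color.FromArgb(Red, Green, Blue), so there is nothing to keep in sync in the first place.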

    Read the article

  • Trying to create a RegEx for the following patterns

    - by Travis
    Here are the patterns: Red,Green (and so on...) Red (+5.00),Green (+6.00) (and so on...) Red (+5.00,+10.00),Green (+6.00,+20.00) (and so on...) Red (+5.00),Green (and so on...) Each attribute ("Red", "Green") can have 0, 1, or 2 modifiers (shown as "+5.00,+10.00", etc.). I need to capture each of the attributes and their modifiers as a single string (e.g. "Red (+5.00,+10.00)", "Green (+6.00,+20.00)"). Help?
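
    One workable pattern, shown here with Python's re module but portable to most flavours; it assumes attribute names are single words with no spaces:

        import re

        # \w+                 the attribute name ("Red", "Green")
        # (?:\s*\([^)]*\))?   an optional parenthesised list of modifiers
        pattern = re.compile(r"\w+(?:\s*\([^)]*\))?")

        samples = [
            "Red,Green",
            "Red (+5.00),Green (+6.00)",
            "Red (+5.00,+10.00),Green (+6.00,+20.00)",
            "Red (+5.00),Green",
        ]
        for s in samples:
            print(pattern.findall(s))
        # the third sample prints ['Red (+5.00,+10.00)', 'Green (+6.00,+20.00)']

    Because the comma inside the parentheses is consumed by [^)]*, only the commas between attributes act as separators.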

    Read the article

  • Is there a way to apply a CSS class from within a style?

    - by zashu
    I'm trying to be more modular in my CSS style sheets and was wondering if there is some feature like an include or apply that allows the author to apply a set of styles dynamically. Since I am having a hard time wording the question, perhaps an example will make more sense. Let's say, for example, I have the following CSS: .red {color:#e00b0b} #footer a {font-size:0.8em} h2 {font-size:1.4em; font-weight:bold;} In my page, let's say that I want both the footer links and h2 elements to use the special red color (there may be other locations I would like to use it as well). Ideally, I would like to do something like the following: .red {color:#e00b0b} #footer a {font-size:0.8em; apply-class:".red";} h2 {font-size:1.4em; font-weight:bold; apply-class:".red";} To me, this feels "modular" in a way because I can make modifications to the .red class without having to worry so much about where it is used, and other locations can use the styles in that class without worrying about, specifically, what they are. I understand that I have the following options and have included why, in my fairly inexperienced opinion, they are less-than-perfect: 1. Add the color property to every element I want to be that color. Not ideal because, if I change the color, I have to update every rule to match the new color. 2. Add the red class to every element I want to be red. Not ideal because it means that my HTML is dictating presentation. 3. Create an additional rule that selects every element I want to be red and apply the color property to that. Not ideal because it is harder to find all of the rules that style a specific element, making maintenance more of a challenge. Maybe I'm just being an ass and the options above are the only options and I should stick with them. I'm wondering, however, if the "ideal" (well, my ideal) method exists and, if so, what is the proper syntax? If it doesn't exist, option 3 above seems like my best bet. However, I would like to get confirmation.
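
    Plain CSS has no apply-class/include mechanism, so the usual answers are a grouped selector (option 3 in the question) or a preprocessor; a short sketch of both, with the second half assuming Sass/SCSS is available:

        /* Plain CSS: keep the colour in exactly one rule by grouping selectors */
        .red, #footer a, h2 { color: #e00b0b; }
        #footer a { font-size: 0.8em; }
        h2 { font-size: 1.4em; font-weight: bold; }

        /* Sass/SCSS: @extend a placeholder; this compiles to the grouped rule above */
        %red { color: #e00b0b; }
        #footer a { font-size: 0.8em; @extend %red; }
        h2 { font-size: 1.4em; font-weight: bold; @extend %red; }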

    Read the article

  • Moved sitemaps to a different subdomain and losing search referrals around the same time. Red herring or correlation?

    - by er1234
    We started to lose search referral traffic around the same time that I moved some of our sitemaps to a subdomain. Could this have hurt us? I followed Google's steps to creating a sitemap under a different subdomain. The new sitemaps.foo.com subdomain is being crawled and indexed well. Both www.foo.com and sitemaps.foo.com have been verified in Google Webmaster Tools. They appear as distinct sites. Is this correct? I can't find a way in Webmaster Tools to say "Hey, sitemaps.foo.com is really owned by www.foo.com, so show them together and make sure to attribute sitemaps.foo urls to www.foo" Our www.foo.com/robots.txt Sitemap: http://www.foo.com/sitemap.xml Sitemap: http://sitemaps.foo.com/subdir/sitemap.xml.gz

    Read the article

  • LSI RAID-on-chip with RAID6 over two SAS links goes red when HDD enclosure is power cycled; how to recover?

    - by GregC
    I have a RAID6 array managed by an LSI 9286-8e card. I also have a Sans Digital 24-bay NexentaSTOR JBOD enclosure with a SAS extender built in. They are connected to separate UPS devices. Normally, I'd shut down the PC, leaving the RAID6 in a healthy state. But today the power to the JBOD enclosure was cut while the PC kept running. After restarting the PC, all disks in the RAID6 lit up RED, and the only option in the LSI MegaRAID manager app was to reset each disk to unassigned, thereby losing all data on the RAID6 array. Thankfully, I am only testing, but how would I recover if this were to happen in production?

    Read the article

  • iPhone Image Processing--matrix convolution

    - by James
    I am implementing a matrix convolution blur on the iPhone. The following code converts the UIImage supplied as an argument of the blur function into a CGImageRef, and then stores the RGBA values in a standard C char array. CGImageRef imageRef = imgRef.CGImage; int width = imgRef.size.width; int height = imgRef.size.height; CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); unsigned char *pixels = malloc((height) * (width) * 4); NSUInteger bytesPerPixel = 4; NSUInteger bytesPerRow = bytesPerPixel * (width); NSUInteger bitsPerComponent = 8; CGContextRef context = CGBitmapContextCreate(pixels, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big); CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef); CGContextRelease(context); Then the pixel values stored in the pixels array are convolved, and stored in another array. unsigned char *results = malloc((height) * (width) * 4); Finally, these augmented pixel values are changed back into a CGImageRef, converted to a UIImage, and then returned at the end of the function with the following code. context = CGBitmapContextCreate(results, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big); CGImageRef finalImage = CGBitmapContextCreateImage(context); UIImage *newImage = [UIImage imageWithCGImage:CGBitmapContextCreateImage(context)]; CGImageRelease(finalImage); NSLog(@"edges found"); free(results); free(pixels); CGColorSpaceRelease(colorSpace); return newImage; This works perfectly, once. Then, when the image is put through the filter again, very odd pixel values are returned - values that don't correspond to any pixels in the input. Is there any reason why this should work the first time, but then not afterward? Beneath is the entirety of the function. 
-(UIImage*) blur:(UIImage*)imgRef { CGImageRef imageRef = imgRef.CGImage; int width = imgRef.size.width; int height = imgRef.size.height; CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); unsigned char *pixels = malloc((height) * (width) * 4); NSUInteger bytesPerPixel = 4; NSUInteger bytesPerRow = bytesPerPixel * (width); NSUInteger bitsPerComponent = 8; CGContextRef context = CGBitmapContextCreate(pixels, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big); CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef); CGContextRelease(context); height = imgRef.size.height; width = imgRef.size.width; float matrix[] = {0,0,0,0,1,0,0,0,0}; float divisor = 1; float shift = 0; unsigned char *results = malloc((height) * (width) * 4); for(int y = 1; y < height; y++){ for(int x = 1; x < width; x++){ float red = 0; float green = 0; float blue = 0; int multiplier=1; if(y>0 && x>0){ int index = (y-1)*width + x; red = matrix[0]*multiplier*(float)pixels[4*(index-1)] + matrix[1]*multiplier*(float)pixels[4*(index)] + matrix[2]*multiplier*(float)pixels[4*(index+1)]; green = matrix[0]*multiplier*(float)pixels[4*(index-1)+1] + matrix[1]*multiplier*(float)pixels[4*(index)+1] + matrix[2]*multiplier*(float)pixels[4*(index+1)+1]; blue = matrix[0]*multiplier*(float)pixels[4*(index-1)+2] + matrix[1]*multiplier*(float)pixels[4*(index)+2] + matrix[2]*multiplier*(float)pixels[4*(index+1)+2]; index = (y)*width + x; red = red+ matrix[3]*multiplier*(float)pixels[4*(index-1)] + matrix[4]*multiplier*(float)pixels[4*(index)] + matrix[5]*multiplier*(float)pixels[4*(index+1)]; green = green + matrix[3]*multiplier*(float)pixels[4*(index-1)+1] + matrix[4]*multiplier*(float)pixels[4*(index)+1] + matrix[5]*multiplier*(float)pixels[4*(index+1)+1]; blue = blue + matrix[3]*multiplier*(float)pixels[4*(index-1)+2] + matrix[4]*multiplier*(float)pixels[4*(index)+2] + matrix[5]*multiplier*(float)pixels[4*(index+1)+2]; index = (y+1)*width + x; red = red+ matrix[6]*multiplier*(float)pixels[4*(index-1)] + matrix[7]*multiplier*(float)pixels[4*(index)] + matrix[8]*multiplier*(float)pixels[4*(index+1)]; green = green + matrix[6]*multiplier*(float)pixels[4*(index-1)+1] + matrix[7]*multiplier*(float)pixels[4*(index)+1] + matrix[8]*multiplier*(float)pixels[4*(index+1)+1]; blue = blue + matrix[6]*multiplier*(float)pixels[4*(index-1)+2] + matrix[7]*multiplier*(float)pixels[4*(index)+2] + matrix[8]*multiplier*(float)pixels[4*(index+1)+2]; red = red/divisor+shift; green = green/divisor+shift; blue = blue/divisor+shift; if(red<0){ red=0; } if(green<0){ green=0; } if(blue<0){ blue=0; } if(red>255){ red=255; } if(green>255){ green=255; } if(blue>255){ blue=255; } int realPos = 4*(y*imgRef.size.width + x); results[realPos] = red; results[realPos + 1] = green; results[realPos + 2] = blue; results[realPos + 3] = 1; }else { int realPos = 4*((y)*(imgRef.size.width) + (x)); results[realPos] = 0; results[realPos + 1] = 0; results[realPos + 2] = 0; results[realPos + 3] = 1; } } } context = CGBitmapContextCreate(results, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big); CGImageRef finalImage = CGBitmapContextCreateImage(context); UIImage *newImage = [UIImage imageWithCGImage:CGBitmapContextCreateImage(context)]; CGImageRelease(finalImage); free(results); free(pixels); CGColorSpaceRelease(colorSpace); return newImage;} THANKS!!!
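
    A hedged note for readers hitting the same symptom: one plausible culprit is the alpha byte. With kCGImageAlphaPremultipliedLast the context treats the RGB bytes as premultiplied by alpha, so writing an alpha of 1 next to full-strength colour values produces data that only survives one round trip through CGContextDrawImage. A minimal sketch of the adjustment (the surrounding loop is unchanged):

        // Store fully opaque alpha so the premultiplied RGB stays consistent
        // when the returned UIImage is run through the filter a second time.
        results[realPos + 3] = 255;   // was 1

        // Create the output CGImage once, and release both it and the context.
        CGImageRef finalImage = CGBitmapContextCreateImage(context);
        UIImage *newImage = [UIImage imageWithCGImage:finalImage];
        CGImageRelease(finalImage);
        CGContextRelease(context);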

    Read the article

< Previous Page | 9 10 11 12 13 14 15 16 17 18 19 20  | Next Page >