Search Results

Search found 18665 results on 747 pages for 'inside red gate'.

Page 132 of 747

  • Why is this AJAX call being made even though it shouldn't be?

    - by user2921557
    I have this validation script I'm working on but can't see why I'm having an issue. You can see that I have a check flag, set to true or false, which is checked before the AJAX call runs. However, even if a field is empty and check is set to false, it is still running the call. So:
    // JavaScript - Update Password AJAX
    $(document).ready(function () {
        // When the form is submitted
        $('.updatepasswordform').submit(function () {
            var check = true;
            // Get the values
            var password1 = $("input[name=password1]").val();
            var password2 = $("input[name=password2]").val();
            var newpassword = $("input[name=newpassword]").val();
            /* Password Validation */
            // If fields are empty
            if (password1 === '') {
                check = false;
                $("input[name=password1]").css('border', 'solid 2px red');
            }
            // If fields are empty
            if (password2 === '') {
                check = false;
                $("input[name=password2]").css('border', 'solid 2px red');
            }
            // If fields are empty
            if (newpassword === '') {
                check = false;
                $("input[name=newpassword]").css('border', 'solid 2px red');
            }
            if (check = true) {
                $.ajax({
                    type: "POST",
                    url: "process/updatepassword.php",
                    data: $(".updatepasswordform").serialize(),
                    dataType: "json",
                    success: function (response) {
                        /* Checks for database validation, removed for space saving */
                    }
                });
            }
            return false;
        });
    });

    Read the article

  • Ajax comments form in ASP.NET MVC2

    - by Artiom Chilaru
    I've been playing around with different aspects of MVC for some time now, and I've reached a situation where I'm not sure what would be the best way to solve a problem. I'm hoping that the SO community will help me out here :P I've seen a number of examples of Ajax.BeginForm on the internet, and it seems like a very nifty idea. E.g. you have a dropdown where you select a customer - and on selecting one it will load this client's details in some placeholder on the page. This works perfectly fine. But what to do if you want to tie in some validation in the box? Just hypothetically, imagine an article page, and user comments in the bottom. Below the comments area there's an ajax-y "Add comment" box. When a user adds a comment, it will appear in the comments area, below the last comment there. If I set the Ajax.BeginForm to Append the result of the call to the Comments area, it will work fine. But what if the data posted is not valid? Instead of appending a "successful" comment to the comments area I have to show the user validation errors. At this point I decided that the area INSIDE the Ajax.BeginForm will be inside a partial, and the form's submits will return this partial. Validation works fine. On each submit we reload the contents inside the form element. But how to add the successful comment to the top? Other things to consider: The comment form also has a "Preview" button. When the user clicks on Preview, I should load the rendered comment into a preview box. This will probably be inside the form area as well. I was thinking of using Json results instead. When the user submits the form, the server code will generate a Json object with a Success value, and html rendered partials as some properties. Something like { "success": true, "form": "<html form data>", "comment": "successful comment html to inject into the page" } This would be a perfect solution, except there's no way in MVC to render a partial into a string, inside the controller (separation of context, remember?). So.. what should I do then? Any "correct" way to implement this?
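
    For what it's worth, the usual workaround for the "render a partial into a string" objection is a small helper inside the controller that asks the view engine for the partial and renders it into a StringWriter, so the action can return the JSON shape described above. The sketch below is only illustrative and assumes MVC 2-style controllers; the view names ("CommentForm", "Comment") and the CommentModel type are made up for the example.

        protected string RenderPartialViewToString(string viewName, object model)
        {
            ViewData.Model = model;
            using (var sw = new System.IO.StringWriter())
            {
                // Ask the registered view engines for the partial, then render it into the writer
                ViewEngineResult result = ViewEngines.Engines.FindPartialView(ControllerContext, viewName);
                var viewContext = new ViewContext(ControllerContext, result.View, ViewData, TempData, sw);
                result.View.Render(viewContext, sw);
                return sw.ToString();
            }
        }

        [HttpPost]
        public ActionResult AddComment(CommentModel comment)
        {
            if (!ModelState.IsValid)
            {
                // Send back the re-rendered form with validation messages
                return Json(new { success = false, form = RenderPartialViewToString("CommentForm", comment) });
            }
            // ... save the comment ...
            return Json(new { success = true, comment = RenderPartialViewToString("Comment", comment) });
        }

    On the client, the success flag then decides whether to append the comment HTML to the comments area or replace the form HTML with the version containing validation errors.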

    Read the article

  • Routed Command Question

    - by Andrew
    I'd like to implement a custom command to capture a Backspace key gesture inside of a textbox, but I don't know how. I wrote a test program in order to understand what's going on, but the behaviour of the program is rather confusing. Basically, I just need to be able to handle the Backspace key gesture via wpf commands while keyboard focus is in the textbox, and without disrupting the normal behaviour of the Backspace key within the textbox. Here's the xaml for the main window and the corresponding code-behind, too (note that I created a second command for the Enter key, just to compare its behaviour to that of the Backspace key): <Window x:Class="WpfApplication1.Window1" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="Window1" Height="300" Width="300"> <Grid> <TextBox Margin="44,54,44,128" Name="textBox1" /> </Grid> </Window> And here's the corresponding code-behind: using System.Windows; using System.Windows.Input; namespace WpfApplication1 { /// <summary> /// Interaction logic for EntryListView.xaml /// </summary> public partial class Window1 : Window { public static RoutedCommand EnterCommand = new RoutedCommand(); public static RoutedCommand BackspaceCommand = new RoutedCommand(); public Window1() { InitializeComponent(); CommandBinding cb1 = new CommandBinding(EnterCommand, EnterExecuted, EnterCanExecute); CommandBinding cb2 = new CommandBinding(BackspaceCommand, BackspaceExecuted, BackspaceCanExecute); this.CommandBindings.Add(cb1); this.CommandBindings.Add(cb2); KeyGesture kg1 = new KeyGesture(Key.Enter); KeyGesture kg2 = new KeyGesture(Key.Back); InputBinding ib1 = new InputBinding(EnterCommand, kg1); InputBinding ib2 = new InputBinding(BackspaceCommand, kg2); this.InputBindings.Add(ib1); this.InputBindings.Add(ib2); } #region Command Handlers private void EnterCanExecute(object sender, CanExecuteRoutedEventArgs e) { MessageBox.Show("Inside EnterCanExecute Method."); e.CanExecute = true; } private void EnterExecuted(object sender, ExecutedRoutedEventArgs e) { MessageBox.Show("Inside EnterExecuted Method."); e.Handled = true; } private void BackspaceCanExecute(object sender, CanExecuteRoutedEventArgs e) { MessageBox.Show("Inside BackspaceCanExecute Method."); e.Handled = true; } private void BackspaceExecuted(object sender, ExecutedRoutedEventArgs e) { MessageBox.Show("Inside BackspaceExecuted Method."); e.Handled = true; } #endregion Command Handlers } } Any help would be very much appreciated. Thanks! Andrew
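
    A likely explanation (my reading of the symptoms, not part of the original question) is that the TextBox handles the Backspace key itself as part of its editing behaviour, so the key event is consumed before a Window-level InputBinding sees it, whereas Enter is not swallowed in the same way. One way to observe the gesture without disturbing normal editing is to watch the tunnelling PreviewKeyDown event on the TextBox and leave e.Handled untouched; this is only a sketch, and the handler name is made up.

        public Window1()
        {
            InitializeComponent();
            // The command binding stays on the window, as in the original code
            CommandBindings.Add(new CommandBinding(BackspaceCommand, BackspaceExecuted, BackspaceCanExecute));
            textBox1.PreviewKeyDown += TextBox1_PreviewKeyDown;
        }

        private void TextBox1_PreviewKeyDown(object sender, KeyEventArgs e)
        {
            if (e.Key == Key.Back && BackspaceCommand.CanExecute(null, textBox1))
            {
                BackspaceCommand.Execute(null, textBox1);
                // e.Handled is deliberately left false, so the TextBox still performs
                // its normal Backspace editing afterwards.
            }
        }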

    Read the article

  • Local and global variables in JavaScript

    - by caramel1991
    Today I started to code a page that prompts the user to choose their PC spec, and the code is as follows: <html> <title>Computer Specification Chooser</title> <head> <script type="text/javascript"> var compSpec = document.compChooser; function processorUnavailable_onclick() { alert("Sorry that processor speed is currently unavailable"); compSpec.processor[2].checked = true; } </script> </head> <body> <form name="compChooser"> <p>Tick all components you wan included on your computer</p> <p> DVD-ROM <input type="checkbox" name="chkDVD" value="DVD-ROM" /> <br /> CD-ROM <input type="checkbox" name="chkCD" value="CD-ROM" /> <br /> Zip Drive <input type="checkbox" name="chkZIP" value="ZIP DRIVE" /> </p> <p> Select the processor speed you require <br /> <input type="radio" name="processor" value="3.8" /> 3.8 GHZ <input type="radio" name="processor" value="4.8" onclick="processorUnavailable_onclick()" /> 4.8 GHZ <input type="radio" name="processor" value="6" /> 6 GHZ </p> <input type="button" name="btnCheck" value="Check Form" /> </form> </body> </html> The problem I'm facing is with the function I've tied to the onclick event handler. When I choose the 4.8 GHZ processor radio button, it does alert me with the message inside the function, but after that it does not execute the next statement inside the function, which is supposed to check the 6 GHZ option. I've tried changing and testing it, and found that when I make var compSpec = document.compChooser a local variable inside the function instead of a global variable, the next statement does execute. But I thought a global variable was accessible everywhere on the page, including inside a function. So why can't I access it inside my function now? Any ideas? Besides that, I stumbled upon an article while googling which says that when a global variable is created, it is added to the window object. I'm just curious why this happens, and what the benefits and uses of it are. Thank you.

    Read the article

  • ASA 5505 stops local internet when connected to VPN

    - by g18c
    Hi I have a Cisco ASA router running firmware 8.2(5) which hosts an internal LAN on 192.168.30.0/24. I have used the VPN Wizard to setup L2TP access and I can connect in fine from a Windows box and can ping hosts behind the VPN router. However, when connected to the VPN I can no longer ping out to my internet or browse web pages. I would like to be able to access the VPN, and also browse the internet at the same time - I understand this is called split tunneling (have ticked the setting in the wizard but to no effect) and if so how do I do this? Alternatively, if split tunneling is a pain to setup, then making the connected VPN client have internet access from the ASA WAN IP would be OK. Thanks, Chris names ! interface Ethernet0/0 switchport access vlan 2 ! interface Ethernet0/1 ! interface Vlan1 nameif inside security-level 100 ip address 192.168.30.1 255.255.255.0 ! interface Vlan2 nameif outside security-level 0 ip address 208.74.158.58 255.255.255.252 ! ftp mode passive access-list inside_nat0_outbound extended permit ip any 10.10.10.0 255.255.255.128 access-list inside_nat0_outbound extended permit ip 192.168.30.0 255.255.255.0 192.168.30.192 255.255.255.192 access-list DefaultRAGroup_splitTunnelAcl standard permit 192.168.30.0 255.255.255.0 access-list DefaultRAGroup_splitTunnelAcl_1 standard permit 192.168.30.0 255.255.255.0 pager lines 24 logging asdm informational mtu inside 1500 mtu outside 1500 ip local pool LANVPNPOOL 192.168.30.220-192.168.30.249 mask 255.255.255.0 icmp unreachable rate-limit 1 burst-size 1 no asdm history enable arp timeout 14400 global (outside) 1 interface nat (inside) 0 access-list inside_nat0_outbound nat (inside) 1 192.168.30.0 255.255.255.0 route outside 0.0.0.0 0.0.0.0 208.74.158.57 1 timeout xlate 3:00:00 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02 timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00 timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00 timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute timeout tcp-proxy-reassembly 0:01:00 timeout floating-conn 0:00:00 dynamic-access-policy-record DfltAccessPolicy http server enable http 192.168.30.0 255.255.255.0 inside snmp-server enable traps snmp authentication linkup linkdown coldstart crypto ipsec transform-set ESP-AES-256-MD5 esp-aes-256 esp-md5-hmac crypto ipsec transform-set ESP-DES-SHA esp-des esp-sha-hmac crypto ipsec transform-set ESP-3DES-SHA esp-3des esp-sha-hmac crypto ipsec transform-set ESP-DES-MD5 esp-des esp-md5-hmac crypto ipsec transform-set ESP-AES-192-MD5 esp-aes-192 esp-md5-hmac crypto ipsec transform-set ESP-3DES-MD5 esp-3des esp-md5-hmac crypto ipsec transform-set ESP-AES-256-SHA esp-aes-256 esp-sha-hmac crypto ipsec transform-set ESP-AES-128-SHA esp-aes esp-sha-hmac crypto ipsec transform-set ESP-AES-192-SHA esp-aes-192 esp-sha-hmac crypto ipsec transform-set ESP-AES-128-MD5 esp-aes esp-md5-hmac crypto ipsec transform-set TRANS_ESP_3DES_SHA esp-3des esp-sha-hmac crypto ipsec transform-set TRANS_ESP_3DES_SHA mode transport crypto ipsec security-association lifetime seconds 28800 crypto ipsec security-association lifetime kilobytes 4608000 crypto dynamic-map SYSTEM_DEFAULT_CRYPTO_MAP 65535 set transform-set ESP-AES-128-SHA ESP-AES-128-MD5 ESP-AES-192-SHA ESP-AES-192-MD5 ESP-AES-256-SHA ESP-AES-256-MD5 ESP-3DES-SHA ESP-3DES-MD5 ESP-DES-SHA ESP-DES-MD5 TRANS_ESP_3DES_SHA crypto map outside_map 65535 ipsec-isakmp dynamic SYSTEM_DEFAULT_CRYPTO_MAP crypto map outside_map interface outside crypto 
isakmp enable outside crypto isakmp policy 10 authentication pre-share encryption 3des hash sha group 2 lifetime 86400 telnet timeout 5 ssh timeout 5 console timeout 0 dhcpd auto_config outside ! threat-detection basic-threat threat-detection statistics access-list no threat-detection statistics tcp-intercept webvpn group-policy DefaultRAGroup internal group-policy DefaultRAGroup attributes dns-server value 192.168.30.3 vpn-tunnel-protocol l2tp-ipsec split-tunnel-policy tunnelspecified split-tunnel-network-list value DefaultRAGroup_splitTunnelAcl_1 username user password Cj7W5X7wERleAewO8ENYtg== nt-encrypted privilege 0 tunnel-group DefaultRAGroup general-attributes address-pool LANVPNPOOL default-group-policy DefaultRAGroup tunnel-group DefaultRAGroup ipsec-attributes pre-shared-key ***** tunnel-group DefaultRAGroup ppp-attributes no authentication chap authentication ms-chap-v2 ! class-map inspection_default match default-inspection-traffic ! ! policy-map type inspect dns preset_dns_map parameters message-length maximum client auto message-length maximum 512 policy-map global_policy class inspection_default inspect dns preset_dns_map inspect ftp inspect h323 h225 inspect h323 ras inspect rsh inspect rtsp inspect esmtp inspect sqlnet inspect skinny inspect sunrpc inspect xdmcp inspect sip inspect netbios inspect tftp inspect ip-options ! service-policy global_policy global prompt hostname context : end

    Read the article

  • Calculix data visualizer using Qt

    - by Ann
    include "final1.h" include "ui_final1.h" include include include ifndef GL_MULTISAMPLE define GL_MULTISAMPLE 0x809D endif define numred 100 define numgrn 10 define numblu 6 final1::final1(QWidget *parent) : QGLWidget(parent) { setFormat(QGLFormat(QGL::SampleBuffers)); rotationX = -38.0; rotationY = -58.0; rotationZ = 0.0; scaling = .05; // glPolygonMode(GL_FRONT_AND_BACK,GL_FILL); //createGradient(); createGLObject(); } final1::~final1() { makeCurrent(); glDeleteLists(glObject, 1); } void final1::paintEvent(QPaintEvent * /* event */) { QPainter painter(this); draw(); } void final1::mousePressEvent(QMouseEvent *event) { lastPos = event-pos(); } void final1::mouseMoveEvent(QMouseEvent *event) { GLfloat dx = GLfloat(event-x() - lastPos.x()) / width(); GLfloat dy = GLfloat(event-y() - lastPos.y()) / height(); if (event->buttons() & Qt::LeftButton) { rotationX += 180 * dy; rotationY += 180 * dx; update(); } else if (event->buttons() & Qt::RightButton) { rotationX += 180 * dy; rotationZ += 180 * dx; update(); } lastPos = event->pos(); } void final1::createGLObject() { makeCurrent(); GLfloat f1[150],f2[150],f3[150],length=0; qreal size=2; int k=1,a,b,c,d,e,f,g,h,element_node_no=0; GLfloat x,y,z; QString str1,str2,str3,str4,str5,str6,str7,str8; int red,green,blue,index=1,displacement; int LUT[1000][3]; for(red=100;red glShadeModel(GL_SMOOTH); glObject = glGenLists(1); glNewList(glObject, GL_COMPILE); // qglColor(QColor(255, 239, 191)); glLineWidth(1.0); QLinearGradient linearGradient(0, 0, 100, 100); linearGradient.setColorAt(0.0, Qt::red); linearGradient.setColorAt(0.2, Qt::green); linearGradient.setColorAt(1.0, Qt::black); //renderArea->setBrush(linearGradient); //glColor3f(1,0,0);pow((f1[e]-f1[a]),2) QFile file("/home/41407/input1.txt"); if (!file.open(QIODevice::ReadOnly | QIODevice::Text)) return; QTextStream in(&file); while (!in.atEnd()) { QString line = in.readLine(); if(k<=125) { str1= line.section(',', 1, 1); str2=line.section(',', 2, 2); str3=line.section(',', 3, 3); x=str1.toFloat(); y=str2.toFloat(); z=str3.toFloat(); f1[k]=x; f2[k]=y; f3[k]=z; /* glBegin(GL_TRIANGLES); // glColor3f(LUT[k][0],LUT[k][1],LUT[k][2]); //QColorAt();//setPointSize(size); glVertex3f(x,y,z); glEnd();*/ } else if(k>125) { element_node_no=0; qCount(line.begin(),line.end(),',',element_node_no); // printf("\n%d",element_node_no); str1= line.section(',', 1, 1); str2=line.section(',', 2, 2); str3=line.section(',', 3, 3); str4= line.section(',', 4, 4); str5=line.section(',', 5, 5); str6=line.section(',', 6, 6); str7= line.section(',', 7, 7); str8=line.section(',', 8, 8); a=str1.toInt(); b=str2.toInt(); c=str3.toInt(); d=str4.toInt(); e=str5.toInt(); f=str6.toInt(); g=str7.toInt(); h=str8.toInt(); glBegin(GL_POLYGON); glPolygonMode(GL_FRONT_AND_BACK,GL_FILL); //brush.setColor(Qt::black);//setColor(QColor::black()); glColor3f(LUT[k][0],LUT[k][1],LUT[k++][2]); // pmp.setBrush(gradient); glVertex3f(f1[a],f2[a] ,f3[a]); glColor3f(LUT[k][0],LUT[k][1],LUT[k++][2]); glVertex3f(f1[b],f2[b] ,f3[b]); glColor3f(LUT[k][0],LUT[k][1],LUT[k++][2]); glVertex3f(f1[c],f2[c] ,f3[c]); glColor3f(LUT[k][0],LUT[k][1],LUT[k++][2]); glVertex3f(f1[d],f2[d] ,f3[d]); glColor3f(LUT[k][0],LUT[k][1],LUT[k++][2]); glVertex3f(f1[a],f2[a] ,f3[a]); glColor3f(LUT[k][0],LUT[k][1],LUT[k++][2]); //glEnd(); //glBegin(GL_LINE_LOOP); glVertex3f(f1[e],f2[e] ,f3[e]); glColor3f(LUT[k][0],LUT[k][1],LUT[k++][2]); glVertex3f(f1[f],f2[f] ,f3[f]); glColor3f(LUT[k][0],LUT[k][1],LUT[k++][2]); glVertex3f(f1[g],f2[g], f3[g]); 
glColor3f(LUT[k][0],LUT[k][1],LUT[k++][2]); glVertex3f(f1[h],f2[h], f3[h]); glColor3f(LUT[k][0],LUT[k][1],LUT[k++][2]); glVertex3f(f1[d],f2[d] ,f3[d]); glColor3f(LUT[k][0],LUT[k][1],LUT[k++][2]); glVertex3f(f1[a],f2[a] ,f3[a]); glColor3f(LUT[k][0],LUT[k][1],LUT[k++][2]); glEnd(); glBegin(GL_POLYGON); //glVertex3f(f1[a],f2[a] ,f3[a]); glVertex3f(f1[e],f2[e] ,f3[e]); glColor3f(LUT[k][0],LUT[k][1],LUT[k++][2]); glVertex3f(f1[h],f2[h], f3[h]); glColor3f(LUT[k][0],LUT[k][1],LUT[k++][2]); //glVertex3f(f1[d],f2[d] ,f3[d]); glVertex3f(f1[g],f2[g], f3[g]); glColor3f(LUT[k][0],LUT[k][1],LUT[k++][2]); glVertex3f(f1[c],f2[c] ,f3[c]); glColor3f(LUT[k][0],LUT[k][1],LUT[k++][2]); glVertex3f(f1[f],f2[f] ,f3[f]); glColor3f(LUT[k][0],LUT[k][1],LUT[k++][2]); glVertex3f(f1[b],f2[b] ,f3[b]); glColor3f(LUT[k][0],LUT[k][1],LUT[k++][2]); glEnd(); /*length=sqrt(pow((f1[e]-f1[a]),2)+pow((f2[e]-f2[a]),2)+pow((f3[e]-f3[a]),2)); printf("\n%d",length);*/ } k++; } glEndList(); file.close(); k=1; QFile file1("/home/41407/op.txt"); if (!file1.open(QIODevice::ReadOnly | QIODevice::Text)) return; QTextStream in1(&file1); k=1; while (!in1.atEnd()) { QString line = in1.readLine(); // if(k<=125) { str1= line.section(' ', 1, 1); x=str1.toFloat(); str2=line.section(' ', 2, 2); y=str2.toFloat(); str3=line.section(' ', 3, 3); z=str3.toFloat(); displacement=sqrt(pow( (x-f1[k]),2)+pow((y-f2[k]),2)+pow((z-f3[k]),2)); //printf("\n %d : %d",k,displacement); glBegin(GL_POLYGON); //glColor3f(LUT[displacement][0],LUT[displacement][1],LUT[displacement][2]); glVertex3f(f1[k],f2[k],f3[k]); glEnd(); a1[k]=x+f1[k]; a2[k]=y+f2[k]; a3[k]=z+f3[k]; //printf("\nc: %f %f %f",x,y,z); //printf("\nf: %f %f %f",f1[k],f2[k],f3[k]); //printf("\na: %f %f %f",a1[k],a2[k],a3[k]); } k++; glEndList(); } } void final1::draw() { glPushAttrib(GL_ALL_ATTRIB_BITS); glMatrixMode(GL_PROJECTION); glPushMatrix(); glLoadIdentity(); GLfloat x = 3.0 * GLfloat(width()) / height(); glOrtho(-x, +x, -3.0, +3.0, 4.0, 15.0); glMatrixMode(GL_MODELVIEW); glPushMatrix(); glLoadIdentity(); glTranslatef(0.0, 0.0, -10.0); glScalef(scaling, scaling, scaling); glRotatef(rotationX, 1.0, 0.0, 0.0); glRotatef(rotationY, 0.0, 1.0, 0.0); glRotatef(rotationZ, 0.0, 0.0, 1.0); glEnable(GL_MULTISAMPLE); glCallList(glObject); glMatrixMode(GL_MODELVIEW); glPopMatrix(); glMatrixMode(GL_PROJECTION); glPopMatrix(); glPopAttrib(); } /*uint final1::colorAt(int x) { generateShade(); QPolygonF pts = m_hoverPoints->points(); for (int i=1; i < pts.size(); ++i) { if (pts.at(i-1).x() <= x && pts.at(i).x() >= x) { QLineF l(pts.at(i-1), pts.at(i)); l.setLength(l.length() * ((x - l.x1()) / l.dx())); return m_shade.pixel(qRound(qMin(l.x2(), (qreal(m_shade.width() - 1)))), qRound(qMin(l.y2(), qreal(m_shade.height() - 1)))); } } return 0;*/ //final1:: //} /*void final1::createGLObject() { makeCurrent(); //QPainter painter; QPixmap pm(20, 20); QPainter pmp(&pm); pmp.fillRect(0, 0, 10, 10, Qt::blue); pmp.fillRect(10, 10, 10, 10, Qt::lightGray); pmp.fillRect(0, 10, 10, 10, Qt::darkGray); pmp.fillRect(10, 0, 10, 10, Qt::darkGray); pmp.end(); QPalette pal = palette(); pal.setBrush(backgroundRole(), QBrush(pm)); //setAutoFillBackground(true); setPalette(pal); //GLfloat f1[150],f2[150],f3[150],a1[150],a2[150],a3[150]; int k=1,a,b,c,d,e,f,g,h; //int p=0; GLfloat x,y,z; int displacement; QString str1,str2,str3,str4,str5,str6,str7,str8; int red,green,blue,index=1; int LUT[8000][3]; for(red=0;red //glShadeModel(GL_LINE); glObject = glGenLists(1); glNewList(glObject, GL_COMPILE); //qglColor(QColor(120,255,210)); 
glLineWidth(1.0); //glColor3f(1,0,0); QFile file("/home/41407/input.txt"); if (!file.open(QIODevice::ReadOnly | QIODevice::Text)) return; QTextStream in(&file); while (!in.atEnd()) { //glColor3f(LUT[k][0],LUT[k][1],LUT[k][2]); QString line = in.readLine(); if(k<=125) { //printf("\nline :%c",line); str1= line.section(',', 1, 1); str2=line.section(',', 2, 2); str3=line.section(',', 3, 3); x=str1.toFloat(); y=str2.toFloat(); z=str3.toFloat(); f1[k]=x; f2[k]=y; f3[k]=z; //printf("\nf: %f %f %f",f1[k],f2[k],f3[k]); } else if(k125) //for(p=0;p<6;p++) { //glColor3f(LUT[k][0],LUT[k][1],LUT[k][2]); update(); str1= line.section(',', 1, 1); str2=line.section(',', 2, 2); str3=line.section(',', 3, 3); str4= line.section(',', 4, 4); str5=line.section(',', 5, 5); str6=line.section(',', 6, 6); str7= line.section(',', 7, 7); str8=line.section(',', 8, 8); a=str1.toInt(); b=str2.toInt(); c=str3.toInt(); d=str4.toInt(); e=str5.toInt(); f=str6.toInt(); g=str7.toInt(); h=str8.toInt(); //for (p = 0; p < 6; p++) { // glBegin(GL_LINE_WIDTH); //glColor3f(LUT[126][0],LUT[126][1],LUT[126][2]); //update(); //glNormal3fv(&n[p][0]); //glVertex3f(f1[i],f2[i],f3[i]); glVertex3fv(&v[faces[i][1]][0]); glVertex3fv(&v[faces[i][2]][0]); glVertex3fv(&v[faces[i][3]][0]); //glEnd(); //} glBegin(GL_LINE_LOOP); //glColor3f(p*20,p*20,p); glColor3f(1,0,0); glVertex3f(f1[a],f2[a] ,f3[a]); //painter.fillRect(QRectF(f1[a],f2[a] ,f3[a], 2), Qt::magenta); glVertex3f(f1[b],f2[b] ,f3[b]); glVertex3f(f1[c],f2[c] ,f3[c]); glVertex3f(f1[d],f2[d] ,f3[d]); glVertex3f(f1[a],f2[a] ,f3[a]); glVertex3f(f1[e],f2[e] ,f3[e]); glVertex3f(f1[f],f2[f] ,f3[f]); glVertex3f(f1[g],f2[g], f3[g]); glVertex3f(f1[h],f2[h], f3[h]); glVertex3f(f1[d],f2[d] ,f3[d]); glVertex3f(f1[a],f2[a] ,f3[a]); //glColor3f(1,0,0); //QLinearGradient ( f1[a], f2[a], f1[b], f2[b] ); glEnd(); glBegin(GL_LINES); //glNormal3fv(&n[p][0]); //glColor3f(LUT[k][0],LUT[k][1],LUT[k][2]); glVertex3f(f1[e],f2[e] ,f3[e]); glVertex3f(f1[h],f2[h], f3[h]); glVertex3f(f1[g],f2[g], f3[g]); glVertex3f(f1[c],f2[c] ,f3[c]); glVertex3f(f1[f],f2[f] ,f3[f]); glVertex3f(f1[b],f2[b] ,f3[b]); glEnd(); } } k++; } glEndList(); qglColor(QColor(239, 255, 191)); glLineWidth(1.0); glColor3f(0,1,0); k=1; QFile file1("/home/41407/op.txt"); if (!file1.open(QIODevice::ReadOnly | QIODevice::Text)) return; QTextStream in1(&file1); k=1; while (!in1.atEnd()) { QString line = in1.readLine(); // if(k<=125) { str1= line.section(' ', 1, 1); x=str1.toFloat(); str2=line.section(' ', 2, 2); y=str2.toFloat(); str3=line.section(' ', 3, 3); z=str3.toFloat(); displacement=sqrt(pow( (x-f1[k]),2)+pow((y-f2[k]),2)+pow((z-f3[k]),2)); printf("\n %d : %d",k,displacement); glBegin(GL_POINT); glColor3f(LUT[displacement][0],LUT[displacement][1],LUT[displacement][2]); glVertex3f(x,y,z); glLoadIdentity(); glEnd(); a1[k]=x+f1[k]; a2[k]=y+f2[k]; a3[k]=z+f3[k]; //printf("\nc: %f %f %f",x,y,z); //printf("\nf: %f %f %f",f1[k],f2[k],f3[k]); //printf("\na: %f %f %f",a1[k],a2[k],a3[k]); } k++; glEndList(); } }*/ /*void final1::draw() { glPushAttrib(GL_ALL_ATTRIB_BITS); glMatrixMode(GL_PROJECTION); glPushMatrix(); glLoadIdentity(); GLfloat x = 3.0 * GLfloat(width()) / height(); glOrtho(-x, +x, -3.0, +3.0, 4.0, 15.0); glMatrixMode(GL_MODELVIEW); glPushMatrix(); glLoadIdentity(); glTranslatef(0.0, 0.0, -10.0); glScalef(scaling, scaling, scaling); glRotatef(rotationX, 1.0, 0.0, 0.0); glRotatef(rotationY, 0.0, 1.0, 0.0); glRotatef(rotationZ, 0.0, 0.0, 1.0); glEnable(GL_MULTISAMPLE); glCallList(glObject); glMatrixMode(GL_MODELVIEW); glPopMatrix(); 
glMatrixMode(GL_PROJECTION); glPopMatrix(); glPopAttrib(); }*/ I need to change the color of the portion of the beam where pressure is applied, but I am not able to color the front and back faces.

    Read the article

  • Exploring packages in code

    In my previous post Searching for tasks with code you can see how to explore the control flow side of packages, drilling down through containers, tasks, and event handlers, but it didn't cover the data flow. I recently saw a post on the MSDN forum asking how to edit an existing package programmatically, and the sticking point was how to find the data flow and the components inside. This post builds on some of the previous code and shows how you can explore all objects inside a package. I took the sample Task Search application I'd written previously, and came up with a totally pointless little console application that just walks through the package and writes out the basic type and name of every object it finds, starting with the package itself, e.g. Package – MyPackage. The sample package we used last time showed nested objects as well as an event handler; an OnPreExecute event tucked away on the task SQL In FEL. The output of this sample tool would look like this: PackageObjects v1.0.0.0 (1.0.0.26627) Copyright (C) 2009 Konesans Ltd Processing File - Z:\Users\Darren Green\Documents\Visual Studio 2005\Projects\SSISTestProject\EventsAndContainersWithExecSQLForSearch.dtsx Package - EventsAndContainersWithExecSQLForSearch For Loop - FOR Counter Loop Task - SQL In Counter Loop Sequence Container - SEQ For Each Loop Wrapper For Each Loop - FEL Simple Loop Task - SQL In FEL Task - SQL On Pre Execute for FEL SQL Task Sequence Container - SEQ Top Level Sequence Container - SEQ Nested Lvl 1 Sequence Container - SEQ Nested Lvl 2 Task - SQL In Nested Lvl 2 Task - SQL In Nested Lvl 1 #1 Task - SQL In Nested Lvl 1 #2 Connection Manager – LocalHost The code is very similar to what we had previously, but there are a couple of extra bits to deal with connections and to look more closely at a task and see if it is a Data Flow task. For connections you just examine the package's Connections collection as shown in the abridged snippets below. First you can see the call to the ProcessConnections method, followed by the method itself. // Load the package file Application application = new Application(); using (Package package = application.LoadPackage(filename, null)) { // Write out the package name Console.WriteLine("Package - {0}", package.Name); ... More ... // Look at the connections ProcessConnections(package.Connections); } private static void ProcessConnections(Connections connections) { foreach (ConnectionManager connectionManager in connections) { Console.WriteLine("Connection Manager - {0}", connectionManager.Name); } } What we didn't see in the sample output above was anything to do with the Data Flow, but rest assured the code now handles it too. The following snippet shows how each task is examined to see if it is a Data Flow task, and if so we can then loop through all of the components inside the data flow. private static void ProcessTaskHost(TaskHost taskHost) { if (taskHost == null) { return; } Console.WriteLine("Task - {0}", taskHost.Name); // Check if the task is a Data Flow task MainPipe pipeline = taskHost.InnerObject as MainPipe; if (pipeline != null) { ProcessPipeline(pipeline); } } private static void ProcessPipeline(MainPipe pipeline) { foreach (IDTSComponentMetaData90 componentMetadata in pipeline.ComponentMetaDataCollection) { Console.WriteLine("Pipeline Component - {0}", componentMetadata.Name); // If you wish to make changes to the component then you should really use the managed wrapper. // CManagedComponentWrapper wrapper = componentMetadata.Instantiate(); // wrapper.SetComponentProperty("PropertyName", "Value"); } } Hopefully you can see how we get a reference to the Data Flow task, and then use the ComponentMetaDataCollection to find out what components we have inside the pipeline. If you wanted to know more about the component you could look at the ObjectType or ComponentClassID properties. After that it gets a bit harder and you should get a reference to the wrapper object as the comment suggests and start using the properties, just like you would in the create packages samples; see our Code Development category for some of these examples. Download Sample code project PackageObjects.zip (5KB)
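
    As a small extension of the ProcessPipeline snippet above (my own addition, following the same pattern rather than code from the original post), you can also report the ComponentClassID mentioned in the text and walk each component's input and output collections:

        private static void ProcessPipelineDetail(MainPipe pipeline)
        {
            foreach (IDTSComponentMetaData90 component in pipeline.ComponentMetaDataCollection)
            {
                // ComponentClassID identifies which kind of component this is
                Console.WriteLine("Pipeline Component - {0} ({1})", component.Name, component.ComponentClassID);
                foreach (IDTSInput90 input in component.InputCollection)
                {
                    Console.WriteLine("  Input - {0}", input.Name);
                }
                foreach (IDTSOutput90 output in component.OutputCollection)
                {
                    Console.WriteLine("  Output - {0}", output.Name);
                }
            }
        }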

    Read the article

  • Anatomy of a serialization killer

    - by Brian Donahue
    As I had mentioned last month, I have been working on a project to create an easy-to-use managed debugger. It's still an internal tool that we use at Red Gate as part of product support to analyze application errors on customers' computers, and as such, it should be easy to use and not require installation. Since the project has got rather large and important, I decided to use SmartAssembly to protect all of my hard work. This was trivial for the most part, but the obfuscation broke the loading and saving of XML results, rendering them basically useless, although the merging and error reporting was an absolute godsend and definitely worth the price of admission. (Well, I get my Red Gate licenses for free, but you know what I mean!)

    My initial reaction was to simply exclude the serializable results class and all of its members from obfuscation, and that was just dandy, but a few weeks on I decided to look into exactly why serialization had broken and change the code to work with SA, so that any new code I write is compatible with SmartAssembly, saving me some additional testing and changes to the SA project.

    In simple terms, SA does all that it can to prevent serialization problems: for instance, it will not obfuscate public members of a DLL, and it will exclude any types with the Serializable attribute from obfuscation. This prevents public members and properties from being made private and having their names changed. If the serialization is done inside the executable, however, public members have their access changed to private and are renamed. That was my first problem, because my types were in the executable assembly and implemented ISerializable, but did not have the Serializable attribute set on them!

        public class RedFlagResults : ISerializable
        {
        }

    The second problem was caused by the pruning feature. Although RedFlagResults had public members, they were not true properties, and the GetObjectData() method of ISerializable was used to serialize the members. For that reason, SA could not exclude these members from pruning, which further broke the serialization.

        public class RedFlagResults : ISerializable
        {
            public List<RedFlag.Exception> Exceptions;

            #region ISerializable Members

            public void GetObjectData(SerializationInfo info, StreamingContext context)
            {
                info.AddValue("Exceptions", Exceptions);
            }

            #endregion
        }

    So to fix this, it was necessary to make Exceptions a proper property by implementing get and set on it. I also added the Serializable attribute so that I don't have to exclude the class from obfuscation in the SA project any more, and the DoNotPrune attribute means I do not need to exclude the class from pruning.

        [Serializable, SmartAssembly.Attributes.DoNotPrune]
        public class RedFlagResults
        {
            public List<RedFlag.Exception> Exceptions { get; set; }
        }

    Similarly, the Exception class gets the Serializable and DoNotPrune attributes applied so all of its properties are excluded from obfuscation. Now my project has some protection from prying eyes by scrambling up the code so it's harder to reverse-engineer, without breaking anything. SmartAssembly has also provided the benefit of merging, so the end-user doesn't need to extract all of the DLL files needed by RedFlag into a directory, and the tool can be run directly from the .zip archive. When an error occurs (hey, I'm only human!), an exception report can be sent to me so I can see what went wrong without having to, er, debug the debugger.
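
    As a quick sanity check (my own harness, not part of RedFlag, and the choice of BinaryFormatter is just for the sketch; the article doesn't say which formatter RedFlag actually uses), the reworked class can be round-tripped like this:

        var results = new RedFlagResults { Exceptions = new List<RedFlag.Exception>() };
        var formatter = new System.Runtime.Serialization.Formatters.Binary.BinaryFormatter();
        using (var stream = new System.IO.MemoryStream())
        {
            formatter.Serialize(stream, results);
            stream.Position = 0;
            // If pruning or renaming had broken the type, this cast or the member access would fail
            var copy = (RedFlagResults)formatter.Deserialize(stream);
            Console.WriteLine(copy.Exceptions.Count);
        }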

    Read the article

  • SQL Server source control from Visual Studio

    - by David Atkinson
    Developers have long since had to context switch between two IDEs, Visual Studio for application code development and SQL Server Management Studio for database development. While this is accepted, especially given the richness of the database development feature set in SSMS, loading a separate tool can seem a little overkill. This is where SQL Connect comes in. This is an add-in to Visual Studio that provides a connected development experience for the SQL Server developer. Connected database development involves modifying a development sandbox database, as opposed to offline development, where SQL text files are modified independently of the database. One of the main complaints of Data Dude (VS DBPro) is that it enforces the offline approach. This gripe is what SQL Connect addresses. If you don't already use SQL Source Control, you can get up and running with SQL Connect by adding a new project to your Visual Studio solution as follows: Then choose your existing development database and you're ready to go. If you already use SQL Source Control, you will need to link SQL Connect to your existing database scripts folder repository, so SQL Connect and SQL Source Control can be used collaboratively (note that SQL Source Control v.3.0.9.18 or later is required). Locate the repository (this can be found in the Setup tab in SQL Source Control). .and create a working folder for it (here I'm using TortoiseSVN). Back in Visual Studio, locate the SQL Connect panel (in the View menu if it hasn't auto loaded) and select Import SQL Source Control project Locate your working folder and click Import. This creates a Red Gate database project under your solution: From here you can modify your development database, and manage your changes in source control. To associate your development database with the project, right click on the project node, select Properties, set the database and Save. Now you're ready to make some changes. Locate the object you'd like to modify in the Solution Explorer, and double click it to invoke a query window or table designer. You also have the option to edit the creation SQL directly using Edit SQL File in Project. Keeping the development database and Visual Studio project in sync is as easy as clicking on a button. One you've made your change, you can use whichever mechanism you choose to commit to source control. Here I'm using the free open-source AnkhSVN to integrate Subversion with Visual Studio. Maintaining your database in a Visual Studio solution means that you can commit database changes and application code changes in the same changeset. This is desirable if you have continuous integration set up as you want to ensure that all files related to a change are committed atomically, so you avoid an interim "broken build". More discussion on SQL Connect and its benefits can be found in the following article on Simple Talk: No More Disconnected SQL Development in Visual Studio The SQL Connect project team is currently assessing the backlog for the next development effort, and they'd appreciate your feature suggestions, as well as your votes on their suggestions site: http://redgate.uservoice.com/forums/140800-sql-connect-for-visual-studio- A 28-day free trial of SQL Connect is available from the Red Gate website. Technorati Tags: SQL Server

    Read the article

  • Project structure: where to put business logic

    - by Mister Smith
    First of all, I'm not asking where business logic belongs. This has been asked before, and most answers I've read agree that it belongs in the model: Where to put business logic in MVC design? How much business logic should be allowed to exist in the controller layer? How accurate is "Business logic should be in a service, not in a model"? Why put the business logic in the model? What happens when I have multiple types of storage? However, people disagree about the way this logic should be distributed across classes. There seem to be three major currents of thought: a fat model with business logic inside entity classes; an anemic model with business logic in "Service" classes; or "it depends." I find all of them problematic. The first option is what most Fowlerites stick to. The problem with a fat model is that sometimes a business logic function is not related to only one class, and instead uses a bunch of other classes. If, for example, we are developing a web store, there should be a function that calculates an order's total. We could think of putting this function inside the Order class, but what actually happens is that the logic needs to use different classes: not only data contained in the Order class, but also in the User class, the Session class, and maybe the Tax class, Country class, or Giftcard, Payment, etc. Some of these classes could be composed inside the Order class, but some others not. Sorry if the example is not very good, but I hope you understand what I mean. Putting such a function inside the Order class would break the single responsibility principle, adding unnecessary dependencies. The business logic would be scattered across entity classes, making it hard to find. The second option is the one I usually follow, but after many projects I'm still in doubt about how to name the class or classes holding the business logic. In my company we usually develop apps with offline capabilities. The user is able to perform entire transactions offline, so all validation and business rules should be implemented in the client, and then there's usually a background thread that syncs with the server. So we usually have the following classes/packages in every project: the data model (DTOs); the data access layer (persistence); and the web services layer (usually one class per WS, and one method per WS method). Now for the business logic, what is the standard approach? A single class holding all the logic? Multiple classes? (If so, what criteria are used to distribute the logic across them?) And how should we name them? FooManager? FooService? (I know the last one is common, but in our case it is bad naming because the WS layer usually has classes named FooWebService.) The third option is probably the right one, but it is also devoid of any useful info. To sum up: I don't like the first approach, but I accept that I might have been unable to fully understand the Zen of it. So if you advocate for fat models as the only and universal solution, you are welcome to post links explaining how to do it the right way. I'd like to know what the standard design and naming conventions are for the second approach in OO languages: class names and package structure, in particular. It would also be helpful if you could include links to open source projects showing how it is done. Thanks in advance.
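
    To make the second option concrete, here is one possible shape for the web-store example above, sketched in C#. The type and member names (OrderService, ITaxCalculator, and so on) are illustrative only; they are exactly the naming question being asked, not an answer to it.

        using System.Collections.Generic;
        using System.Linq;

        public class OrderLine { public decimal Price; public int Quantity; }
        public class Order { public List<OrderLine> Lines = new List<OrderLine>(); }
        public class User { public string Country; }

        public interface ITaxCalculator { decimal TaxFor(string country, decimal amount); }

        // The cross-entity rule lives in a service instead of inside Order
        public class OrderService
        {
            private readonly ITaxCalculator tax;

            public OrderService(ITaxCalculator tax) { this.tax = tax; }

            public decimal CalculateTotal(Order order, User user)
            {
                decimal subtotal = order.Lines.Sum(line => line.Price * line.Quantity);
                return subtotal + tax.TaxFor(user.Country, subtotal);
            }
        }

    Whether this class ends up called OrderService, OrderManager, or something else is the convention in question; the point of the sketch is only that the calculation depends on more than one entity, which is why it sits outside the Order class.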

    Read the article

  • emo-framework in android move on collision of sprites with physics

    - by KaHeL
    I'm developing my first ever game for Android where I'm still learning about using of framework. To begin I made two sprites of ball where one ball is movable by dragging and another one is just standing on it's place on load. Now I've already added the collision listener for both sprites and as tested it's working properly. Now what I need to learn is on how can I add physics on both sprites where when they collide the standing sprite will move based on the physics and bounce around the screen. It would be best if you teach it to me step by step since I'm a little slow on this. Here's my nut so far: local stage = emo.Stage(); class Okay_1 { sprite = null; spriteok = null; dragStart = false; angle = 0; // Called when the stage is loaded function onLoad() { print("Level_1 is loaded!"); // Create new sprite and load 'f1.png' sprite = emo.Sprite("f1.png"); sprite.moveCenter(stage.getWindowWidth() * 0.5, stage.getWindowHeight() * 0.5); sprite.load(); spriteok = emo.Sprite("okay.png") spriteok.setWidth(100); spriteok.setHeight(100); spriteok.load(); // Check if the coordinate (X=100, Y=100) is inside the sprite if (spriteok.contains(100, 100)) { print("contains!"); } // Does the sprite collides with the other sprite? if (spriteok.collidesWith(sprite)) { print("collides!"); } } function onMotionEvent(ev) { if (ev.getAction() == MOTION_EVENT_ACTION_DOWN) { // Moves the sprite at the position of motion event angle = sprite.getAngle(); sprite.remove(); sprite = emo.Sprite("f2.png"); sprite.load(); sprite.rotate(angle); sprite.moveCenter(ev.getX(), ev.getY()); sprite.rotate(sprite.getAngle()+10); // Check if the coordinate (X=100, Y=100) is inside the sprite if (sprite.contains(sprite.getWidth(), sprite.getHeight())) { print("contains!"); } // Does the sprite collides with the other sprite? if (sprite.collidesWith(spriteok)) { print("collides!"); } dragStart = true; }else if (ev.getAction() == MOTION_EVENT_ACTION_MOVE) { if (dragStart) { // Moves the sprite at the position of motion event sprite.moveCenter(ev.getX(), ev.getY()); sprite.rotate(sprite.getAngle()+10); // Check if the coordinate (X=100, Y=100) is inside the sprite if (sprite.contains(sprite.getWidth(), sprite.getHeight())) { print("contains!"); } // Does the sprite collides with the other sprite? if (sprite.collidesWith(spriteok)) { print("collides!"); } } }else if (ev.getAction() == MOTION_EVENT_ACTION_UP || ev.getAction() == MOTION_EVENT_ACTION_CANCEL) { if (dragStart) { // change block color to red dragStart = false; angle = sprite.getAngle(); sprite.remove(); sprite = emo.Sprite("f1.png"); sprite.load(); sprite.moveCenter(ev.getX(), ev.getY()); sprite.rotate(angle); // Check if the coordinate (X=100, Y=100) is inside the sprite if (sprite.contains(sprite.getWidth(), sprite.getHeight())) { print("contains!"); } // Does the sprite collides with the other sprite? if (sprite.collidesWith(spriteok)) { print("collides!"); } } } } // Called when the stage is disposed function onDispose() { sprite.remove(); // Remove the sprite print("Level_1 is disposed!"); } } function emo::onLoad() { emo.Stage().load(Okay_1()); }

    Read the article

  • Thoughts on my new template language/HTML generator?

    - by Ralph
    I guess I should have pre-faced this with: Yes, I know there is no need for a new templating language, but I want to make a new one anyway, because I'm a fool. That aside, how can I improve my language: Let's start with an example: using "html5" using "extratags" html { head { title "Ordering Notice" jsinclude "jquery.js" } body { h1 "Ordering Notice" p "Dear @name," p "Thanks for placing your order with @company. It's scheduled to ship on {@ship_date|dateformat}." p "Here are the items you've ordered:" table { tr { th "name" th "price" } for(@item in @item_list) { tr { td @item.name td @item.price } } } if(@ordered_warranty) p "Your warranty information will be included in the packaging." p(class="footer") { "Sincerely," br @company } } } The "using" keyword indicates which tags to use. "html5" might include all the html5 standard tags, but your tags names wouldn't have to be based on their HTML counter-parts at all if you didn't want to. The "extratags" library for example might add an extra tag, called "jsinclude" which gets replaced with something like <script type="text/javascript" src="@content"></script> Tags can be optionally be followed by an opening brace. They will automatically be closed at the closing brace. If no brace is used, they will be closed after taking one element. Variables are prefixed with the @ symbol. They may be used inside double-quoted strings. I think I'll use single-quotes to indicate "no variable substitution" like PHP does. Filter functions can be applied to variables like @variable|filter. Arguments can be passed to the filter @variable|filter:@arg1,arg2="y" Attributes can be passed to tags by including them in (), like p(class="classname"). You will also be able to include partial templates like: for(@item in @item_list) include("item_partial", item=@item) Something like that I'm thinking. The first argument will be the name of the template file, and subsequent ones will be named arguments where @item gets the variable name "item" inside that template. I also want to have a collection version like RoR has, so you don't even have to write the loop. Thoughts on this and exact syntax would be helpful :) Some questions: Which symbol should I use to prefix variables? @ (like Razor), $ (like PHP), or something else? Should the @ symbol be necessary in "for" and "if" statements? It's kind of implied that those are variables. Tags and controls (like if,for) presently have the exact same syntax. Should I do something to differentiate the two? If so, what? This would make it more clear that the "tag" isn't behaving like just a normal tag that will get replaced with content, but controls the flow. Also, it would allow name-reuse. Do you like the attribute syntax? (round brackets) How should I do template inheritance/layouts? In Django, the first line of the file has to include the layout file, and then you delimit blocks of code which get stuffed into that layout. In CakePHP, it's kind of backwards, you specify the layout in the controller.view function, the layout gets a special $content_for_layout variable, and then the entire template gets stuffed into that, and you don't need to delimit any blocks of code. I guess Django's is a little more powerful because you can have multiple code blocks, but it makes your templates more verbose... trying to decide what approach to take Filtered variables inside quotes: "xxx {@var|filter} yyy" "xxx @{var|filter} yyy" "xxx @var|filter yyy" i.e, @ inside, @ outside, or no braces at all. 
I think no-braces might cause problems, especially when you try adding arguments, like @var|filter:arg="x", then the quotes would get confused. But perhaps a braceless version could work for when there are no quotes...? Still, which option for braces, first or second? I think the first one might be better because then we're consistent... the @ is always nudged up against the variable. I'll add more questions in a few minutes, once I get some feedback.

    Read the article

  • New spreadsheet accompanying SmartAssembly 6.0 provides statistics for prioritizing bug fixes

    - by Jason Crease
    One problem developers face is how to prioritize the many voices providing input into software bugs. If there is something wrong with a function that is the darling of a particular user, he or she tends to want action - now! The developer's dilemma is how to ascertain that the problem is major or minor, and when it should be addressed. Now there is a new spreadsheet accompanying SmartAssembly that provides exactly that information in an objective manner. This might upset those used to getting their way by being the loudest or pushiest, but ultimately it will ensure that the biggest problems get the priority they deserve. Here's how it works: Feature Usage Reporting (FUR) in SmartAssembly 6.0 provides a wealth of data about how your software is used by its end-users, but in the SmartAssembly UI the data isn't mined to its full extent. The new Excel spreadsheet for FUR extracts statistics from that data and presents them in easy-to-understand forms. I developed the spreadsheet feature in Microsoft Excel, using a fair amount of VBA. The spreadsheet connects directly to the database which stores the feature-usage data, and shows a wide variety of statistics and tables extracted from that data.  You want to know what percentage of users have used the 'Export as XML' button?  No problem.  How popular is v5.3 is compared to v5.1?  There's graphs for that. You need to know whether you have more users in Russia or Brazil? There's a big pie chart for that. I recently witnessed the spreadsheet in use here at Red Gate Software. My bug is exposed as minor While testing new features in .NET Reflector, I found a usability bug in the Refresh button and filed it in the Red Gate bug-tracking system. The bug was labelled "V.NEXT MINOR," which means it would be fixed in the next point release. Although I'm a professional tester, I'm not much different than most software users when they discover a bug that affects them personally: I wanted it fixed immediately. There was an ulterior motive at play here, of course. I would get to see my colleagues put the spreadsheet to work. The Reflector team loaded up the spreadsheet to view the feature-usage statistics that SmartAssembly collected for the refresh button. The resulting statistics showed that only 8% of users have ever pressed the Refresh button, and only 2.6% of sessions involve pressing the button. When Refresh is used, it's only pressed on average 1.6 times a session, with a maximum of 8 times during a session. This was in stark contrast to what I was doing as a conscientious tester: pressing it dozens of times per session. The spreadsheet provides evidence that my bug was a minor one. On to more serious things Based on the solid evidence uncovered by the spreadsheet, the Reflector team concluded that my experience does not represent that of the vast majority of Reflector's recorded users. The Reflector team had ample data to send me back to my desk and keep the bug classified as "V.NEXT MINOR." The team then went back to fixing more serious bugs. If I'm in the shoes of the user, I might not be thoroughly happy, but I cannot deny that the evidence clearly placed me in a very small minority. Next time I'm hoping the spreadsheet will prove that my bug is more important. Find out more about Feature-Usage Reporting here. The spreadsheet is available for free download here.

    Read the article

  • Solaris 11.1: Encrypted Immutable Zones on (ZFS) Shared Storage

    - by darrenm
    Solaris 11 brought both ZFS encryption and the Immutable Zones feature, and I've talked about the combination in the past. Solaris 11.1 adds a fully supported method of storing zones in their own ZFS pool on shared storage, so let's update things a little and put all three parts together. When using an iSCSI (or other supported shared storage) target for a Zone, we can either let the Zones framework set up the ZFS pool or we can do it manually beforehand and tell the Zones framework to use the one we made earlier. To enable encryption we have to take the second path, so that we can set up the pool with encryption before we start to install the zones on it. We start by configuring the zone and specifying a rootzpool resource: # zonecfg -z eizoss Use 'create' to begin configuring a new zone. zonecfg:eizoss> create create: Using system default template 'SYSdefault' zonecfg:eizoss> set zonepath=/zones/eizoss zonecfg:eizoss> set file-mac-profile=fixed-configuration zonecfg:eizoss> add rootzpool zonecfg:eizoss:rootzpool> add storage \ iscsi://zs7120-tvp540-c.uk.oracle.com/luname.naa.600144f09acaacd20000508e64a70001 zonecfg:eizoss:rootzpool> end zonecfg:eizoss> verify zonecfg:eizoss> commit zonecfg:eizoss> Now let's create the pool and specify encryption: # suriadm map \ iscsi://zs7120-tvp540-c.uk.oracle.com/luname.naa.600144f09acaacd20000508e64a70001 PROPERTY VALUE mapped-dev /dev/dsk/c10t600144F09ACAACD20000508E64A70001d0 # echo "zfscrypto" > /zones/p # zpool create -O encryption=on -O keysource=passphrase,file:///zones/p eizoss \ /dev/dsk/c10t600144F09ACAACD20000508E64A70001d0 # zpool export eizoss Note that the keysource above is just for this example; realistically you should probably use an Oracle Key Manager or some other better key storage, but that isn't the purpose of this example. Note, however, that it does need to be one of file://, https:// or pkcs11:, and not prompt, for the key location. Also note that we exported the newly created pool. The name we used here doesn't actually matter because it will get set properly on import anyway. So let's go ahead and do our install: zoneadm -z eizoss install -x force-zpool-import Configured zone storage resource(s) from: iscsi://zs7120-tvp540-c.uk.oracle.com/luname.naa.600144f09acaacd20000508e64a70001 Imported zone zpool: eizoss_rpool Progress being logged to /var/log/zones/zoneadm.20121029T115231Z.eizoss.install Image: Preparing at /zones/eizoss/root. AI Manifest: /tmp/manifest.xml.ujaq54 SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml Zonename: eizoss Installation: Starting ... Creating IPS image Startup linked: 1/1 done Installing packages from: solaris origin: http://pkg.us.oracle.com/solaris/release/ Please review the licenses for the following packages post-install: consolidation/osnet/osnet-incorporation (automatically accepted, not displayed) Package licenses may be viewed using the command: pkg info --license <pkg_fmri> DOWNLOAD PKGS FILES XFER (MB) SPEED Completed 187/187 33575/33575 227.0/227.0 384k/s PHASE ITEMS Installing new actions 47449/47449 Updating package state database Done Updating image state Done Creating fast lookup database Done Installation: Succeeded Note: Man pages can be obtained by installing pkg:/system/manual done. Done: Installation completed in 929.606 seconds. Next Steps: Boot the zone, then log into the zone console (zlogin -C) to complete the configuration process. 
Log saved in non-global zone as /zones/eizoss/root/var/log/zones/zoneadm.20121029T115231Z.eizoss.install That was really all we had to do; when the install is done, boot it up as normal. The zone administrator has no direct access to the ZFS wrapping keys used for the encrypted pool the zone is stored on. Due to how inheritance works in ZFS, he can still create new encrypted datasets that use those wrapping keys (without them ever being inside a process in the zone), or he can create encrypted datasets inside the zone that use keys of his own choosing. The output below shows the two cases: rpool is inheriting the key material from the global zone (note that we can see the value of the keysource property, but we don't use it inside the zone, nor does that path need to be, or is it, accessible inside the zone), whereas rpool/export/home/bob has set keysource locally. # zfs get encryption,keysource rpool rpool/export/home/bob NAME PROPERTY VALUE SOURCE rpool encryption on inherited from $globalzone rpool keysource passphrase,file:///zones/p inherited from $globalzone rpool/export/home/bob encryption on local rpool/export/home/bob keysource passphrase,prompt local
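    As a sketch of that second case (run by the zone administrator inside the zone; the dataset name simply mirrors the example output above, and passphrase,prompt is only one of the possible key sources), creating a locally keyed encrypted dataset might look like this:

        # zlogin eizoss
        # zfs create -o encryption=on -o keysource=passphrase,prompt rpool/export/home/bob
        Enter passphrase for 'rpool/export/home/bob':
        Re-enter passphrase for 'rpool/export/home/bob':

    The passphrase entered here never leaves the zone, which is exactly the separation from the global zone's wrapping keys described above.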

    Read the article

  • Writing a method to 'transform' an immutable object: how should I approach this?

    - by Prog
    (While this question has to do with a concrete coding dilemma, it's mostly about the best way to design a function.) I'm writing a method that should take two Color objects and gradually transform the first Color into the second one, creating an animation. The method will be in a utility class. My problem is that Color is an immutable object. That means that I can't do color.setRGB or color.setBlue inside a loop in the method. What I can do is instantiate a new Color and return it from the method. But then I won't be able to gradually change the color. So I thought of three possible solutions: 1- The client code includes the method call inside a loop. For example: int duration = 1500; // duration of the animation in milliseconds int steps = 20; // how many 'cycles' the animation will take for(int i=0; i<steps; i++) color = transformColor(color, targetColor, duration, steps); And the method would look like this: Color transformColor(Color original, Color target, int duration, int steps){ int redDiff = target.getRed() - original.getRed(); int redAddition = redDiff / steps; int newRed = original.getRed() + redAddition; // same for green and blue .. Thread.sleep(duration / steps); // exception handling omitted return new Color(newRed, newGreen, newBlue); } The disadvantage of this approach is that the client code has to "do part of the method's job" and include a for loop. The method doesn't do its work entirely on its own, which I don't like. 2- Make a mutable Color subclass with methods such as setRed, and pass objects of this class into transformColor. Then it could look something like this: void transformColor(MutableColor original, Color target, int duration){ final int STEPS = 20; int redDiff = target.getRed() - original.getRed(); int redAddition = redDiff / STEPS; int newRed = original.getRed() + redAddition; // same for green and blue .. for(int i=0; i<STEPS; i++){ original.setRed(original.getRed() + redAddition); // same for green and blue .. Thread.sleep(duration / STEPS); // exception handling omitted } } Then the calling code would usually look something like this: // The method will usually transform colors of JComponents JComponent someComponent = ... ; // setting the Color in JComponent to be a MutableColor Color mutableColor = new MutableColor(someComponent.getForeground()); someComponent.setForeground(mutableColor); // later, transforming the Color in the JComponent transformColor((MutableColor)someComponent.getForeground(), new Color(200,100,150), 2000); The disadvantage is the need to create a new class MutableColor, and also the need to do casting. 3- Pass into the method the actual mutable object that holds the color. Then the method could do object.setColor or similar on every iteration of the loop. Two disadvantages: A- Not so elegant. Passing in the object that holds the color just to transform the color feels unnatural. B- While most of the time this method will be used to transform colors inside JComponent objects, other kinds of objects may have colors too. So the method would need to be overloaded to receive other types, or receive Objects and have instanceof checks inside... Not optimal. Right now I think I like solution #2 the most, then solution #1, and solution #3 the least. However, I'd like to hear your opinions and suggestions regarding this.

    Read the article

  • Class Issue (The type XXX is already defined)

    - by Android Stack
    I have a ListView app for exploring cities; each row points to a different city. Each city activity includes a button which, when clicked, opens a new activity containing an infinite gallery of pictures of that city. I added the infinite gallery to the first city and it works fine, but when I try to add it to the second city, Eclipse gives me red-mark errors in the class, as follows: 1- The type InfiniteGalleryAdapter is already defined. 2- The type InfiniteGallery is already defined. I tried changing the class name, with the same result; I deleted R.java and let Eclipse rebuild it, with the same result; I also unchecked the Java builder in the project properties and got the same red-mark errors. Any help and advice will be appreciated. Thanks. My code: public class SecondCity extends Activity { /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); Boolean customTitleSupported = requestWindowFeature(Window.FEATURE_CUSTOM_TITLE); // Set the layout to use setContentView(R.layout.main); if (customTitleSupported) { getWindow().setFeatureInt(Window.FEATURE_CUSTOM_TITLE,R.layout.custom_title); TextView tv = (TextView) findViewById(R.id.tv); Typeface face=Typeface.createFromAsset(getAssets(),"BFantezy.ttf"); tv.setTypeface(face); tv.setText("MY PICTURES"); } InfiniteGallery galleryOne = (InfiniteGallery) findViewById(R.id.galleryOne); galleryOne.setAdapter(new InfiniteGalleryAdapter(this)); } } class InfiniteGalleryAdapter extends BaseAdapter { **//red mark here (InfiniteGalleryAdapter)** private Context mContext; public InfiniteGalleryAdapter(Context c, int[] imageIds) { this.mContext = c; } public int getCount() { return Integer.MAX_VALUE; } public Object getItem(int position) { return position; } public long getItemId(int position) { return position; } private LayoutInflater inflater=null; public InfiniteGalleryAdapter(Context a) { this.mContext = a; inflater = (LayoutInflater)mContext.getSystemService ( Context.LAYOUT_INFLATER_SERVICE); } public class ViewHolder{ public TextView text; public ImageView image; } private int[] images = { R.drawable.pic_1, R.drawable.pic_2, R.drawable.pic_3, R.drawable.pic_4, R.drawable.pic_5 }; private String[] name = { "This is first picture (1) " , "This is second picture (2)", "This is third picture (3)", "This is fourth picture (4)", " This is fifth picture (5)", }; public View getView(int position, View convertView, ViewGroup parent) { ImageView i = getImageView(); int itemPos = (position % images.length); try { i.setImageResource(images[itemPos]); ((BitmapDrawable) i.getDrawable()). setAntiAlias (true); } catch (OutOfMemoryError e) { Log.e("InfiniteGalleryAdapter", "Out of memory creating imageview. 
Using empty view.", e); } View vi=convertView; ViewHolder holder; if(convertView==null){ vi = inflater.inflate(R.layout.gallery_items, null); holder=new ViewHolder(); holder.text=(TextView)vi.findViewById(R.id.textView1); holder.image=(ImageView)vi.findViewById(R.id.image); vi.setTag(holder); } else holder=(ViewHolder)vi.getTag(); holder.text.setText(name[itemPos]); final int stub_id=images[itemPos]; holder.image.setImageResource(stub_id); return vi; } private ImageView getImageView() { ImageView i = new ImageView(mContext); return i; } } class InfiniteGallery extends Gallery { **//red mark here (InfiniteGallery)** public InfiniteGallery(Context context) { super(context); init(); } public InfiniteGallery(Context context, AttributeSet attrs) { super(context, attrs); init(); } public InfiniteGallery(Context context, AttributeSet attrs, int defStyle) { super(context, attrs, defStyle); init(); } private void init(){ // These are just to make it look pretty setSpacing(25); setHorizontalFadingEdgeEnabled(false); } public void setResourceImages(int[] name){ setAdapter(new InfiniteGalleryAdapter(getContext(), name)); setSelection((getCount() / 2)); } }

    Read the article

  • mysql backup and restore from ib_logfile failure

    - by i need help
    Here's the situation: after the computer was hacked, we were in a rush to back up all data to another computer. As a result, the MySQL databases were not backed up as SQL dumps. What we did instead was copy all the physical files/folders on the C drive to the new computer, e.g. C:\Program Files\MySQL\MySQL Server 4.1\data. In this case, all the MySQL data is inside unreadable files. The data folder contains files like ib_logfile0 and ib_logfile1, but no ibdata1. Each database's table structure lives inside its respective folder (some folders have .frm and .opt files; others have .frm, .myd and .myi). How can I get the data back on the new computer? I installed the same MySQL version (4.1) on the new computer, replaced everything in its data folder with the backed-up files, and restarted the MySQL service. The restart fails with: Could not start mysql service on local computer. error 1067: process terminated unexpectedly. The error log shows: InnoDB: The first specified data file .\ibdata1 did not exist: InnoDB: a new database to be created! 090930 10:24:49 InnoDB: Setting file .\ibdata1 size to 10 MB InnoDB: Database physically writes the file full: wait... InnoDB: Error: log file .\ib_logfile0 is of different size 0 87031808 bytes InnoDB: than specified in the .cnf file 0 25165824 bytes! 090930 10:24:49 [ERROR] Can't init databases 090930 10:24:49 [ERROR] Aborting 090930 10:24:49 [Note] C:\Program Files\MySQL\MySQL Server 4.1\bin\mysqld-nt: Shutdown complete
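    To get mysqld past the log-file complaint, the copied ib_logfile sizes and the server settings have to match. A minimal sketch of the relevant my.ini options is below; the 83M figure is an assumption derived from the error message (87031808 bytes / 1048576 = 83 MB), and every value must match whatever the old server actually used, so treat this as illustrative rather than a recipe.

        # my.ini (illustrative) - values must match the OLD server's configuration
        [mysqld]
        # size of each copied ib_logfile (87031808 bytes = 83 MB)
        innodb_log_file_size = 83M
        # the system tablespace the log files expect; without the original ibdata1
        # file this only recreates an empty tablespace
        innodb_data_file_path = ibdata1:10M:autoextend

    Even with matching settings, the missing ibdata1 file means the InnoDB tables themselves are probably not recoverable this way; the databases whose folders contain .frm/.myd/.myi files are MyISAM and should come back simply by copying those folders into the new data directory.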

    Read the article

  • How do I make kdump use a permissible range of memory for the crash kernel?

    - by Philip Durbin
    I've read the Red Hat Knowledgebase article "How do I configure kexec/kdump on Red Hat Enterprise Linux 5?" at http://kbase.redhat.com/faq/docs/DOC-6039 and http://prefetch.net/blog/index.php/2009/07/06/using-kdump-to-get-core-files-on-fedora-and-centos-hosts/ The crashkernel=128M@16M kernel parameter works fine for me in a RHEL 6.0 beta VM, but not on the RHEL 5.5 hosts I've tried. dmesg shows me: Memory for crash kernel (0x0 to 0x0) notwithin permissible range disabling kdump Here's the line from grub.conf: kernel /vmlinuz-2.6.18-194.3.1.el5 ro root=/dev/md2 console=ttyS0,115200 panic=15 rhgb quiet crashkernel=128M@16M How do I make kdump use a permissible range of memory for the crash kernel?
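    For comparison, one grub.conf variant that has helped on some RHEL 5 hosts is sketched below; the size and offset are illustrative, since the reservation the kernel will accept depends on the machine's memory map, so treat it as a starting point for experimentation rather than a known fix.

        # /boot/grub/grub.conf (illustrative) - try a different offset for the reservation
        kernel /vmlinuz-2.6.18-194.3.1.el5 ro root=/dev/md2 console=ttyS0,115200 panic=15 rhgb quiet crashkernel=128M@32M

    After rebooting, grep /proc/iomem for "Crash kernel" (or check dmesg) to confirm a non-zero region was actually reserved before restarting the kdump service.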

    Read the article

  • CentOS vs. Ubuntu

    - by DLH
    I had a web server that ran Ubuntu, but the hard drive failed recently and everything was erased. I decided to try CentOS on the machine instead of Ubuntu, since it's based on Red Hat. That association meant a lot to me because Red Hat is a commercial server product and is officially supported by my server's manufacturer. However, after a few days I'm starting to miss Ubuntu. I have trouble finding some of the packages I want in the CentOS repositories, and the third-party packages I've tried have been a hassle to deal with. My question is, what are the advantages of using CentOS as a server over Ubuntu? CentOS is ostensibly designed for this purpose, but so far I would prefer to use a desktop edition of Ubuntu over CentOS. Are there any killer features of CentOS which make it a better server OS? Is there any reason I shouldn't switch back to Ubuntu Server or Xubuntu?

    Read the article

  • setting rpmforge repository for Linux (RHEL)

    - by Ashish
    Hello, I had a Linux CentOS (5.5) machine on which I had deployed amavisd (with ClamAV and SpamAssassin), following these references: http://wiki.centos.org/HowTos/Amavisd http://wiki.centos.org/PackageManagement/Yum/Priorities Now I have a Linux RHEL machine; the details are as follows: (Linux version 2.6.18-164.6.1.el5 ([email protected]) (gcc version 4.1.2 20080704 (Red Hat 4.1.2-46)) x86_64 GNU/Linux, Red Hat Enterprise Linux Server release 5.5 (Tikanga)) I want to set up the above-mentioned software on this (RHEL) machine. Following the reference link, I try to install yum-priorities, but I am unable to install it on this machine because the default yum repository provided by RHEL doesn't contain it. How can I deploy the above software on my RHEL machine? Please suggest a safe alternative, and please guide me, since I am a newbie in this matter. Thanks in advance, Ashish Sharma
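    For reference, a rough sketch of how RPMforge is usually added to an EL5 x86_64 box follows. The release-RPM version and URLs are assumptions based on the RPMforge documentation of that era (the project later became RepoForge), so verify the current package and key location before running anything like this:

        # illustrative only - check the RPMforge/RepoForge site for the current release RPM
        rpm --import http://apt.sw.be/RPM-GPG-KEY.dag.txt
        rpm -Uvh http://packages.sw.be/rpmforge-release/rpmforge-release-0.5.2-2.el5.rf.x86_64.rpm
        # yum-priorities is not in the base RHEL channel; it can usually be pulled from a
        # third-party repo such as EPEL, or the protectbase plugin can be used instead
        yum install yum-priorities
        yum install amavisd-new clamav spamassassin

    The idea is the same as on CentOS: add the third-party repository first, then keep it from overriding base packages (via priorities or protectbase) before installing amavisd-new and its dependencies.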

    Read the article

  • Outlook DASL Filter - Custom Search

    - by Ryan B
    I'm trying to write a DASL filter that combines three queries: 1. Get all mail with no category and no flag: ("urn:schemas-microsoft-com:office:office#Keywords" IS NULL AND "urn:schemas:httpmail:messageflag" IS NULL). 2. Get all mail that is categorized as "Ryan" and flagged with a red "Today" flag; I don't know how to write this one. 3. Get all mail that is uncategorized and flagged with a red "Today" flag; I don't know how to write this one either. Once I have the individual queries, I will combine them with OR. I am stuck on how to filter on the flag value.

    Read the article

  • iOS5: Confusion with loadview and init and instance variables

    - by user743550
    I'm new to iOS 5 and storyboarding. I noticed that if I declare an instance variable inside my view controller's .h file and set its value inside init in the view controller's .m file, the instance variable shows null inside viewDidLoad when the view controller's view is displayed. In order to get my variables, I'd need to call [self init] inside viewDidLoad. My questions are: @interface tableViewController : UITableViewController { NSMutableArray *myvariable; } @end @implementation tableViewController -(id)init { myvariable = [[NSMutableArray alloc]initWithObjects:@"Hi2",@"Yo2",@"whatsup2", nil]; } - (void)viewDidLoad { NSLog(@"%@",myvariable); // DISPLAYS NULL [super viewDidLoad]; } Why aren't my variables available in viewDidLoad when I declared and set them in my .h and .m files? If that's the case, are viewDidLoad or viewWillAppear the usual places to load the data for the view controller? It looks like even if you instantiate a view controller and its init method gets called, viewDidLoad doesn't necessarily have the variables to display. Where is the right place/method to load the model (data) for my view controller? Thanks in advance.

    Read the article
