Search Results

Search found 12541 results on 502 pages for 'secure the world'.


  • Constructor Overload Problem in C++ Inheritance

    - by metdos
    Here is my code snippet: class Request { public: Request(void); ……….. } Request::Request(void) { qDebug()<<"Request: "<<"Hello World"; } class LoginRequest :public Request { public: LoginRequest(void); LoginRequest(QDomDocument); …………… } LoginRequest::LoginRequest(void) { qDebug()<<"LoginRequest: "<<"Hello World"; requestType=LOGIN; requestId=-1; } LoginRequest::LoginRequest(QDomDocument doc){ qDebug()<<"LoginRequest: "<<"Hello World with QDomDocument"; LoginRequest::LoginRequest(); xmlDoc_=doc; } When I call the constructor of the overloaded LoginRequest, LoginRequest *test=new LoginRequest(doc); I came up with this result: Request: Hello World LoginRequest: Hello World with QDomDocument Request: Hello World LoginRequest: Hello World Obviously both constructors of LoginRequest called the Request constructor. Is there any way to cope with this situation? I can write another function that does the job I want and have both constructors call it, but I wonder whether there is any other solution?
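
    One way out, shown as a minimal sketch assuming C++11 (QDomDocument is replaced by std::string here just to keep the example self-contained): calling LoginRequest::LoginRequest(); inside another constructor does not chain to it, it only constructs an unnamed temporary, which is why the Request base constructor runs twice. A C++11 delegating constructor runs the shared setup exactly once; before C++11, a private init() member called from both constructors achieves the same effect.

      #include <iostream>
      #include <string>

      class Request {
      public:
          Request() { std::cout << "Request: Hello World\n"; }
      };

      class LoginRequest : public Request {
      public:
          LoginRequest() {
              std::cout << "LoginRequest: Hello World\n";
              requestId_ = -1;
          }
          // Delegating constructor: LoginRequest() runs first (and with it the
          // Request base constructor, exactly once), then this body runs.
          explicit LoginRequest(const std::string& doc) : LoginRequest() {
              std::cout << "LoginRequest: Hello World with document\n";
              xmlDoc_ = doc;
          }
      private:
          int requestId_ = 0;
          std::string xmlDoc_;
      };

      int main() {
          LoginRequest test("<xml/>");  // prints the base line once, then both LoginRequest lines
      }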

    Read the article

  • Is it possible to group rows twice in MySQL?

    - by DisgruntledGoat
    I have a table like this: someid somestring 1 Hello 1 World 1 Blah 2 World 2 TestA 2 TestB ... Currently I'm grouping by the id and concatenating the strings, so I end up with this: 1 Hello,World,Blah 2 World,TestA,TestB ... Is it possible to do a second grouping so that if there are multiple entries that end up with the same string, I can group those too?
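
    A minimal sketch, assuming the table is named t with the someid and somestring columns from the example: group per id with GROUP_CONCAT first, then group the resulting rows by the concatenated string in an outer query. The ORDER BY inside GROUP_CONCAT makes the string order-independent, so two ids with the same set of strings collapse into one row.

      SELECT concatenated,
             GROUP_CONCAT(someid) AS ids,
             COUNT(*)             AS how_many
      FROM (
          SELECT someid,
                 GROUP_CONCAT(somestring ORDER BY somestring) AS concatenated
          FROM t
          GROUP BY someid
      ) AS per_id
      GROUP BY concatenated;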

    Read the article

  • Bash script, read values from stdin pipe

    - by gmatt
    I'm trying to get bash to process data from stdin that gets piped in, but no luck; what I mean is none of the following work: echo "hello world" | test=($(< /dev/stdin)); echo test=$test test= echo "hello world" | read test; echo test=$test test= echo "hello world" | test=`cat`; echo test=$test test= where I want the output to be test=hello world. Note I've tried putting "" quotes around "$test"; that doesn't work either.
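
    A minimal sketch of why the pipe versions fail and what does work: in bash every command in a pipeline runs in its own subshell, so a variable set by read (or by an assignment) on the right-hand side of a pipe vanishes when that subshell exits. A here-string or command substitution keeps the assignment in the current shell.

      # Works: a here-string keeps `read` in the current shell
      read -r test <<< "hello world"
      echo "test=$test"                          # test=hello world

      # Works: command substitution captures the output
      test=$(echo "hello world")
      echo "test=$test"                          # test=hello world

      # Works: keep everything that needs the variable inside the same subshell
      echo "hello world" | { read -r t; echo "test=$t"; }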

    Read the article

  • With jQuery, how to pass a dynamic series of data to the server

    - by nobosh
    What is the recommended way in jQuery to send a dynamic set of data to the server? The set contains items like: ID: 13 Copy: hello world....hello world....hello world....hello world.... ID: 122 Copy: Ding dong ...Ding dong ...Ding dong ...Ding dong ...Ding dong ... ID: 11233 Copy: mre moremore ajkdkjdksjkjdskjdskjdskjds This could range from 1 to 10 items. What's the best way to structure that data to post to the server with jQuery? Thanks
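
    A minimal sketch of one common approach (the .copy-block markup, data-id attribute and /save endpoint are made up for illustration): collect the items into an array of objects and POST them as a single JSON payload, so the same request works for 1 item or 10.

      var items = [];
      $('.copy-block').each(function () {        // one block per item (hypothetical markup)
          items.push({
              id:   $(this).data('id'),
              copy: $(this).find('.copy').text()
          });
      });

      $.ajax({
          url: '/save',                          // hypothetical endpoint
          type: 'POST',
          contentType: 'application/json',
          data: JSON.stringify(items),
          success: function (response) {
              console.log('saved', response);
          }
      });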

    Read the article

  • Command to escape a string in bash

    - by User1
    I need a bash command that will convert a string to something that is escaped. Here's an example: echo "hello\world"|escape|someprog Where the escape command makes "hello\world" into "hello\\world". Then, someprog can use "hello\world" as it expects. Of course, this is a simplified example of what I will really be doing.
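
    A minimal sketch of such an escape filter, assuming the only transformation needed is doubling backslashes as in the example; the sed expression can be extended for whatever other characters someprog is sensitive to.

      escape() {
          sed 's/\\/\\\\/g'                      # double every backslash
      }

      echo 'hello\world' | escape                # prints hello\\world
      # echo 'hello\world' | escape | someprog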

    Read the article

  • optimize mod_rewrite in htaccess

    - by clarkk
    I've got some mod_rewrite conditions in a .htaccess file which I have extended from time to time, but I don't think it's very well written (I'm still quite new to mod_rewrite). Sometimes requests end up in infinite loops, and just now I added SSL to the site. When requesting https:// I get a 404 error: The requested URL /_secure/_secure/ was not found on this server. Somehow it adds an extra _secure to the path? .htaccess:
      # set language
      RewriteCond %{HTTP_HOST} ^www\. [NC]
      RewriteCond %{REQUEST_URI} ^/(da|en)/(.*)(\?%{QUERY_STRING})?$ [NC]
      RewriteRule ^(.*)$ /%2?%{QUERY_STRING}&set_lang=%1 [L]
      # put 'www' as subdomain if none is given
      RewriteCond %{HTTP_HOST} ^([^\.]+\.[^\.]+)$ [NC]
      RewriteRule ^(.*)$ http://www.%1/$1 [L,R=301]
      # rewrite subdomain
      RewriteCond %{HTTP_HOST} ^(admin|files)\.[^\.]+\.[^\.]+$ [NC]
      RewriteCond %{REQUEST_URI} !^/_(admin|files)/ [NC]
      RewriteRule ^(.*)$ /_%1/$1 [L]
      # redirect to subdomain
      RewriteCond %{HTTP_HOST} ^www\.([^\.]+\.[^\.]+)$ [NC]
      RewriteRule ^_([^/]+)/ http://$1.%1/ [L,R=301]
      # start SSL on 'secure' subdomain if not started
      RewriteCond %{HTTPS} !=on
      RewriteCond %{HTTP_HOST} ^(secure)\.([^\.]+\.[^\.]+)$ [NC]
      RewriteRule ^(.*)$ https://%1.%2/$1 [L,R=301]
      # rewrite 'secure' subdomain
      RewriteCond %{HTTP_HOST} ^(demo|secure)\.[^\.]+\.[^\.]+$ [NC]
      RewriteCond %{REQUEST_URI} !^/_secure/ [NC]
      RewriteRule ^(.*)$ /_secure/$1 [L]
      # rewrite 'api' subdomain
      RewriteCond %{HTTP_HOST} ^api\.[^\.]+\.[^\.]+$ [NC]
      RewriteCond %{REQUEST_URI} !^/_api/ [NC]
      RewriteRule ^(?:([^/]+)/)?(?:([^/]+)/)?(?:([^/]+)/)?(?:([^/]+)/)?(?:([^/]+)/)?(?:([^/]+)/)? /_api/?%{QUERY_STRING}&v=$1&i=$2&k=$3&a=$4&t=$5&f=$6 [L]
      # redirect non-active subdomain to 'www'
      RewriteCond %{HTTP_HOST} !^(admin|api|demo|files|secure|www)\.([^\.]+\.[^\.]+)$ [NC]
      RewriteRule ^(.*)$ http://www.domain.com [L,R=301]
      # hide file extensions
      RewriteCond %{HTTP_HOST} ^www\. [NC]
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteCond %{REQUEST_FILENAME} !\.php$ [NC]
      RewriteCond %{REQUEST_URI} ^/([^/]*)/(?:([^/]*)/)?(?:([^/]*)/)?$ [NC]
      RewriteRule ^(.*)$ /%1.php?%{QUERY_STRING}&subpage=%2&subsection=%3 [L]
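
    Not a full diagnosis of the loop, but a minimal sketch of a common loop-guard pattern: mod_rewrite re-runs .htaccess rules after every internal rewrite, and Apache sets the REDIRECT_STATUS environment variable on that second pass, so requiring it to be empty keeps a rule such as the _secure one from firing a second time.

      RewriteCond %{HTTP_HOST} ^(demo|secure)\.[^\.]+\.[^\.]+$ [NC]
      RewriteCond %{REQUEST_URI} !^/_secure/ [NC]
      RewriteCond %{ENV:REDIRECT_STATUS} ^$
      RewriteRule ^(.*)$ /_secure/$1 [L]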

    Read the article

  • Apache VirtualHost, multiple sites. 1 ssl with redirect and 1 regular http

    - by pedalpete
    I've got a server with one site which I am redirecting to https via
      <VirtualHost *:80>
      DocumentRoot /var/www/html/secure
      ServerName secure.com
      Redirect / https://secure.com
      </VirtualHost>
    That works, no problem. Now I'm trying to add another non-secure site:
      <VirtualHost *:80>
      DocumentRoot /var/www/html/notsecure
      ServerName notsecure.com
      </VirtualHost>
    Of course, because the redirect is on '/', all sites are getting redirected. I've tried changing the Redirect to the full document root, but no luck.
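
    A minimal sketch of what usually fixes this on Apache 2.2-era configs (hostnames kept from the question, the www aliases are assumed): with name-based hosting, the first <VirtualHost *:80> block is the default catch-all, so without NameVirtualHost *:80 and matching ServerName/ServerAlias entries every request lands in the redirecting block.

      NameVirtualHost *:80

      <VirtualHost *:80>
          ServerName secure.com
          ServerAlias www.secure.com
          DocumentRoot /var/www/html/secure
          Redirect permanent / https://secure.com/
      </VirtualHost>

      <VirtualHost *:80>
          ServerName notsecure.com
          ServerAlias www.notsecure.com
          DocumentRoot /var/www/html/notsecure
      </VirtualHost>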

    Read the article

  • nginx root directory not forwarding correctly

    - by user66700
    The server files are stored in /var/www/. Everything was working perfectly, then I've been getting the following errors: 2011/01/28 17:20:05 [error] 15415#0: *1117703 "/var/www/https:/secure.domain.com/index.html" is not found (2: No such file or directory), client: 119.110.28.211, server: secure.domain.com, request: "HEAD /https://secure.domain.com/ HTTP/1.1", host: "secure.domain.com" Here's my config:
      server {
          server_name secure.domain.com;
          listen 443;
          listen [::]:443 default ipv6only=on;
          gzip on;
          gzip_comp_level 1;
          gzip_types text/plain text/html text/css application/x-javascript text/xml text/javascript;
          error_log logs/ssl.error.log;
          gzip_static on;
          gzip_http_version 1.1;
          gzip_proxied any;
          gzip_disable "msie6";
          gzip_vary on;
          ssl on;
          ssl_ciphers RC4:ALL:-LOW:-EXPORT:!ADH:!MD5;
          keepalive_timeout 0;
          ssl_certificate /root/server.pem;
          ssl_certificate_key /root/ssl.key;
          location / {
              root /var/www;
              index index.html index.htm index.php;
          }
      }

    Read the article

  • Unable to use strong encryption

    - by user224299
    So I am exploring Apache to create a simple example: the default page and a directory "secure". I want everyone to be able to access the server, but when someone wants to access the "secure" directory, I want the connection to use strong encryption. I am using apache2.4. However, this is not working and I don't know why! I have done just like in the Apache tutorial:
      LoadModule ssl_module /usr/lib/apache2/modules/mod_ssl.so
      <VirtualHost *:443>
          SSLEngine on
          SSLCertificateFile /home/vitorpereira/Desktop/cert.cer
          SSLCertificateKeyFile /home/vitorpereira/Desktop/key.key
      </VirtualHost>
      SSLCipherSuite ALL:!ADH:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL
      <Location /var/www/html/secure>
          SSLCipherSuite HIGH:!aNULL:!MD5
      </Location>
    But this does not work. Also, I can access the secure folder with http, but when I use https it says not found!
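
    A couple of things worth checking, shown as a minimal sketch that reuses the paths from the question: <Location> matches the URL path (e.g. /secure), not the filesystem path, and the :443 virtual host needs its own DocumentRoot or https requests can come back "not found". mod_ssl accepts SSLCipherSuite per directory (it renegotiates for that subtree), and SSLRequireSSL in a <Directory> block refuses plain-http access to it.

      <VirtualHost *:443>
          DocumentRoot /var/www/html
          SSLEngine on
          SSLCertificateFile /home/vitorpereira/Desktop/cert.cer
          SSLCertificateKeyFile /home/vitorpereira/Desktop/key.key
          # stricter ciphers only for the secure directory
          <Directory /var/www/html/secure>
              SSLCipherSuite HIGH:!aNULL:!MD5
          </Directory>
      </VirtualHost>

      # refuse plain-http access to the secure directory entirely
      <Directory /var/www/html/secure>
          SSLRequireSSL
      </Directory>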

    Read the article

  • Ubuntu 12.04.2 Dual boot UEFI Windows 8 Preinstalled CX21903W Ultrabook

    - by user180782
    Hi, I have a problem trying to install Ubuntu. The machine is a CX Ultrabook model CX.21903W, Intel i5 with a 500 GB hard disk, 8 GB RAM and a 32 GB SSD. Following "Installing Ubuntu on a Pre-Installed Windows 8 (64-bit) System (UEFI Supported)", and according to its steps: 1 - We created a 70 GB partition from within Win8 using its own partitioning program. 2 - Confirm-SecureBootUEFI=True. 3 - From Win8, Shift + Restart and from the special menu we selected the UEFI Firmware Settings. 4 - From the BIOS options: ------Option 1) Disable Secure Boot. ------Option 2) Disable UEFI (Not Available) From Option 1, three outcomes are possible. With Secure Boot enabled - we can't even boot Ubuntu; a red window appears saying the software is improperly signed. With Secure Boot disabled - and this config in boot device order: ----1: UEFI: USB ----2: Windows Boot Manager ----3: Others and CSM (Compatibility Support Module): enabled - GRUB appears, but after selecting Try Ubuntu a black window appears and nothing happens. The same result if Install Ubuntu is selected. With Secure Boot disabled - and this config in boot device order: ----1: USB (No UEFI) ----2: Windows Boot Manager ----3: Others and CSM (Compatibility Support Module): enabled - GRUB appears, and selecting Try Ubuntu boots Ubuntu and we can even install it. 5 - Rebooting and just changing the boot order to ----1: Ubuntu [] ----2: Windows Boot Manager ----3: Others, then nothing happens. 6 - Booting from the live USB again and, as instructed, running Boot-Repair (a warning window says Ubuntu is working in legacy mode). 7 - Saving changes and rebooting: GRUB works, but selecting Ubuntu a black window appears and nothing happens. Selecting Win8, Win8 boots and works. Until now we can't complete the Ubuntu installation. Any suggestion will be welcome. Kind regards and thanks in advance.

    Read the article

  • exporting bind and keyframe bone poses from blender to use in OpenGL

    - by SaldaVonSchwartz
    I'm having a hard time trying to understand how exactly Blender's concept of bone transforms maps to the usual math of skinning (which I'm implementing in an OpenGL-based engine of sorts). Or I'm missing out something in the math.. It's gonna be long, but here's as much background as I can think of. First, a few notes and assumptions: I'm using column-major order and multiply from right to left. So for instance, vertex v transformed by matrix A and then further transformed by matrix B would be: v' = BAv. This also means whenever I export a matrix from blender through python, I export it (in text format) in 4 lines, each representing a column. This is so I can then I can read them back into my engine like this: if (fscanf(fileHandle, "%f %f %f %f", &skeleton.joints[currentJointIndex].inverseBindTransform.m[0], &skeleton.joints[currentJointIndex].inverseBindTransform.m[1], &skeleton.joints[currentJointIndex].inverseBindTransform.m[2], &skeleton.joints[currentJointIndex].inverseBindTransform.m[3])) { if (fscanf(fileHandle, "%f %f %f %f", &skeleton.joints[currentJointIndex].inverseBindTransform.m[4], &skeleton.joints[currentJointIndex].inverseBindTransform.m[5], &skeleton.joints[currentJointIndex].inverseBindTransform.m[6], &skeleton.joints[currentJointIndex].inverseBindTransform.m[7])) { if (fscanf(fileHandle, "%f %f %f %f", &skeleton.joints[currentJointIndex].inverseBindTransform.m[8], &skeleton.joints[currentJointIndex].inverseBindTransform.m[9], &skeleton.joints[currentJointIndex].inverseBindTransform.m[10], &skeleton.joints[currentJointIndex].inverseBindTransform.m[11])) { if (fscanf(fileHandle, "%f %f %f %f", &skeleton.joints[currentJointIndex].inverseBindTransform.m[12], &skeleton.joints[currentJointIndex].inverseBindTransform.m[13], &skeleton.joints[currentJointIndex].inverseBindTransform.m[14], &skeleton.joints[currentJointIndex].inverseBindTransform.m[15])) { I'm simplifying the code I show because otherwise it would make things unnecessarily harder (in the context of my question) to explain / follow. Please refrain from making remarks related to optimizations. This is not final code. Having said that, if I understand correctly, the basic idea of skinning/animation is: I have a a mesh made up of vertices I have the mesh model-world transform W I have my joints, which are really just transforms from each joint's space to its parent's space. I'll call these transforms Bj meaning matrix which takes from joint j's bind pose to joint j-1's bind pose. For each of these, I actually import their inverse to the engine, Bj^-1. I have keyframes each containing a set of current poses Cj for each joint J. These are initially imported to my engine in TQS format but after (S)LERPING them I compose them into Cj matrices which are equivalent to the Bjs (not the Bj^-1 ones) only that for the current spacial configurations of each joint at that frame. Given the above, the "skeletal animation algorithm is" On each frame: check how much time has elpased and compute the resulting current time in the animation, from 0 meaning frame 0 to 1, meaning the end of the animation. (Oh and I'm looping forever so the time is mod(total duration)) for each joint: 1 -calculate its world inverse bind pose, that is Bj_w^-1 = Bj^-1 Bj-1^-1 ... B0^-1 2 -use the current animation time to LERP the componets of the TQS and come up with an interpolated current pose matrix Cj which should transform from the joints current configuration space to world space. 
Similar to what I did to get the world version of the inverse bind poses, I come up with the joint's world current pose, Cj_w = C0 C1 ... Cj 3 -now that I have world versions of Bj and Cj, I store this joint's world- skinning matrix K_wj = Cj_w Bj_w^-1. The above is roughly implemented like so: - (void)update:(NSTimeInterval)elapsedTime { static double time = 0; time = fmod((time + elapsedTime),1.); uint16_t LERPKeyframeNumber = 60 * time; uint16_t lkeyframeNumber = 0; uint16_t lkeyframeIndex = 0; uint16_t rkeyframeNumber = 0; uint16_t rkeyframeIndex = 0; for (int i = 0; i < aClip.keyframesCount; i++) { uint16_t keyframeNumber = aClip.keyframes[i].number; if (keyframeNumber <= LERPKeyframeNumber) { lkeyframeIndex = i; lkeyframeNumber = keyframeNumber; } else { rkeyframeIndex = i; rkeyframeNumber = keyframeNumber; break; } } double lTime = lkeyframeNumber / 60.; double rTime = rkeyframeNumber / 60.; double blendFactor = (time - lTime) / (rTime - lTime); GLKMatrix4 bindPosePalette[aSkeleton.jointsCount]; GLKMatrix4 currentPosePalette[aSkeleton.jointsCount]; for (int i = 0; i < aSkeleton.jointsCount; i++) { F3DETQSType& lPose = aClip.keyframes[lkeyframeIndex].skeletonPose.jointPoses[i]; F3DETQSType& rPose = aClip.keyframes[rkeyframeIndex].skeletonPose.jointPoses[i]; GLKVector3 LERPTranslation = GLKVector3Lerp(lPose.t, rPose.t, blendFactor); GLKQuaternion SLERPRotation = GLKQuaternionSlerp(lPose.q, rPose.q, blendFactor); GLKVector3 LERPScaling = GLKVector3Lerp(lPose.s, rPose.s, blendFactor); GLKMatrix4 currentTransform = GLKMatrix4MakeWithQuaternion(SLERPRotation); currentTransform = GLKMatrix4Multiply(currentTransform, GLKMatrix4MakeTranslation(LERPTranslation.x, LERPTranslation.y, LERPTranslation.z)); currentTransform = GLKMatrix4Multiply(currentTransform, GLKMatrix4MakeScale(LERPScaling.x, LERPScaling.y, LERPScaling.z)); if (aSkeleton.joints[i].parentIndex == -1) { bindPosePalette[i] = aSkeleton.joints[i].inverseBindTransform; currentPosePalette[i] = currentTransform; } else { bindPosePalette[i] = GLKMatrix4Multiply(aSkeleton.joints[i].inverseBindTransform, bindPosePalette[aSkeleton.joints[i].parentIndex]); currentPosePalette[i] = GLKMatrix4Multiply(currentPosePalette[aSkeleton.joints[i].parentIndex], currentTransform); } aSkeleton.skinningPalette[i] = GLKMatrix4Multiply(currentPosePalette[i], bindPosePalette[i]); } } At this point, I should have my skinning palette. So on each frame in my vertex shader, I do: uniform mat4 modelMatrix; uniform mat4 projectionMatrix; uniform mat3 normalMatrix; uniform mat4 skinningPalette[6]; attribute vec4 position; attribute vec3 normal; attribute vec2 tCoordinates; attribute vec4 jointsWeights; attribute vec4 jointsIndices; varying highp vec2 tCoordinatesVarying; varying highp float lIntensity; void main() { vec3 eyeNormal = normalize(normalMatrix * normal); vec3 lightPosition = vec3(0., 0., 2.); lIntensity = max(0.0, dot(eyeNormal, normalize(lightPosition))); tCoordinatesVarying = tCoordinates; vec4 skinnedVertexPosition = vec4(0.); for (int i = 0; i < 4; i++) { skinnedVertexPosition += jointsWeights[i] * skinningPalette[int(jointsIndices[i])] * position; } gl_Position = projectionMatrix * modelMatrix * skinnedVertexPosition; } The result: The mesh parts that are supposed to animate do animate and follow the expected motion, however, the rotations are messed up in terms of orientations. That is, the mesh is not translated somewhere else or scaled in any way, but the orientations of rotations seem to be off. 
So a few observations: In the above shader notice I actually did not multiply the vertices by the mesh modelMatrix (the one which would take them to model or world or global space, whichever you prefer, since there is no parent to the mesh itself other than "the world") until after skinning. This is contrary to what I implied in the theory: if my skinning matrix takes vertices from model to joint and back to model space, I'd think the vertices should already be premultiplied by the mesh transform. But if I do so, I just get a black screen. As far as exporting the joints from Blender, my python script exports for each armature bone in bind pose, it's matrix in this way: def DFSJointTraversal(file, skeleton, jointList): for joint in jointList: poseJoint = skeleton.pose.bones[joint.name] jointTransform = poseJoint.matrix.inverted() file.write('Joint ' + joint.name + ' Transform {\n') for col in jointTransform.col: file.write('{:9f} {:9f} {:9f} {:9f}\n'.format(col[0], col[1], col[2], col[3])) DFSJointTraversal(file, skeleton, joint.children) file.write('}\n') And for current / keyframe poses (assuming I'm in the right keyframe): def exportAnimations(filepath): # Only one skeleton per scene objList = [object for object in bpy.context.scene.objects if object.type == 'ARMATURE'] if len(objList) == 0: return elif len(objList) > 1: return #raise exception? dialog box? skeleton = objList[0] jointNames = [bone.name for bone in skeleton.data.bones] for action in bpy.data.actions: # One animation clip per action in Blender, named as the action animationClipFilePath = filepath[0 : filepath.rindex('/') + 1] + action.name + ".aClip" file = open(animationClipFilePath, 'w') file.write('target skeleton: ' + skeleton.name + '\n') file.write('joints count: {:d}'.format(len(jointNames)) + '\n') skeleton.animation_data.action = action keyframeNum = max([len(fcurve.keyframe_points) for fcurve in action.fcurves]) keyframes = [] for fcurve in action.fcurves: for keyframe in fcurve.keyframe_points: keyframes.append(keyframe.co[0]) keyframes = set(keyframes) keyframes = [kf for kf in keyframes] keyframes.sort() file.write('keyframes count: {:d}'.format(len(keyframes)) + '\n') for kfIndex in keyframes: bpy.context.scene.frame_set(kfIndex) file.write('keyframe: {:d}\n'.format(int(kfIndex))) for i in range(0, len(skeleton.data.bones)): file.write('joint: {:d}\n'.format(i)) joint = skeleton.pose.bones[i] jointCurrentPoseTransform = joint.matrix translationV = jointCurrentPoseTransform.to_translation() rotationQ = jointCurrentPoseTransform.to_3x3().to_quaternion() scaleV = jointCurrentPoseTransform.to_scale() file.write('T {:9f} {:9f} {:9f}\n'.format(translationV[0], translationV[1], translationV[2])) file.write('Q {:9f} {:9f} {:9f} {:9f}\n'.format(rotationQ[1], rotationQ[2], rotationQ[3], rotationQ[0])) file.write('S {:9f} {:9f} {:9f}\n'.format(scaleV[0], scaleV[1], scaleV[2])) file.write('\n') file.close() Which I believe follow the theory explained at the beginning of my question. But then I checked out Blender's directX .x exporter for reference.. 
and what threw me off was that in the .x script they are exporting bind poses like so (transcribed using the same variable names I used so you can compare): if joint.parent: jointTransform = poseJoint.parent.matrix.inverted() else: jointTransform = Matrix() jointTransform *= poseJoint.matrix and exporting current keyframe poses like this: if joint.parent: jointCurrentPoseTransform = joint.parent.matrix.inverted() else: jointCurrentPoseTransform = Matrix() jointCurrentPoseTransform *= joint.matrix why are they using the parent's transform instead of the joint in question's? isn't the join transform assumed to exist in the context of a parent transform since after all it transforms from this joint's space to its parent's? Why are they concatenating in the same order for both bind poses and keyframe poses? If these two are then supposed to be concatenated with each other to cancel out the change of basis? Anyway, any ideas are appreciated.

    Read the article

  • The Internet of Things Is Really the Internet of People

    - by HCM-Oracle
    By Mark Hurd - Originally Posted on LinkedIn As I speak with CEOs around the world, our conversations invariably come down to this central question: Can we change our corporate cultures and the ways we train and reward our people as rapidly as new technology is changing the work we do, the products we make and how we engage with customers? It’s a critical consideration given today’s pace of disruption, which already is straining traditional management models and HR strategies. Winning companies will bring innovation and vision to their employees and partners by attracting people who will thrive in this emerging world of relentless data, predictive analytics and unlimited what-if scenarios. So, where are we going to find employees who are as familiar with complex data as I am with orderly financial statements and business plans? I’m not just talking about high-end data scientists who most certainly will sit at or near the top of the new decision-making pyramid. Global organizations will need creative and motivated people who will devote their time to manipulating, reviewing, analyzing, sorting and reshaping data to drive business and delight customers. This might seem evident, but my conversations with business people across the globe indicate that only a small number of companies get it. In the past few years, executives have been busy keeping pace with seismic upheavals, including the rise of social customer engagement, the rapid acceleration of product-development cycles and the relentless move to mobile-first. But all of that, I think, is the start of an uphill climb to the top of a roller-coaster. Today, about 10 billion devices across the globe are connected to the Internet. In a couple of years, that number will probably double, and not because we will have bought 10 billion more computers, smart phones and tablets. This unprecedented explosion of Big Data is being triggered by the Internet of Things, which is another way of saying that the numerous intelligent devices touching our everyday lives are all becoming interconnected. Home appliances, food, industrial equipment, pets, pharmaceutical products, pallets, cars, luggage, packaged goods, athletic equipment, even clothing will be streaming data. Some data will provide important information about how to run our businesses and lead healthier lives. Much of it will be extraneous. How does a CEO cope with this unimaginable volume and velocity of data, much less harness it to excite and delight customers? Here are three things CEOs must do to tackle this challenge: 1) Take care of your employees, take care of your customers. Larry Ellison recently noted that the two most important priorities for any CEO today revolve around people: Taking care of your employees and taking care of your customers. Companies in today’s hypercompetitive business environment simply won’t be able to survive unless they’ve got world-class people at all levels of the organization. CEOs must demonstrate a commitment to employees by becoming champions for HR systems that empower every employee to fully understand his or her job, how it ties into the corporate framework, what’s expected of them, what training is available, and how they can use an embedded social network to communicate, collaborate and excel. Over the next several years, many of the world’s top industrialized economies will see a turnover in the workforce on an unprecedented scale. 
Across the United States, Europe, China and Japan, the “baby boomer” generation will be retiring and, by 2020, we’ll see turnovers in those regions ranging from 10 to 30 percent. How will companies replace all that brainpower, experience and know-how? How will CEOs perpetuate the best elements of their corporate cultures in the midst of this profound turnover? The challenge will be daunting, but it can be met with world-class HR technology. As companies begin replacing up to 30 percent of their workforce, they will need thousands of new types of data-native workers to exploit the Internet of Things in the service of the Internet of People. The shift in corporate mindset here can’t be overstated. The CEO has to be at the forefront of this new way of recruiting, training, motivating, aligning and developing truly 21-century talent. 2) Start thinking today about the Internet of People. Some forward-looking companies have begun pursuing the “democratization of data.” This allows more people within a company greater access to data that can help them make better decisions, move more quickly and keep pace with the changing interests and demands of their customers. As a result, we’ve seen organizations flatten out, growing numbers of well-informed people authorized to make decisions without corporate approval and a movement of engagement away from headquarters to the point of contact with the customer. These are profound changes, and I’m a huge proponent. As I think about what the next few years will bring as companies become deluged with unprecedented streams of data, I’m convinced that we’ll need dramatically different organizational structures, decision-making models, risk-management profiles and reward systems. For example, if a car company’s marketing department mines incoming data to determine that customers are shifting rapidly toward neon-green models, how many layers of approval, review, analysis and sign-off will be needed before the factory starts cranking out more neon-green cars? Will we continue to have organizations where too many people are empowered to say “No” and too few are allowed to say “Yes”? If so, how will those companies be able to compete in a world in which customers have more choices, instant access to more information and less loyalty than ever before? That’s why I think CEOs need to begin thinking about this problem right now, not in a year or two when competitors are already reshaping their organizations to match the marketplace’s new realities. 3) Partner with universities to help create a new type of highly skilled workers. Several years ago, universities introduced new undergraduate as well as graduate-level programs in analytics and informatics as the business need for deeper insights into the booming world of data began to explode. Today, as the growth rate of data continues to soar, we know that the Internet of Things will only intensify that growth. Moreover, as Big Data fuels insights that can be shaped into products and services that generate revenue, the demand for data scientists and data specialists will go on unabated. Beyond that top-level expertise, companies are going to need data-native thinkers at all levels of the organization. Where will this new type of worker come from? I think it’s incumbent on the business community to collaborate with universities to develop new curricula designed to turn out graduates who can capitalize on the data-driven world that the Internet of Things is surely going to create. 
These new workers will create opportunities to help their companies in fields as diverse as product design, customer service, marketing, manufacturing and distribution. They will become innovative leaders in fashioning an entirely new type of workforce and organizational structure optimized to fully exploit the Internet of Things so that it becomes a high-value enabler of the Internet of People. Mark Hurd is President of Oracle Corporation and a member of the company's Board of Directors. He joined Oracle in 2010, bringing more than 30 years of technology industry leadership, computer hardware expertise, and executive management experience to his role with the company. As President, Mr. Hurd oversees the corporate direction and strategy for Oracle's global field operations, including marketing, sales, consulting, alliances and channels, and support. He focuses on strategy, leadership, innovation, and customers.

    Read the article

  • AWS EC2: How to determine whether my EC2/scalr AMI was hacked? What to do to secure it?

    - by Niro
    I received a notification from Amazon that my instance tried to hack another server. There was no additional information besides a log dump: Original report: Destination IPs: Destination Ports: Destination URLs: Abuse Time: Sun May 16 10:13:00 UTC 2010 NTP: N Log Extract: External 184.xxx.yyy.zzz, 11.842.000 packets/300s (39.473 packets/s), 5 flows/300s (0 flows/s), 0,320 GByte/300s (8 MBit/s) (184.xxx.yyy.zzz is my instance IP) How can I tell whether someone has penetrated my instance? What are the steps I should take to make sure my instance is clean and safe to use? Is there some intrusion detection technique or log that I can use? Any information is highly appreciated.
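
    A minimal, non-exhaustive sketch of first-pass checks; none of these prove the instance is clean, and the usual advice for a suspected compromise is to snapshot the volumes for forensics and rebuild from a known-good AMI rather than keep trusting the box.

      last -a                                     # recent logins and their source IPs
      ps auxww                                    # unexpected processes
      netstat -tulpn                              # listening ports and the processes behind them
      crontab -l && ls /etc/cron*                 # persistence via cron jobs
      find /etc /usr/bin /usr/sbin -type f -mtime -3 2>/dev/null   # recently modified system files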

    Read the article

  • call python with system() in R to run a python script emulating the python console

    - by Yihui
    I want to pass a chunk of Python code to Python in R with something like system('python ...'), and I'm wondering if there is an easy way to emulate the python console in this case. For example, suppose the code is "print 'hello world'", how can I get the output like this in R? >>> print 'hello world' hello world This only shows the output: > system("python -c 'print \"hello world\"'") hello world Thanks! BTW, I asked in r-help but have not got a response yet (if I do, I'll post the answer here).
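
    One possibility, sketched minimally (fake_console.py is a hypothetical helper, not a standard Python flag; it assumes Python 2 to match the print statement and handles one statement per line, not multi-line blocks): echo each incoming line with a ">>> " prompt and then execute it.

      # fake_console.py
      import sys
      import code

      interp = code.InteractiveInterpreter()
      for line in sys.stdin:
          sys.stdout.write(">>> " + line)   # echo the statement with a prompt
          interp.runsource(line)            # then run it and let its output print

    From R this could then be called as system("echo \"print 'hello world'\" | python fake_console.py").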

    Read the article

  • Feedback on "market manipulation", a peripheral game mechanic for a satirical MMO

    - by BerndBrot
    This question asks for feedback on a specific game-mechanic. Since there is not one right feedback on a game mechanic, I tried to provide enough context and guidelines to still make it possible for users to rate answers and to accept an answer as the best answer (following these criteria from Writer.SE's meta website). Please comment if you have any suggestions on how I could improve the question in that regard. So, let's begin with the game itself and some of its elements which are relevant for this question. Context I'm working on a satirical, text-based multiplayer adventure and role-playing game set in modern-day London. The game resolves around the concept of sin and features a myriad of (venomous) allusions to all the things that go wrong in this world. Players can choose between character classes like bullshit artist (consultant), bankster, lawyer, mobster, celebrity, politician, etc. In order to complete the game, the player has to live so sinfully with regard to any of the seven deadly sins that a demon is willing to offer them a contract of sponsorship. On their quest to live a sinful live, characters explore more and more locations of modern-day London (on a GoogleMap), fight "monsters" like insurance sales agents or Jehovah's Witnesses, and complete quests, like building a PowerPoint presentation out of marketing buzz words or keeping up a number of substance abuse effects in order to progress on the gluttony path. Battles are turn based with both combatants having a deck of cards, with which they try to make their enemy give in to temptations of all sorts. Tempted enemies sometimes become contacts (an item drop mechanic), which can be exploited for various benefits, depending on their area of influence (finance, underworld, bureaucracy, etc.), level of influence, and kind of sway that the player has over them (bribed, seduced, threatened, etc.) Once a contract has been exploited, the player loses that contact. Most actions require turns. Turns are limited, but refill each day. Criteria A number of peripheral game mechanics are supposed to represent real world abuses and mischief in a humorous way integrate real world data and events to strengthen the feeling of relevance of the game's humor with regard to real world problems add fun ways of interacting with other players add ways for players to express themselves through game-play Market manipulation is one such peripheral game mechanic and should fulfill all of these goals. Market manipulation This is my initial design of the mechanic: Players can enter the London Stock Exchange (LSE) (without paying a turn) LSE displays the stock prices of a number of companies in industries like weapons or tobacco as well as some derivatives based on wheat and corn. The stock prices are calculated based on the actual stock prices of these companies and derivatives (in real time) any market manipulations that were conducted by the players any market corrections of the system Players can buy and sell shares with cash, a resource in the game, at current in-game market value (without paying a turn). Players can manipulate the market, i.e. let the price of a share either rise or fall, by some amount, over a certain period of time. Manipulating the market requires 1 turn A contact in the financial sector (see above). The higher the level of influence of the contact, the stronger the effect of the manipulation on the stock price, and/or the shorter it takes for the manipulation to manifest itself. 
Market manipulation also adds a crime to the player's record. (There are a multitude of ways to take care of that, but it is still another "cost" of market manipulations.) The system continuously corrects market manipulations by letting the in-game prices converge towards their real world counterparts at a rate of 2% of the difference between the two per hour. Because of this market correction mechanism, pushing up prices (and screwing down prices) becomes increasingly difficult the higher (lower) the price already is. Whenever food prices reach a certain level, in-game stories are posted about hunger catastrophes happening somewhere far, far away (maybe with links to real world news stories). Whenever a player sells a certain number of shares with a sufficiently high margin, they are mentioned in that day's in-game financial news. Since the number of stock options is very limited, players will inevitably collide in their efforts to manipulate the market in their favor. Hopefully, it will also be a fun side-arena for guilds and covenants to fight each other. Question(s) What do you think of this mechanism given the criteria for peripheral game mechanics that I specified for my game? Do you have any ideas how the mechanic could be improved with regard to these criteria (or otherwise)? Could it be improved to allow for more expressive game-play, or involve an allusion to some other real world madness (like short selling, leveraging, or some other banking magic)? Are there any game-theoretic problems with this mechanic, like maybe certain dominant individual strategies that, collectively, lead to every player profiting and thus eliminating the idea of market manipulation PVP? Also, if you like (or dislike) this question, feel free to participate in the discussion on GDSE meta: "Should we be more lax with regard to SE's question/answer format to make game design questions possible?"
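
    For what it's worth, the 2%-per-hour correction rule can be stated in one line; a minimal sketch (plain Python, made-up prices) showing that manipulations decay geometrically toward the real-world quote rather than snapping back at once:

      def corrected_price(in_game, real_world, rate=0.02):
          # each hour the in-game price moves 2% of the way toward the real price
          return in_game + rate * (real_world - in_game)

      price, real = 140.0, 100.0            # manipulated vs. real-world price
      for hour in range(1, 4):
          price = corrected_price(price, real)
          print(hour, round(price, 2))      # 139.2, 138.42, 137.65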

    Read the article

  • Big Data – Evolution of Big Data – Day 3 of 21

    - by Pinal Dave
    In yesterday’s blog post we answered what is the Big Data. Today we will understand why and how the evolution of Big Data has happened. Though the answer is very simple, I would like to tell it in the form of a history lesson. Data in Flat File In earlier days data was stored in the flat file and there was no structure in the flat file.  If any data has to be retrieved from the flat file it was a project by itself. There was no possibility of retrieving the data efficiently and data integrity has been just a term discussed without any modeling or structure around. Database residing in the flat file had more issues than we would like to discuss in today’s world. It was more like a nightmare when there was any data processing involved in the application. Though, applications developed at that time were also not that advanced the need of the data was always there and there was always need of proper data management. Edgar F Codd and 12 Rules Edgar Frank Codd was a British computer scientist who, while working for IBM, invented the relational model for database management, the theoretical basis for relational databases. He presented 12 rules for the Relational Database and suddenly the chaotic world of the database seems to see discipline in the rules. Relational Database was a promising land for all the unstructured database users. Relational Database brought into the relationship between data as well improved the performance of the data retrieval. Database world had immediately seen a major transformation and every single vendors and database users suddenly started to adopt the relational database models. Relational Database Management Systems Since Edgar F Codd proposed 12 rules for the RBDMS there were many different vendors who started them to build applications and tools to support the relationship between database. This was indeed a learning curve for many of the developer who had never worked before with the modeling of the database. However, as time passed by pretty much everybody accepted the relationship of the database and started to evolve product which performs its best with the boundaries of the RDBMS concepts. This was the best era for the databases and it gave the world extreme experts as well as some of the best products. The Entity Relationship model was also evolved at the same time. In software engineering, an Entity–relationship model (ER model) is a data model for describing a database in an abstract way. Enormous Data Growth Well, everything was going fine with the RDBMS in the database world. As there were no major challenges the adoption of the RDBMS applications and tools was pretty much universal. There was a race at times to make the developer’s life much easier with the RDBMS management tools. Due to the extreme popularity and easy to use system pretty much every data was stored in the RDBMS system. New age applications were built and social media took the world by the storm. Every organizations was feeling pressure to provide the best experience for their users based the data they had with them. While this was all going on at the same time data was growing pretty much every organization and application. Data Warehousing The enormous data growth now presented a big challenge for the organizations who wanted to build intelligent systems based on the data and provide near real time superior user experience to their customers. Various organizations immediately start building data warehousing solutions where the data was stored and processed. 
The trend of the business intelligence becomes the need of everyday. Data was received from the transaction system and overnight was processed to build intelligent reports from it. Though this is a great solution it has its own set of challenges. The relational database model and data warehousing concepts are all built with keeping traditional relational database modeling in the mind and it still has many challenges when unstructured data was present. Interesting Challenge Every organization had expertise to manage structured data but the world had already changed to unstructured data. There was intelligence in the videos, photos, SMS, text, social media messages and various other data sources. All of these needed to now bring to a single platform and build a uniform system which does what businesses need. The way we do business has also been changed. There was a time when user only got the features what technology supported, however, now users ask for the feature and technology is built to support the same. The need of the real time intelligence from the fast paced data flow is now becoming a necessity. Large amount (Volume) of difference (Variety) of high speed data (Velocity) is the properties of the data. The traditional database system has limits to resolve the challenges this new kind of the data presents. Hence the need of the Big Data Science. We need innovation in how we handle and manage data. We need creative ways to capture data and present to users. Big Data is Reality! Tomorrow In tomorrow’s blog post we will try to answer discuss Basics of Big Data Architecture. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Do you develop with security in mind?

    - by MattyD
    I was listening to a podcast on Security Now and they mentioned how a lot of the security problems found in Flash exist because, when Flash was first developed, it wasn't built with security in mind since it didn't need to be; thus Flash has major security flaws in its design, etc. I know best practices state that you should build secure first. Some people or companies don't always follow 'best practice'... My question is: do you develop to be secure, or do you build with all the desired functionality and then alter the code to be secure (whatever the project may be)? (I realise that this question could be a possible duplicate of Do you actively think about security when coding?, but it is different in that it asks about the actual process of building and designing the software/application.)

    Read the article

  • Windows Azure Recipe: Enterprise LOBs

    - by Clint Edmonson
    Enterprises are more and more dependent on their specialized internal Line of Business (LOB) applications than ever before. Naturally, the more software they leverage on-premises, the more infrastructure they need manage. It’s frequently the case that our customers simply can’t scale up their hardware purchases and operational staff as fast as internal demand for software requires. The result is that getting new or enhanced applications in the hands of business users becomes slower and more expensive every day. Being able to quickly deliver applications in a rapidly changing business environment while maintaining high standards of corporate security is a challenge that can be met right now by moving enterprise LOBs out into the cloud and leveraging Azure’s Access Control services. In fact, we’re seeing many of our customers (both large and small) see huge benefits from moving their web based business applications such as corporate help desks, expense tracking, travel portals, timesheets, and more to Windows Azure. Drivers Cost Reduction Time to market Security Solution Here’s a sketch of how many Windows Azure Enterprise LOBs are being architected and deployed: Ingredients Web Role – this will host the core of the application. Each web role is a virtual machine hosting an application written in ASP.NET (or optionally php, or node.js). The number of web roles can be scaled up or down as needed to handle peak and non-peak traffic loads. Many Java based applications are also being deployed to Windows Azure with a little more effort. Database – every modern web application needs to store data. SQL Azure databases look and act exactly like their on-premise siblings but are fault tolerant and have data redundancy built in. Access Control – this service is necessary to establish federated identity between the cloud hosted application and an enterprise’s corporate network. It works in conjunction with a secure token service (STS) that is hosted on-premises to establish the corporate user’s identity and credentials. The source code for an on-premises STS is provided in the Windows Azure training kit and merely needs to be customized for the corporate environment and published on a publicly accessible corporate web site. Once set up, corporate users see a near seamless single sign-on experience. Reporting – businesses live and die by their reports and SQL Azure Reporting, based on SQL Server Reporting 2008 R2, can serve up reports with tables, charts, maps, gauges, and more. These reports can be accessed from the Windows Azure Portal, through a web browser, or directly from applications. Service Bus (optional) – if deep integration with other applications and systems is needed, the service bus is the answer. It enables secure service layer communication between applications hosted behind firewalls in on-premises or partner datacenters and applications hosted inside Windows Azure. The Service Bus provides the ability to securely expose just the information and services that are necessary to create a simpler, more secure architecture than opening up a full blown VPN. Data Sync (optional) – in cases where the data stored in the cloud needs to be shared internally, establishing a secure one-way or two-way data-sync connection between the on-premises and off-premises databases is a perfect option. 
It can be very granular, allowing us to specify exactly what tables and columns to synchronize, setup filters to sync only a subset of rows, set the conflict resolution policy for two-way sync, and specify how frequently data should be synchronized Training Labs These links point to online Windows Azure training labs where you can learn more about the individual ingredients described above. (Note: The entire Windows Azure Training Kit can also be downloaded for offline use.) Windows Azure (16 labs) Windows Azure is an internet-scale cloud computing and services platform hosted in Microsoft data centers, which provides an operating system and a set of developer services which can be used individually or together. It gives developers the choice to build web applications; applications running on connected devices, PCs, or servers; or hybrid solutions offering the best of both worlds. New or enhanced applications can be built using existing skills with the Visual Studio development environment and the .NET Framework. With its standards-based and interoperable approach, the services platform supports multiple internet protocols, including HTTP, REST, SOAP, and plain XML SQL Azure (7 labs) Microsoft SQL Azure delivers on the Microsoft Data Platform vision of extending the SQL Server capabilities to the cloud as web-based services, enabling you to store structured, semi-structured, and unstructured data. Windows Azure Services (9 labs) As applications collaborate across organizational boundaries, ensuring secure transactions across disparate security domains is crucial but difficult to implement. Windows Azure Services provides hosted authentication and access control using powerful, secure, standards-based infrastructure. See my Windows Azure Resource Guide for more guidance on how to get started, including links web portals, training kits, samples, and blogs related to Windows Azure.

    Read the article

  • Could not locate compojure in classpath

    - by Xian
    I am trying the various Getting started examples and I can get a basic hello world example working with basic HTML in the route as such (ns hello-world (:use compojure.core ring.adapter.jetty) (:require [compojure.route :as route])) (defroutes example (GET "/" [] "<h1>Hello World Wide Web!</h1>")) (run-jetty example {:port 8080}) But when I attempt to use the html helpers like so (ns hello-world (:use compojure ring.adapter.jetty) (:require [compojure.route :as route])) (defroutes example (GET "/" (html [:h1 "Hello World"]))) (run-jetty example {:port 8080}) Then I get the following error [null] Exception in thread "main" java.io.FileNotFoundException: Could not locate compojure__init.class or compojure.clj on classpath: (core.clj:1)
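
    A minimal sketch of the usual fix, assuming Compojure 0.4+ with Hiccup added as a dependency: the old monolithic compojure namespace (which bundled html) no longer exists, so :use compojure.core as in the first, working example and pull html in from hiccup.core.

      (ns hello-world
        (:use compojure.core
              ring.adapter.jetty
              hiccup.core)
        (:require [compojure.route :as route]))

      (defroutes example
        (GET "/" [] (html [:h1 "Hello World"])))

      (run-jetty example {:port 8080})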

    Read the article

  • How to prevent access to website without SSL connection?

    - by CraigJ
    I have a website that has an SSL certificate installed, so that if I access the website using https instead of http I will be able to connect using a secure connection. However, I have noticed that I can still access the website non-securely, i.e. by using http instead of https. How can I prevent people from using the website in a non-secure manner? If I have a directory on the website, e.g. samples/, can I prevent non-secure connections to just this directory?
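
    A minimal sketch, assuming Apache with mod_rewrite enabled: a .htaccess in the document root (or only in samples/ to cover just that directory) can bounce every plain-http request to the same URL over https.

      RewriteEngine On
      RewriteCond %{HTTPS} !=on
      RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]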

    Read the article

  • How to print a JTable header in two lines?

    - by Eruls
    The program is to print a JTable, and the code used is: JTable jt = new JTable(); MessageFormat headerFormat = new MessageFormat("My World Tomorrow"); MessageFormat footerFormat = new MessageFormat("Page {0}"); jt.print(JTable.PrintMode.FIT_WIDTH, headerFormat, footerFormat); The question is: how to print the header in two lines, that is My World Tomorrow Tried the following solutions: new MessageFormat("My world \n Tomorrow"); new MessageFormat("My world \r\n Tomorrow"); new MessageFormat("My world" + System.getProperty("line.separator") + "Tomorrow"); Nothing works.

    Read the article

  • Using RegEx's in Multi-Channel Funnels in Google Analytics

    - by Rob H
    For some reason, I can't get my multi-channel funnel which utilizes RegEx's in the path steps to function -- it keeps coming back with no data. There are a few variables which may be holding things up, but I can't figure out the origin of the problem, nor a solution. Here's the situation: The funnel is tracking conversions, defined as when a user completes 4 steps to signup Steps are not "required" Default URL is set to https://example.com There is a 302 redirect set up on our site that leads from http://example.com to https://example.com Within the funnel, steps switch from non-secure pages (unless browser is set to secure browsing), to secure pages once the user moves from the landing page to the second page of the sign-up process (account placeholder has been created) URL at that point contains the variable of publisher number within (but not at the end) the URL My RegEx's are all properly written as tested on rubular.com

    Read the article

  • label in my table cell

    - by ven in Iphone world
    Hi, this is lak. Thank you for your fast feedback, but that did not work for me. I am using labels in a table: UILabel *label1 = (UILabel *) [cell viewWithTag:1]; label1.backgroundColor = [UIColor clearColor]; label1.text=aStation.station_name; label1.textColor = [UIColor colorWithRed:0.76 green:0.21 blue:0.07 alpha:1.0]; [label1 setFont:[UIFont fontWithName:@"Trebuchet MS" size:15]]; For this type of label I want to limit the number of characters. Hope I will get an answer.

    Read the article

  • Vertically Merge Multiple Tables in MySQL by Joint Primary Key

    - by world
    Hello, I'll attempt to make my question as clear as possible. I'm fairly inexperienced with SQL and only know the really basic queries. In order to get a better idea I've been reading the MySQL manual for the past few days, but I couldn't really find a concrete solution and my needs are quite specific. I've got 3 MySQL MyISAM tables: table1, table2 and table3. Each table has an ID column (ID, ID2, ID3 respectively) and different data columns. For example table1 has [ID, Name, Birthday, Status, ...] columns, table2 has [ID2, Country, Zip, ...], table3 has [ID3, Source, Phone, ...], you get the idea. The ID, ID2, ID3 columns are common to all three tables... if there's an ID value in table1 it will also appear in table2 and table3. The number of rows in these tables is identical, about 10m rows in each table. What I'd like to do is create a new table that contains (most of) the columns of all three tables and merge them into it. The dates, for instance, must be converted because right now they're in VARCHAR YYYYMMDD format. Reading the MySQL manual I figured STR_TO_DATE() would do the job, but I don't know how to write the query itself in the first place, so I have no idea how to integrate the date conversion. So basically, after I create the new table (which I do know how to do), how can I merge the three tables into it, integrating the date conversion into the query?
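
    A minimal sketch, assuming the column names from the question and that the new table (called merged here) has already been created with a proper DATE column: join the three source tables on their shared id and convert the VARCHAR YYYYMMDD values with STR_TO_DATE() during the copy.

      INSERT INTO merged (id, name, birthday, status, country, zip, source, phone)
      SELECT t1.ID,
             t1.Name,
             STR_TO_DATE(t1.Birthday, '%Y%m%d'),
             t1.Status,
             t2.Country,
             t2.Zip,
             t3.Source,
             t3.Phone
      FROM table1 AS t1
      JOIN table2 AS t2 ON t2.ID2 = t1.ID
      JOIN table3 AS t3 ON t3.ID3 = t1.ID;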

    Read the article

  • Can not install Ubuntu 12.04 or 12.10 on Toshiba qosmio x870. Please help!

    - by Mighty
    I have a new Toshiba Qosmio X870 and for the past week I have been trying to install Ubuntu 12.04 from a USB stick and live CD without success. I keep on getting this error: Boot failure: a proper digital signature was not found. One or more files on the selected boot device was rejected by the Secure Boot feature. I even tried installing Ubuntu with the Windows installer. After installation, when I reboot the PC, first I see an error that points to: \ubuntu\winboot\wubildr.mbr Status: 0xc000007b Info: The OS couldn't be loaded because a required file is missing or contains errors. When I restart again, the previous error doesn't show up and I see both Windows 8 and Ubuntu (happy that I was successful), but when I click on Ubuntu, it flags an error. This is the first time I'm having a Secure Boot-capable PC. What will be the danger in disabling Secure Boot? I'll be happy if I can get assistance from anyone.

    Read the article
